How to use snapmanager to implement Disaster Recovery for Oracle database
Jephe Wu - http://linuxtechres.blogspot.com
Objective: use SnapManager for Oracle to back up the production database to a DR site and restore from there
Environment: Oracle standalone database 11gR2 11.2.0.3 running in an Oracle VM guest on Oracle VM Server 3.1.1, using centralized NetApp storage for the Oracle binaries and database files.
Steps:
1. create the corresponding volumes on the DR NetApp for each volume of the primary database
First inspect the primary filer:
ssh -n netapp 'df -Ah'
ssh -n netapp 'df -h'
ssh -n netapp 'vol status -v volname'
ssh -n netapp 'qtree status'
ssh -n netapp 'rdfile /etc/rc'
ssh -n netapp 'rdfile /etc/exports'
ssh -n netapp 'rdfile /etc/quotas'
then, on the DR filer, create each volume:
vol create volname aggr1 100g
snap reserve volname 0
snap sched volname 0 0 0
For the Oracle binary volume, create a default volume without disabling the snapshot reserve and schedule.
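A minimal sketch of this step for a set of volumes, run from an admin host over ssh; the DR filer name (drnetapp), volume names, aggregate, and sizes here are illustrative:

for v in oradata oralog orabin; do
    ssh -n drnetapp "vol create $v aggr1 100g"    # create the DR volume
    ssh -n drnetapp "snap reserve $v 0"           # no snapshot reserve (skip for the binary volume)
    ssh -n drnetapp "snap sched $v 0 0 0"         # disable the snapshot schedule (skip for the binary volume)
done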
2. configure snapmirror
If the SnapMirror relationship is uninitialized, the destination volume needs to be in restricted mode first:
vol restrict volname
snapmirror initialize -S src:vol dst:vol
snapmirror resync -S src:vol dst:vol
snapmirror update dstvol
to break the mirror:
snapmirror quiesce dstvolname
snapmirror break dstvolname
3. recover database
start up the DR VM
restore the control file volume snapshot  # use snap restore, see the sketch below
restore the corresponding snapshot for the datafile volume
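A sketch of the snapshot restores on the DR filer; the SMO snapshot name and the volume names here are illustrative placeholders:

dr> snap list dstvol_ctrl                         # pick the SMO backup snapshot to restore
dr> snap restore -s smo_backup_snapname dstvol_ctrl
dr> snap restore -s smo_backup_snapname dstvol_data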
rman target /
rman> startup nomount;
rman> restore controlfile from '<any one of the backup controlfile names from the snapshot>';
rman> alter database mount;
rman> recover database;
rman> alter database open resetlogs;
4. for the smo postscript to work
enable http and https admin on the DR storage so that the postscript that resyncs SnapMirror will work
on the DR netapp:
disable automatic SnapMirror (in /etc/snapmirror.conf, use five dashes; see the sample entry after this step)
ssh into netapp
dr> options httpd.admin.enable on
dr> options httpd.enable on
use 'tcpdump -nn host 10.1.30.2' to verify connectivity if needed
snapdrive config list
snapdrive config set root filername   # supply the DR filer name; snapdrive prompts for the root password
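A sample /etc/snapmirror.conf entry on the DR filer with the schedule disabled; the filer and volume names are illustrative. The five dashes are the arguments field plus the four schedule fields (minute hour day-of-month day-of-week):

prodfiler:oradata drfiler:oradata - - - - -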
How to separate datafiles from all other Oracle database files for SnapManager vol-based fast restore
Jephe Wu - http://linuxtechres.blogspot.com
Objective: make an Oracle RAC database 11gR2 11.2.0.3 64bit eligible for fast restore
Environment: Oracle 11.2.0.3 RAC database, two node VMs on an Oracle VM Server 3.1.1 hypervisor, using centralized NetApp storage for the following volumes: /u01 for binaries; /u02 for datafiles, password file, pfile, OCR/voting disk, controlfiles, and online and archived redo logs
Concept: in order to be eligible for fast restore, we need to separate the datafile volume so that /u02 holds only datafiles.
Steps:
1. pfile/spfile and password file
In Oracle RAC, we keep a shared spfile and point each instance's default pfile at it:
Assuming database name is racdb, 2 instance names are racdb1 and racdb2
su - oracle
cd $ORACLE_HOME/dbs
more initINSTANCENAME.ora
oracle@racdb1:/u01/app/oracle/product/11.2.0/dbhome_1/dbs$ more initracdb1.ora
SPFILE='/u06/spfile/spfileracdb.ora'
the password file is a symbolic link:
lrwxrwxrwx 1 oracle oinstall 28 Mar 14 16:57 orapwracdb1 -> /u06/passwdfile/orapwracdb
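A minimal sketch of relocating these two files for instance racdb1, assuming they currently live under $ORACLE_HOME/dbs; the /u06 paths follow the example above:

su - oracle
cd $ORACLE_HOME/dbs
cp spfileracdb.ora /u06/spfile/spfileracdb.ora                # move the shared spfile to the shared volume
echo "SPFILE='/u06/spfile/spfileracdb.ora'" > initracdb1.ora  # default pfile now points to it
mv orapwracdb1 /u06/passwdfile/orapwracdb                     # move the password file
ln -s /u06/passwdfile/orapwracdb orapwracdb1                  # replace it with a symbolic link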
2. control files
sqlplus / as sysdba
sql> create pfile='/tmp/pfile' from spfile;
sql> exit
edit /tmp/pfile and update the control_files line with the new locations
sql> shutdown immediate;  # on both instances
copy the existing controlfiles to the new locations
sqlplus / as sysdba
sql> startup mount;
sql> create spfile='/u06/spfile/spfileracdb.ora' from pfile='/tmp/pfile';
sql> alter database open;
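A sketch of the control_files change, assuming the controlfiles move from /u02 to /u06; the exact paths are illustrative:

# line to change in /tmp/pfile:
*.control_files='/u06/oractl/racdb/control01.ctl','/u06/oractl/racdb/control02.ctl'
# with both instances shut down, copy the current controlfile to the new locations:
cp /u02/oractl/racdb/control01.ctl /u06/oractl/racdb/control01.ctl
cp /u02/oractl/racdb/control01.ctl /u06/oractl/racdb/control02.ctl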
3. online redo log file
offline method:
sqlplus / as sysdba
sql> shutdown immediate;
sql> exit
copy the required online redo logs to the new locations
sqlplus / as sysdba
sql> startup mount;
sql> alter database rename file 'old location in old controlfile' to 'new location';
...
e.g.
alter database rename file '/u03/oralog/racdb/redo02.log' to '/u04/oralog/racdb/redo02.log';
alter database rename file '/u03/oralog/racdb/redo03.log' to '/u04/oralog/racdb/redo03.log';
sql> alter database open;
sql> select member from v$logfile;
online method:
ALTER DATABASE ADD LOGFILE THREAD 1
GROUP 5 ('/u04/oralog/NRMAPS0/redo05.log') size 100M,
GROUP 6 ('/u04/oralog/NRMAPS0/redo06.log') size 100M;
ALTER DATABASE ADD LOGFILE THREAD 2
GROUP 7 ('/u04/oralog/NRMAPS0/redo07.log') size 100M,
GROUP 8 ('/u04/oralog/NRMAPS0/redo08.log') size 100M;
sql> alter system archive log current;
sql> alter system switch logfile;
sql> alter database drop logfile group 1;  -- a group can only be dropped once it is INACTIVE; see the check below
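A quick check before dropping: keep switching logfiles until the old groups no longer show CURRENT or ACTIVE (the group number here is illustrative):

sql> select group#, thread#, status from v$log;
sql> alter database drop logfile group 2;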
4. ocr/votedisk
/u01/app/11.2.0/grid/ocrcheck
crsctl query css votedisk
oracle@db02:/u02$ more /etc/oracle/ocr.loc
ocrconfig_loc=/u02/crscfg/ocr
local_only=FALSE
===OCR===
As the root user on racdb1, run:
touch /u06/crscfg/ocrdisk
chown root:oinstall /u06/crscfg/ocrdisk
chmod 640 /u06/crscfg/ocrdisk
/u01/app/11.2.0/grid/bin/ocrconfig -showbackup
/u01/app/11.2.0/grid/bin/ocrconfig -manualbackup
root@db01:/tmp# /u01/app/11.2.0/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
/u01/app/11.2.0/grid/bin/ocrconfig -add /u06/crscfg/ocrdisk
/u01/app/11.2.0/grid/bin/ocrcheck
/u01/app/11.2.0/grid/bin/ocrconfig -delete /u02/crscfg/ocr
/u01/app/11.2.0/grid/bin/ocrcheck
===Votedisk===
root@db01:/tmp# /u01/app/11.2.0/grid/bin/crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 10ab40543f574f1abf3a5f3135fb3532 (/u02/crscfg/vdsk) []
Located 1 voting disk(s).
/u01/app/11.2.0/grid/bin/crsctl add css votedisk /u06/crscfg/votedisk
/u01/app/11.2.0/grid/bin/crsctl delete css votedisk /u02/crscfg/vdsk
Useful commands:
show parameter control;
show parameter pfile;
orapwd file=orapwTEST01 password=Welc0me1 ignorecase=n
select * from v$pwfile_users;
5. check CRS status
su - oracle
. grid_env
crs_stat -t
or
crsctl stat res -t
How to mount Oracle VM guest image and fsck it
Jephe Wu - http://linuxtechres.blogspot.com
Objective: modify and fix Oracle VM guest OS image file
Environment: Oracle VM server 3.1.1, Oracle Linux 5.8 64bit OS image
Steps:
1. mount system image as follows
[root@ovms03 db02]# file /OVS/Repositories/0004fb00000300009471703af52bc769/VirtualDisks/0004fb00001200006ff9f87b9279c6dd.img
/OVS/Repositories/0004fb00000300009471703af52bc769/VirtualDisks/0004fb00001200006ff9f87b9279c6dd.img: x86 boot sector; partition 1: ID=0x83, active, starthead 0, startsector 2048, 208896 sectors; partition 2: ID=0x83, starthead 0, startsector 210944, 24954880 sectors, code offset 0x48
Note: there are two partitions inside this image file; the mount offset below is the partition's start sector multiplied by the 512-byte sector size.
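If you prefer reading the partition table directly instead of parsing the file(1) output, a quick sketch (the image filename is shortened here for readability):

fdisk -lu disk.img    # -u lists start sectors; mount offset = startsector * 512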
[root@ovms03 db02]# mount -o offset=$((512*210944)) /OVS/Repositories/0004fb00000300009471703af52bc769/VirtualDisks/0004fb00001200006ff9f87b9279c6dd.img /mnt/a
[root@ovms03 db02]# mount -o offset=$((512*2048)) /OVS/Repositories/0004fb00000300009471703af52bc769/VirtualDisks/0004fb00001200006ff9f87b9279c6dd.img /mnt/a/boot/
2. chroot to it and modify accordingly
cd /mnt/a
chroot .
chkconfig --list
vi /etc/hosts
umount /mnt/a/boot
umount /mnt/a
3. fsck image file if required.
[root@ovms03 db02]# losetup /dev/loop0 /OVS/Repositories/0004fb00000300009471703af52bc769/VirtualDisks/0004fb00001200006ff9f87b9279c6dd.img -o $((512*210944))
[root@ovms03 goepsdb02]# fsck /dev/loop0
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
/: clean, 147679/2985984 files, 955713/3119360 blocks
[root@ovms03 goepsdb02]# losetup -d /dev/loop0
Understanding and configuring Percona server 5.5 for high server performance
Jephe Wu - http://linuxtechres.blogspot.com
Objective: understanding and configuring Percona server 5.5 for high server performance
Environment: CentOS 6.3 64bit, Percona server 5.5
Part I: the most important parameters for tuning mysql server performance
• innodb_buffer_pool_size
• innodb_log_file_size
1. for innodb_buffer_pool_size
size it properly:
-----------------
You should probably set it to at least 80% of the total memory the server can see.
In fact, if the server is a dedicated MySQL server with no nightly backup job scheduled, you might size it so that only about 4G is left for the
Linux OS; on a dedicated 32G server, for example, that means a buffer pool of roughly 28G.
Monitoring for usage:
--------------------
Innodb_buffer_pool_pages_total - The total size of the buffer pool, in pages.
Innodb_buffer_pool_pages_dirty / Innodb_buffer_pool_pages_total - the percentage of dirty pages
Innodb_buffer_pool_pages_data - The number of pages containing data (dirty or clean).
buffer pool page size - always 16k per page:
---------------------
[root@db03 ~]# echo "20000*1024/16" |bc
1280000
[root@db03 ~]# grep innodb_buffer_pool_size /etc/my.cnf
innodb_buffer_pool_size=20000M
---------------------
When InnoDB flushes pages to disk:
-----------------------------
a. LRU list flush
If there are no free pages to hold data read from disk, InnoDB scans the LRU list and flushes the least recently used pages.
b. Flush list
Used when the percentage of dirty pages reaches innodb_max_dirty_pages_pct; InnoDB then writes dirty pages from the buffer pool to disk.
c. checkpoint activity
When the InnoDB log file wraps around, it must make sure the corresponding dirty pages have already been flushed to disk before overwriting log file content.
Refer to http://www.mysqlperformanceblog.com/2011/01/13/different-flavors-of-innodb-flushing/
Monitoring flush pages:
[root@db04 ~]# mysql -uroot -ppassword -e 'show global status' | grep -i Innodb_buffer_pool_pages
Innodb_buffer_pool_pages_data 919597
Innodb_buffer_pool_pages_dirty 6686
Innodb_buffer_pool_pages_flushed 115285384
Innodb_buffer_pool_pages_LRU_flushed 0
Innodb_buffer_pool_pages_free 4321306
Innodb_buffer_pool_pages_made_not_young 0
Innodb_buffer_pool_pages_made_young 11000
Innodb_buffer_pool_pages_misc 1976
Innodb_buffer_pool_pages_old 339440
Innodb_buffer_pool_pages_total 5242879
How InnoDB flushes dirty pages to disk:
------------------------------------
InnoDB uses a background thread to merge writes together into sequential writes, which improves performance.
This is called lazy flushing. The other kind is 'furious flushing', where InnoDB must make sure dirty pages have been written to disk before
overwriting the transaction log files. A larger InnoDB log file size therefore improves performance, because InnoDB doesn't have to flush dirty pages as often.
2. For innodb_log_file_size
Before writing to the InnoDB log file, writes are buffered in innodb_log_buffer_size; the sensible range is 1M-8M, and it doesn't have to be very big unless you write a lot of huge blob records.
The log entries are not page-based. InnoDB writes the buffer content to the log file when a transaction commits.
How to size it properly, a few ways below:
-----------------------
http://www.mysqlperformanceblog.com/2008/11/21/how-to-calculate-a-good-innodb-log-file-size/
1) use the method mentioned in the blog above
--------------
mysql> pager grep sequence
PAGER set to 'grep sequence'
mysql> show engine innodb status\G select sleep(60)\G show engine innodb status\G
Log sequence number 9591605145344
1 row in set (0.00 sec)
1 row in set (59.99 sec)
Log sequence number 9591617510628
1 row in set (0.00 sec)
mysql> select (9591617510628-9591605145344)/1024/1024 as MB_per_min;
+-------------+
| MB_per_min |
+-------------+
| 11.79245377 |
+-------------+
1 row in set (0.00 sec)
mysql> select 11.79245377*60/2 as MB_per_hour; -- divided by two for each of the two files in the log group
+------------------+
| MB_per_hour |
+------------------+
| 353.773613100000 |
+------------------+
1 row in set (0.00 sec)
------------------
2) use Innodb_os_log_written - the number of bytes written to the log file
Monitor this counter for 10 seconds during peak hours to get a rate of X KB/s,
then use X * 1800s (half an hour's worth of writes for each of the two log files) as the log file size; see the sketch after this list.
3) monitor the modification times of the log files themselves; size the files so that each one goes at least half an hour between rewrites.
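A minimal shell sketch of method 2, using the same mysql client and credentials as elsewhere in this post (the password is illustrative); it samples Innodb_os_log_written for 10 seconds and prints a suggested per-file log size in MB:

v1=$(mysql -uroot -ppassword -NBe "show global status like 'Innodb_os_log_written'" | awk '{print $2}')
sleep 10
v2=$(mysql -uroot -ppassword -NBe "show global status like 'Innodb_os_log_written'" | awk '{print $2}')
echo "($v2-$v1)/10*1800/1024/1024" | bc    # bytes/sec * 1800s, converted to MB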
Part II - Other useful parameters in /etc/my.cnf for innodb
1. innodb_max_dirty_pages_pct
This is an integer in the range from 0 to 99. The default value is 75. The main thread in InnoDB tries to write pages from the buffer pool so that the
percentage of dirty (not yet written) pages will not exceed this value.
2. innodb_flush_method=O_DIRECT - eliminates double buffering
3. default-storage-engine=innodb
4. innodb_file_per_table
5. innodb_flush_log_at_trx_commit=2
Value  Meaning
0      Write to the log and flush to disk once per second
1      Write to the log and flush to disk at each commit
2      Write to the log at each commit, but flush to disk only once per second
Note that if you do not set the value to 1, InnoDB does not guarantee ACID properties; up to about a second's worth of the most recent transactions may be lost if a crash occurs.
6. max_connections=512
7. max_connect_errors
# the default is only 10; without raising it you may get errors like: Host X is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'
max_connect_errors=5000
# end
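Pulling Parts I and II together, a minimal /etc/my.cnf [mysqld] sketch; the sizes are illustrative and should come from the sizing exercises above:

[mysqld]
default-storage-engine = innodb
innodb_buffer_pool_size = 20000M       # ~80% of RAM on a dedicated server
innodb_log_file_size = 512M            # sized for about half an hour of redo per file
innodb_log_buffer_size = 8M
innodb_max_dirty_pages_pct = 75
innodb_flush_method = O_DIRECT         # eliminate double buffering
innodb_file_per_table
innodb_flush_log_at_trx_commit = 2     # may lose ~1s of transactions on a crash
max_connections = 512
max_connect_errors = 5000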
Part III - monitoring mysql server performance
1. Aborted_clients / Aborted_connects
2. Bytes_sent / Bytes_received - the number of bytes sent to / received from all clients
3. Connections - the number of connection attempts (successful or not) to the mysql server
4. Created_tmp_disk_tables / Created_tmp_tables - the number of temporary tables created on disk / the total number of temporary tables created
5. innodb_log_waits - The number of times that the log buffer was too small and a wait was required for it to be flushed before continuing.
mysql -uroot -ppassword -e 'show global status' | grep -i Innodb_log_waits
6. Innodb_os_log_written
The number of bytes written to the log file.
7. Innodb_rows_deleted
The number of rows deleted from InnoDB tables.
Innodb_rows_inserted
The number of rows inserted into InnoDB tables.
Innodb_rows_read
The number of rows read from InnoDB tables.
Innodb_rows_updated
The number of rows updated in InnoDB tables.
8. Max_used_connections - The maximum number of connections that have been in use simultaneously since the server started.
[root@db03 ~]# mysql -uroot -ppassword -e 'show global status' | grep -i Max_used_connections
Max_used_connections 353
9. Open_tables
The number of tables that are open.
10. Select_full_join
The number of joins that perform table scans because they do not use indexes. If this value is not 0, you should carefully check the indexes of your
tables.
11. Select_range_check - The number of joins without keys that check for key usage after each row. If this is not 0, you should carefully check the
indexes of your tables.
[root@db03 ~]# mysql -uroot -ppassword -e 'show global status' | grep -i Select_range_check
Select_range_check 1453
12. Slow_queries
[root@db03 ~]# mysql -uroot -ppassword -e 'show global status' | grep -i Slow_queries
Slow_queries 38180440
13. Table_locks_waited
The number of times that a request for a table lock could not be granted immediately and a wait was needed. If this is high and you have performance
problems, you should first optimize your queries, and then either split your table or tables or use replication.
[root@db03 ~]# mysql -uroot -ppassword -e 'show global status' | grep -i Table_locks_waited
Table_locks_waited 304
14. Threads_connected / Threads_running / Max_used_connections
Max_used_connections
The maximum number of connections that have been in use simultaneously since the server started.
Threads_running
The number of threads that are not sleeping.
Threads_connected
The number of currently open connections.
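A one-liner sketch to snapshot all the counters discussed in this part in a single pass (the password is illustrative):

mysql -uroot -ppassword -e 'show global status' | egrep -i 'Aborted_clients|Aborted_connects|Bytes_|^Connections|Created_tmp|Innodb_log_waits|Innodb_os_log_written|Innodb_rows|Max_used_connections|Open_tables|Select_full_join|Select_range_check|Slow_queries|Table_locks_waited|Threads_'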