How to use Zabbix to monitor MS SQL database performance

Jephe Wu -

Environment: Zabbix 1.8.x monitoring system, MS SQL Server 2005
Objective: to monitor MS SQL Server performance with Zabbix.

Concept: use the sys.dm_os_performance_counters DMV, which resides in the master and msdb databases.

sys.dm_os_performance_counters exposes three kinds of counter values, as follows:

1. value/base pairs (e.g. buffer cache hit ratio):

Take the counter with cntr_type = 537003264 and divide it by its
matching 'base' counter with cntr_type = 1073939712.

For example:
('Buffer cache hit ratio' / 'Buffer cache hit ratio base') * 100
gives you the buffer cache hit ratio as a percentage.
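To make the arithmetic concrete, here is a small Python sketch of the value/base calculation (the two counter readings below are hypothetical sample values, not output from a real server):

```python
# Compute a value/base counter such as "Buffer cache hit ratio".
# value: the counter with cntr_type = 537003264
# base:  its matching 'base' counter with cntr_type = 1073939712

def ratio_counter(value: int, base: int) -> float:
    """Return (value / base) * 100, guarding against an empty base."""
    if base == 0:
        return 0.0  # no activity recorded yet; avoid division by zero
    return value / base * 100

# hypothetical readings from sys.dm_os_performance_counters:
# "Buffer cache hit ratio" = 8190, "Buffer cache hit ratio base" = 8213
print(ratio_counter(8190, 8213))  # hit ratio as a percentage, roughly 99.7
```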

2. point-in-time values:

These counters hold the value at the current point in time and have
cntr_type = 65792. For example:

SELECT cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'User Connections'
              AND object_name LIKE '%General Statistics%'

3. cumulative values:

These counters only ever accumulate, so the value must be compared at
two different times by calculating the difference between the samples.
For instance, get a counter value, save it, get it again 5 seconds
later, and the final counter value will be:
(@SecondCounterValue - @FirstCounterValue) / 5 seconds

These counters have cntr_type = 272696576.
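The same delta calculation can be sketched in Python (the counter samples below are made-up numbers, not real readings):

```python
# Derive a per-second rate from two samples of a cumulative counter
# (cntr_type = 272696576), e.g. "Batch Requests/sec". The raw cntr_value
# only increases, so the rate is the delta divided by the sample interval.

def counter_rate(first_value: int, second_value: int, interval_seconds: float) -> float:
    """(@SecondCounterValue - @FirstCounterValue) / elapsed seconds."""
    return (second_value - first_value) / interval_seconds

# hypothetical readings taken 5 seconds apart
print(counter_rate(120_000, 120_750, 5))  # 150.0 batch requests per second
```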

1. use the Zabbix agent to monitor performance counters
You can run 'perfmon' from the Run command line, then add a performance counter to list all the available SQL Server counters. Some useful Zabbix item keys:

SQL: Number Users Connected         perf_counter["\SQLServer:General Statistics\User Connections"]   
SQL: Lock Waits per second         perf_counter["\SQLServer:Locks(_Total)\Lock Waits/sec"]   
SQL: Number of Deadlocks per second    perf_counter["\SQLServer:Locks(_Total)\Number of Deadlocks/sec"]
SQL: Total Server Memory         perf_counter["\SQLServer:Memory Manager\Total Server Memory (KB)"]

For SQL Server service monitoring:
use the Zabbix agent with the key 'service_state[MSSQLSERVER]'. Set 'Show value' to 'Windows service state', 'Type of information' to 'Numeric (unsigned)' and 'Data type' to Decimal.
The same applies to service_state[SQLServerAgent] and service_state[SQLBrowser];
the general form is service_state[<service_name>].

2. use Dbforbix to monitor performance counters

You need to create a SQL login 'zabbix' in SQL Server Management Studio (Security > Logins) and grant it the sysadmin role.
Also make sure the zabbix user is able to log in through SQL authentication: right-click the server name in Object Explorer, choose Properties > Security, and select 'SQL Server and Windows Authentication mode' instead of Windows Authentication mode only. Otherwise the zabbix user will not be able to log in (check the SQL error log if login fails).

You can use the following settings in dbforbix ms sql query configuration:

restoredelay.Query=select restore_delay from msdb.dbo.log_shipping_secondary_databases where secondary_database='WSS_CONTENT';

pagelifeexpectancy.Query=select cntr_value from sys.dm_os_performance_counters where counter_name='Page life expectancy' and instance_name='';
lockwait.Query=select cntr_value from sys.dm_os_performance_counters where counter_name='Lock Waits/sec' and instance_name='_Total';
processblocked.Query=select cntr_value from sys.dm_os_performance_counters where counter_name='Processes blocked';
checkpointpage.Query=select cntr_value from sys.dm_os_performance_counters where counter_name='Checkpoint pages/sec';
pagesplit.Query=select cntr_value from sys.dm_os_performance_counters where counter_name='Page Splits/sec';
userconnections.Query=select cntr_value from sys.dm_os_performance_counters where counter_name='User Connections';
batchrequest.Query=select cntr_value from sys.dm_os_performance_counters where counter_name='Batch Requests/sec';
sqlcompilation.Query=select cntr_value from sys.dm_os_performance_counters where counter_name='SQL Compilations/sec';
sqlrecompilation.Query=select cntr_value from sys.dm_os_performance_counters where counter_name='SQL Re-Compilations/sec';

note: the sys.dm_os_performance_counters system view can be found under the master and msdb databases.




3. Reference: How to use performance counters in the sys.dm_os_performance_counters DMV

How to use standby sharepoint farm to access logshipped secondary database

Jephe Wu -

Environment: a SharePoint 2010 farm (2 nodes plus a SQL Server 2008 database) at the primary site, and a log-shipped database with a SharePoint farm (1 node) at the DR site.
Objective: if the primary site database is down, use the DR site SharePoint web farm (or the primary site SharePoint farm) to connect to the log-shipped content database.


1. configure log shipping to prepare for DR purposes

2. use standby mode rather than norecovery mode when configuring log shipping, and choose the option to disconnect users when restoring

3. only the WSS content database needs to be log-shipped, according to the SharePoint 2010 Disaster Recovery Guide

4. when DR happens, stop the SQL restore job and back up the log-shipped database from the primary database

5. rename the secondary database for the WSS content database so the standby SharePoint farm will use the log-shipped database as its content database; please refer to this article:

  1. Change the directory to the 12-hive bin where you can run the STSADM commands from:
    C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\
  2. Delete the original content database with the following STSADM command using the database with the GUID you looked up previously (be sure to change UrlOfYourCentralAdministration and NamedInstanceOfYourSqlServer to your names):
    stsadm -o deletecontentdb -url http://UrlOfYourCentralAdministration -databasename SharePoint_AdminContent_<GUID> -databaseserver NamedInstanceOfYourSqlServer
  3. Re-associate the backed-up database with your Central Administration with the following STSADM command (be sure to change UrlOfYourCentralAdministration and NamedInstanceOfYourSqlServer to your names):
    stsadm -o addcontentdb -url http://UrlOfYourCentralAdministration -databasename SharePoint_AdminContent -databaseserver NamedInstanceOfYourSqlServer
 6. References:
 SharePoint 2010 Disaster Recovery Guide (PDF book; it recommends log shipping only for the WSS content database)

Troubleshooting Apache/PHP 500 error

Jephe Wu -

Problem: the Apache access log shows a 500 error code whenever a PHP page that should prompt the download of a docx file is accessed.
Environment: CentOS 5.6 64-bit, Apache httpd 2.2.3, PHP 5.2.10 from the CentOS testing repository

Troubleshooting steps:

1. When the problem happens, the browser doesn't show anything; only the Apache access log shows a 500 error code, which indicates some sort of server-side issue.

2. Since it's a PHP page, we need to enable error reporting in /etc/php.ini (CentOS) or /etc/php5/apache2/php.ini (Ubuntu). The following settings enable error reporting:

error_reporting  =  E_ALL & ~E_NOTICE
display_errors = On
display_startup_errors = Off
log_errors = On

This way, we can see the error on the screen when we access that PHP page. Here is the error shown after enabling the settings above:

Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to allocate 71 bytes) in /var/www/xxxx/xxxxxx.php on line 297.

3. check memory limit in /etc/php.ini

root@reports01:/etc/ # cat php.ini  | grep memory
memory_limit = 256M; Maximum amount of memory a script may consume (256)

Actually, 67108864 bytes is exactly 64M, not the 256M indicated in php.ini.
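A quick Python check confirms the arithmetic:

```python
# The PHP fatal error reports "Allowed memory size of 67108864 bytes exhausted".
# 67108864 bytes is exactly 64 MiB, so some other setting must be overriding
# the 256M memory_limit configured in /etc/php.ini.
reported_bytes = 67108864
print(reported_bytes == 64 * 1024 * 1024)   # True: the limit in effect is 64M
print(256 * 1024 * 1024)                    # 268435456, what 256M would be
```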

4. find out where 64M is defined in the PHP code

root@reports01:/var/www/docgen# grep 64M * -R
lib/header.php:ini_set('memory_limit', '64M');

5. remove that line (or raise the limit) in the PHP code; problem solved.

References:
1. How to use PHP 5.2 on CentOS 5.6
2. PHP 5.2 on Ubuntu 10.04 Lucid Lynx

Migrate Oracle database from 11g to 10g on another server

Jephe Wu -

Environment: Windows 2003 Server SE SP2, Oracle 11gR2 (source, dev DB) and Oracle 10g (destination, prod DB)
Objective: migrate/merge the database on 11gR2 into 10g.


1. find out all schemas on the source DB.
2. create tablespaces with the suffix _dev on the destination DB.
3. rename schemas with the suffix _dev during import with expdp/impdp (remap_schema).
4. check the compatibility level on the source and make sure it's less than or equal to the destination DB version.

1. export on the source DB to get the tablespace and user creation scripts, and the password hashes of the schemas
[source] exp system file=path_to_dump.exp log=path_to_log.exp full=y

Only let it run for a while, just long enough to capture the user/tablespace creation definitions.

2. create the tablespace and user on the destination DB
create tablespace training_dev datafile 'path_to' size 100m autoextend on next 10m [flashback off];
create user training_dev identified by values 'xxxxx' (hash obtained from step 1) default tablespace training_dev temporary tablespace temp;

3. check compatibility level on source db

SELECT name, value FROM v$parameter WHERE name = 'compatible';

4. dump the database on the source

expdp system dumpfile=path_to_dump.expdp logfile=path_to_log.logdp exclude=statistics schemas=training

Since the destination is 10g, you will likely need to add version=10.2 to the expdp command so that the resulting dump file can be read by the 10g Data Pump import.

note: if you don't specify a directory, it will use the default directory 'DATA_PUMP_DIR'. You can find its path with:
sqlplus / as sysdba
select * from dba_directories;

5. import database
impdp system dumpfile=xxxxx logfile=xxxxx remap_schema=training:training_dev remap_tablespace=training:training_dev transform=oid:n

note: you can test whether the import between 10g and 11g works without creating the training_dev tablespace and user first.