How to use new cn=config in Openldap under CentOS 5/6


Jephe Wu - http://linuxtechres.blogspot.com

Objective: understanding the new OpenLDAP cn=config feature and configuring OpenLDAP as a centralized user login directory and address book
Environment: CentOS 5
Concept:

Historically, OpenLDAP used a static configuration, which means that any configuration change required stopping and restarting slapd, i.e. downtime. Now we have the option to use OLC (on-line configuration), also known as the cn=config or slapd.d configuration.

The feature is (as of version 2.4) still optional, which means that slapd.conf, while formally deprecated, continues to work.


Steps:
1. make sure /etc/openldap/slapd.conf contains inetorgperson.schema


include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema

Add the following into slapd.conf:

database config
rootdn "cn=admin,cn=config"
rootpw {SSHA}QoTQ3JyrnNgDiMPEYncHY43tDxaezh5w

Note: add it before the first database definition if you want to use the features brought by the cn=config configuration.

2. use slappasswd to generate the password hash that will be put in /etc/openldap/slapd.conf
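As a side note, the {SSHA} value that slappasswd prints is just base64(SHA1(password + salt) + salt), so it can be reproduced with openssl. A minimal sketch, using a fixed 4-byte salt for illustration (slappasswd picks a random one):

```shell
#!/bin/sh
# Reproduce slappasswd's {SSHA} scheme: base64(SHA1(password + salt) + salt).
pw='secret'
salt='abcd'   # illustrative; slappasswd uses 4 random bytes
hash=$( { printf '%s%s' "$pw" "$salt" | openssl dgst -sha1 -binary; printf '%s' "$salt"; } | base64 )
echo "rootpw {SSHA}$hash"
```

The decoded value is always 24 bytes: a 20-byte SHA-1 digest followed by the 4-byte salt.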

3. to avoid warnings about performance
cp /etc/openldap/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
or
maybe cp /usr/share/doc/openldap-servers-2.*/DB_CONFIG.example /var/lib/ldap/DB_CONFIG
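If neither example file is available, a minimal DB_CONFIG along these lines also silences the warning (the values here are illustrative; tune the cache size to your data set):

```
set_cachesize 0 268435456 1
set_lg_regionmax 262144
set_lg_bsize 2097152
set_flags DB_LOG_AUTOREMOVE
```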


4. convert format to cn=config

# test it first before converting
[root@wordpress schema]# /usr/sbin/slaptest -f /etc/openldap/slapd.conf -v
bdb_db_open: Warning - No DB_CONFIG file found in directory /var/lib/ldap: (2)
Expect poor performance for suffix dc=jephe,dc=com.
config file testing succeeded


# convert slapd.conf to cn=config format
cd /etc/openldap
mkdir slapd.d
/usr/sbin/slaptest -f /etc/openldap/slapd.conf -v -F slapd.d
chown -R ldap:ldap *
mv slapd.conf slapd.conf.bak  [not necessary, just to ensure we are using slapd.d rather than slapd.conf]
/etc/init.d/ldap restart


5. view all content of ldap server

slapcat


6. prepare to import data into ldap 

vi /etc/openldap/slapd.conf to modify the suffix and rootdn lines

If you are using slapd.d (cn=config), you should instead modify the file /etc/openldap/slapd.d/cn=config/olcDatabase={1}bdb.ldif, changing the olcSuffix, olcRootDN and olcRootPW (not present by default) lines.
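While slapd is stopped, that edit can also be scripted with sed. A sketch operating on a local stand-in for the file (the real path is shown above; dc=my-domain,dc=com is the stock default suffix):

```shell
#!/bin/sh
# Rewrite the suffix/rootdn in a copy of olcDatabase={1}bdb.ldif with sed.
# Only do this against the real file while slapd is stopped.
f='olcDatabase={1}bdb.ldif'
cat > "$f" <<'EOF'
olcSuffix: dc=my-domain,dc=com
olcRootDN: cn=Manager,dc=my-domain,dc=com
EOF
sed -i 's/dc=my-domain,dc=com/dc=jephe,dc=com/g' "$f"
grep '^olc' "$f"
```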

Then use the command below to import:
ldapadd -x -D 'cn=Manager,dc=jephe,dc=com' -W -f root.ldif

[root@wordpress openldap]# more root.ldif
dn: dc=jephe,dc=com
dc: jephe
objectClass: dcObject
objectClass: organization
organizationName: Openlogic

# for normal shadow account for only login authentication ldap

[root@wordpress openldap]# more /tmp/jwu.ldif
dn: uid=jwu,dc=jephe,dc=com
uid: jwu
cn: Jephe Wu
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: {crypt}$1$TEDFGNB3$VDJn0DD1e5OjG04.Uz7NH0
shadowLastChange: 14335
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 10000
gidNumber: 10000
homeDirectory: /home/jwu
gecos: usuario1

Note: GECOS
This field is optional and used only for informational purposes. Usually, it contains the full username. GECOS stands for "General Electric Comprehensive Operating System", please refer to http://linux.die.net/man/5/passwd.
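The {crypt} value and shadowLastChange in the entry above can be reproduced from the shell. A sketch (the password 'secret' is hypothetical; the salt matches the example hash):

```shell
#!/bin/sh
# {crypt}$1$... is an MD5-crypt hash; openssl passwd -1 generates one.
hash=$(openssl passwd -1 -salt TEDFGNB3 secret)
echo "userPassword: {crypt}$hash"
# shadowLastChange is the number of days since the Unix epoch:
days=$(( $(date +%s) / 86400 ))
echo "shadowLastChange: $days"
```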

# for normal shadow account as well as address book purpose

[root@wordpress openldap]# more zwu.ldif
dn: uid=zwu,dc=jephe,dc=com
uid: zwu
cn: Zhitan Wu
sn: Wu
#objectClass: account   # commented out, otherwise it conflicts with inetorgperson
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
objectClass: inetorgperson
userPassword: {crypt}$1$TEDFGNB3$VDJn0DD1e5OjG04.Uz7NH0
shadowLastChange: 14335
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/zwu
gecos: usuario1
mail: zwu@jephe.com

# for userPassword authentication only [for Cognos LDAP login]

[root@ldap1 scripts]# more batchuser.ldif
# entry-id: 1
dn: dc=dev,dc=com
dc: dev
objectClass: top
objectClass: domain

# entry-id: 2
dn: ou=Special Users,dc=dev,dc=com
objectClass: top
objectClass: organizationalUnit
ou: Special Users
description: Special Administrative Accounts

# entry-id: 3
dn: ou=People,dc=dev,dc=com
objectClass: top
objectClass: organizationalunit
ou: People

# entry-id: 4
dn: ou=Groups,dc=dev,dc=com
objectClass: top
objectClass: organizationalunit
ou: Groups

dn: cn=admin,ou=Groups,dc=dev,dc=com
cn: admin
objectClass: top
objectClass: groupofuniquenames
ou: Groups
uniqueMember: uid=jephe,ou=People,dc=dev,dc=com
uniqueMember: uid=zhitan,ou=People,dc=dev,dc=com

# entry-id: 5
dn: uid=zhitan,ou=People,dc=dev,dc=com
uid: zhitan
objectClass: inetorgperson
givenName: Zhitan
sn: Wu
cn: Zhitan Wu
userPassword: {SSHA}h7HBuirlNhYJl1TwVEtKqJlJVCb53cqm


7. References:

http://www.howtoforge.com/install-and-configure-openldap-on-centos-5
http://www.zytrax.com/books/ldap/ch6/slapd-config.html
http://olex.openlogic.com/wazi/2011/using-openldap-for-remote-authentication/

phpldapadmin
http://www.padl.com/OSS/MigrationTools.html

http://www.zytrax.com/books/ldap/ch5/ - OpenLDAP Samples

Linux Best Practice


Jephe Wu - http://linuxtechres.blogspot.com

Objective:  Best industry software for each category, best usage and websites etc



  • websites
http://www.zytrax.com/tech/
http://www.herongyang.com/
http://www.datadisk.co.uk/
  • web server reporting tools

AWStats - http://www.awstats.org/docs/awstats_compare.html
analog

  • Proxy server - squid



  • IDS

HIDS - ossec (for both Linux and Windows)


  • audited shell

sudosh2


  • Server monitoring

Zabbix


  • centralized logging

rsyslog


  • VPN

openvpn


  • NAS

openfiler
freeNAS


  • OpenLDAP
http://www.zytrax.com/books/ldap/

How to deal with SQL server disk space


Objective: understanding different scenarios for disk space issues

scenario 1: How to reduce transaction log file size
Problem: the transaction log file is huge (more than 100GB), due to a weekly maintenance job running on the production database
Environment:  MS SQL server 2005, recovery mode is 'full', transaction log backup is not enabled, logshipping is not enabled.
Solution: change the database recovery mode from full to simple, then shrink the transaction log to free up unused space. Note that you cannot free up space if the unused space is 0%.
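A hedged T-SQL sketch of that solution, written out to a script file (the database name MyDb and logical log file name MyDb_log are hypothetical; check yours with sys.master_files, then run the script with sqlcmd):

```shell
#!/bin/sh
# Write the scenario-1 fix as a T-SQL script; 1024 is the target size in MB.
cat > shrink_log.sql <<'EOF'
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
GO
-- shrink the log file down to ~1 GB; this releases nothing if unused space is 0%
DBCC SHRINKFILE (MyDb_log, 1024);
GO
EOF
cat shrink_log.sql
```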

scenario 2: How to move database files to free up space
Problem: the D drive does not have enough space; you need to move one database to the F drive on the same server instance.
Environment: MS SQL Server 2005, recovery mode is 'simple', no transaction log backup, no log shipping.
Solution: 

For moving database files within the same instance:
a. ALTER DATABASE ... MODIFY FILE (applies to moving database files within the same instance)
For example:
USE master;
GO
-- Return the logical file name.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'AdventureWorks2012')
    AND type_desc = N'LOG';
GO
ALTER DATABASE AdventureWorks2012 SET OFFLINE;
GO
-- Physically move the file to a new location.
-- In the following statement, modify the path specified in FILENAME to
-- the new location of the file on your server.
ALTER DATABASE AdventureWorks2012 
    MODIFY FILE ( NAME = AdventureWorks2012_Log, 
                  FILENAME = 'C:\NewLoc\AdventureWorks2012_Log.ldf');
GO
ALTER DATABASE AdventureWorks2012 SET ONLINE;
GO
--Verify the new location.
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'AdventureWorks2012')
    AND type_desc = N'LOG';
For moving database files within the same instance or to a different instance/server:
a. detach and reattach the database; this works within the same server instance and also across different servers or instances.
b. back up and restore the database with the MOVE option to relocate files.
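A hedged sketch of option (a), again as a script file (database name and file paths are hypothetical; the file copy between drives happens at the OS level while the database is detached):

```shell
#!/bin/sh
# Write the detach/move/reattach steps as a T-SQL script.
cat > move_db.sql <<'EOF'
EXEC sp_detach_db @dbname = N'MyDb';
-- copy MyDb.mdf and MyDb_log.ldf from D:\ to F:\Data\ outside SQL Server, then:
CREATE DATABASE MyDb
    ON (FILENAME = 'F:\Data\MyDb.mdf'),
       (FILENAME = 'F:\Data\MyDb_log.ldf')
    FOR ATTACH;
EOF
cat move_db.sql
```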


Setup snmp and snmptrap monitoring under Zabbix


Jephe Wu - http://linuxtechres.blogspot.com

Objective: understanding snmp, snmptrap under zabbix

Note: for Zabbix, the --with-net-snmp compile option is not needed if you only want to receive traps; it is required if you also want to poll SNMP.


part I - snmp monitoring - snmpget, snmpwalk

1. snmp items with zabbix
refer to http://www.zabbix.com/wiki/howto/monitor/snmp/snmp?s[]=snmp&s[]=item


2. get snmp item name
# snmpwalk snmpserver -v1 -c public
# snmpwalk -c public -v 2c ipaddress


root@snmp:~/ # snmpget -c public -v 1 172.16.62.3 .1.3.6.1.4.1.9.9.171.1.2.1.1.0
SNMPv2-SMI::enterprises.9.9.171.1.2.1.1.0 = Gauge32: 9

Note: 1.3.6.1.4.1.9.9.171.1.2.1.1 is a cisco ipsec mib at http://www.oidview.com/mibs/9/CISCO-IPSEC-FLOW-MONITOR-MIB.html, refer to cikeGlobalActiveTunnels


Refer to free snmp MIB library at http://www.oidview.com/mibs/detail.html
and http://www.iana.org/assignments/enterprise-numbers

Part II - snmptrap


1. snmptrapd.conf 
root@snmptrap:/etc/snmp/ # chkconfig --list snmptrapd
snmptrapd       0:off 1:off 2:on 3:on 4:on 5:on 6:off


root@snmptrap:/etc/snmp/ # /etc/init.d/snmptrapd status
snmptrapd (pid  27227) is running...


root@snmptrap:/etc/snmp/ # more snmptrapd.conf 
authCommunity execute public
authCommunity execute PUBLIC
disableAuthorization yes
#log syslog
traphandle default /usr/bin/zabbix_snmptraphandler.pl
traphandle default /usr/bin/perl /usr/bin/traptoemail -s jephe.mailserver.com -f snmptrap@snmptrap.domain jwu@domain.com


Note: if community string is not public, please change the first/second line.
refer to http://www.zabbix.com/wiki/howto/monitor/snmp/snmp_traps_-_a_new_solution for traphandle perl script
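For reference, snmptrapd feeds a traphandle script the sending hostname on line 1, its transport address on line 2, and then one "OID value" pair per line on stdin. A minimal sketch (filename and output format are our own, not the Zabbix handler's):

```shell
#!/bin/sh
# A minimal traphandle script: read host, address, then varbind lines from stdin.
cat > trap_handler.sh <<'EOF'
#!/bin/sh
read host
read ip
while read oid value; do
    echo "trap from $host ($ip): $oid = $value"
done
EOF
chmod +x trap_handler.sh
# feed it a fake trap to see what it does:
printf 'myhost\nUDP: [192.168.1.5]:162\nSNMPv2-MIB::sysUpTime.0 123\n' | ./trap_handler.sh
```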




2. test it from the snmptrap server itself; you should receive an email.
# snmptrap -Ci -v 2c -c public localhost "" "NET-SNMP-MIB::netSnmpExperimental" NET-SNMP-MIB:netSnmpExperimental s "test"


root@snmptrap:/etc/snmp/ # which snmptrap
/usr/bin/snmptrap
root@snmptrap:/etc/snmp/ # rpm -qf /usr/bin/snmptrap
net-snmp-utils-5.3.2.2-14.el5_7.1


Note: you might want to look at SNMPTT at http://snmptt.sourceforge.net/ 




3. sample zabbix templates for snmp devices


http://www.zabbix.com/wiki/templates/cisco_2960



Part III - References
http://www.zabbix.com/wiki/howto/monitor/snmp/a_simple_snmp_trap_handler
http://www.oidview.com/mibs/detail.html
http://www.iana.org/assignments/enterprise-numbers

http://ireasoning.com/mibbrowser.shtml  (better to use for snmp walk)
http://www.manageengine.com/products/mibbrowser-free-tool/download.html  (absolutely free, for snmpwalk, put .iso.org.dod.internet.mgmt in the Object ID column, then click 'snmpwalk' button)

http://www.youtube.com/watch?v=TiBrB0qxQao
http://www.youtube.com/watch?v=J0fPuXZySXE&feature=related
http://www.youtube.com/watch?v=biH-UAPxo_0&feature=related


How to make partition under Zabbix


Jephe Wu - http://linuxtechres.blogspot.com

Environment: CentOS 6.2 64bit, newly configured Zabbix 1.8.6 server
Objective: make partition for those tables

Concepts:
In order to increase performance, you can create partitions for some tables; refer to http://zabbixzone.com/zabbix/partitioning-tables/ and http://pastebin.com/nY8WG9Fr


Steps:

1. modified Zabbix partition procedure: keep 2 days of daily partitions and 2 months of monthly partitions, and only keep 12 months for the trends and trends_uint tables.

# more createpartitionproc.sql
/**************************************************************
MySQL Auto Partitioning Procedure for Zabbix 1.8
  http://zabbixzone.com/zabbix/partitioning-tables/

  Author:  Ricardo Santos (rsantos at gmail.com)
  Version: 20110518
  Modified: Phil - 20110526
 should be run daily, can survive if run weekly for daily partitions

mysql -B -h localhost -u zabbix -pPASSWORD zabbix -e "CALL create_daily_zabbix_partitions();"

 should be run monthly, can survive if run yearly for monthly partitions

mysql -B -h localhost -u zabbix -pPASSWORD zabbix -e "CALL create_monthly_zabbix_partitions();"

**************************************************************/
DELIMITER //
DROP PROCEDURE IF EXISTS `zabbix`.`create_daily_zabbix_partitions` //
CREATE PROCEDURE `zabbix`.`create_daily_zabbix_partitions` ()
BEGIN
CALL zabbix.create_next_daily_partitions("zabbix","history");
CALL zabbix.create_next_daily_partitions("zabbix","history_log");
CALL zabbix.create_next_daily_partitions("zabbix","history_str");
CALL zabbix.create_next_daily_partitions("zabbix","history_text");
CALL zabbix.create_next_daily_partitions("zabbix","history_uint");
CALL zabbix.drop_old_daily_partitions("zabbix","history");
CALL zabbix.drop_old_daily_partitions("zabbix","history_log");
CALL zabbix.drop_old_daily_partitions("zabbix","history_str");
CALL zabbix.drop_old_daily_partitions("zabbix","history_text");
CALL zabbix.drop_old_daily_partitions("zabbix","history_uint");
END //
DROP PROCEDURE IF EXISTS `zabbix`.`create_monthly_zabbix_partitions` //
CREATE PROCEDURE `zabbix`.`create_monthly_zabbix_partitions` ()
BEGIN
CALL zabbix.create_next_monthly_partitions("zabbix","acknowledges");
CALL zabbix.create_next_monthly_partitions("zabbix","alerts");
CALL zabbix.create_next_monthly_partitions("zabbix","auditlog");
CALL zabbix.create_next_monthly_partitions("zabbix","events");
CALL zabbix.create_next_monthly_partitions("zabbix","service_alarms");
CALL zabbix.create_next_monthly_partitions("zabbix","trends");
CALL zabbix.create_next_monthly_partitions("zabbix","trends_uint");
CALL zabbix.drop_old_monthly_partitions("zabbix","acknowledges");
CALL zabbix.drop_old_monthly_partitions("zabbix","alerts");
CALL zabbix.drop_old_monthly_partitions("zabbix","auditlog");
CALL zabbix.drop_old_monthly_partitions("zabbix","events");
CALL zabbix.drop_old_monthly_partitions("zabbix","service_alarms");
CALL zabbix.drop_old_monthly_partitions2("zabbix","trends");
CALL zabbix.drop_old_monthly_partitions2("zabbix","trends_uint");
END //
/* create a week in advance, in case something happens to cron */
DROP PROCEDURE IF EXISTS `zabbix`.`create_next_daily_partitions` //
CREATE PROCEDURE `zabbix`.`create_next_daily_partitions` (SCHEMANAME varchar(64), TABLENAME varchar(64))
BEGIN
DECLARE NEXTCLOCK timestamp;
DECLARE PARTITIONNAME varchar(16);
DECLARE CLOCK int;
SET @totaldays = 7;
SET @i = 1;
createloop: LOOP
SET NEXTCLOCK = DATE_ADD(NOW(),INTERVAL @i DAY);
SET PARTITIONNAME = DATE_FORMAT( NEXTCLOCK, 'p%Y%m%d' );
SET CLOCK = UNIX_TIMESTAMP(DATE_FORMAT(DATE_ADD( NEXTCLOCK ,INTERVAL 1 DAY),'%Y-%m-%d 00:00:00'));
CALL zabbix.create_partition( SCHEMANAME, TABLENAME, PARTITIONNAME, CLOCK );
SET @i=@i+1;
IF @i > @totaldays THEN
LEAVE createloop;
END IF;
END LOOP;
END //
/* we want to keep at least 2 days, but we will scan a wider window in case some drops were missed */
DROP PROCEDURE IF EXISTS `zabbix`.`drop_old_daily_partitions` //
CREATE PROCEDURE `zabbix`.`drop_old_daily_partitions` (SCHEMANAME varchar(64), TABLENAME varchar(64))
BEGIN
DECLARE OLDCLOCK timestamp;
DECLARE PARTITIONNAME varchar(16);
DECLARE CLOCK int;
SET @mindays = 2;
SET @maxdays = @mindays+7;
SET @i = @maxdays;
droploop: LOOP
SET OLDCLOCK = DATE_SUB(NOW(),INTERVAL @i DAY);
SET PARTITIONNAME = DATE_FORMAT( OLDCLOCK, 'p%Y%m%d' );
CALL zabbix.drop_partition( SCHEMANAME, TABLENAME, PARTITIONNAME );
SET @i=@i-1;
IF @i <= @mindays THEN
LEAVE droploop;
END IF;
END LOOP;
END //
/* create up to 12 months of partitions in advance, in case something happens to cron */
DROP PROCEDURE IF EXISTS `zabbix`.`create_next_monthly_partitions` //
CREATE PROCEDURE `zabbix`.`create_next_monthly_partitions` (SCHEMANAME varchar(64), TABLENAME varchar(64))
BEGIN
DECLARE NEXTCLOCK timestamp;
DECLARE PARTITIONNAME varchar(16);
DECLARE CLOCK int;
SET @totalmonths = 12;
SET @i = 1;
createloop: LOOP
SET NEXTCLOCK = DATE_ADD(NOW(),INTERVAL @i MONTH);
SET PARTITIONNAME = DATE_FORMAT( NEXTCLOCK, 'p%Y%m' );
SET CLOCK = UNIX_TIMESTAMP(DATE_FORMAT(DATE_ADD( NEXTCLOCK ,INTERVAL 1 MONTH),'%Y-%m-01 00:00:00'));
CALL zabbix.create_partition( SCHEMANAME, TABLENAME, PARTITIONNAME, CLOCK );
SET @i=@i+1;
IF @i > @totalmonths THEN
LEAVE createloop;
END IF;
END LOOP;
END //

/* we want to keep at least 12 months, but we will scan a wider window in case some drops were missed */
DROP PROCEDURE IF EXISTS `zabbix`.`drop_old_monthly_partitions2` //
CREATE PROCEDURE `zabbix`.`drop_old_monthly_partitions2` (SCHEMANAME varchar(64), TABLENAME varchar(64))
BEGIN
DECLARE OLDCLOCK timestamp;
DECLARE PARTITIONNAME varchar(16);
DECLARE CLOCK int;
SET @minmonths = 12;
SET @maxmonths = @minmonths+3;
SET @i = @maxmonths;
droploop: LOOP
SET OLDCLOCK = DATE_SUB(NOW(),INTERVAL @i MONTH);
SET PARTITIONNAME = DATE_FORMAT( OLDCLOCK, 'p%Y%m' );

CALL zabbix.drop_partition( SCHEMANAME, TABLENAME, PARTITIONNAME );
SET @i=@i-1;
IF @i <= @minmonths THEN
LEAVE droploop;
END IF;
END LOOP;
END //
/* we want to keep at least 2 months, but we will scan a wider window in case some drops were missed */
DROP PROCEDURE IF EXISTS `zabbix`.`drop_old_monthly_partitions` //
CREATE PROCEDURE `zabbix`.`drop_old_monthly_partitions` (SCHEMANAME varchar(64), TABLENAME varchar(64))
BEGIN
DECLARE OLDCLOCK timestamp;
DECLARE PARTITIONNAME varchar(16);
DECLARE CLOCK int;
SET @minmonths = 2;
SET @maxmonths = @minmonths+3;
SET @i = @maxmonths;
droploop: LOOP
SET OLDCLOCK = DATE_SUB(NOW(),INTERVAL @i MONTH);
SET PARTITIONNAME = DATE_FORMAT( OLDCLOCK, 'p%Y%m' );

CALL zabbix.drop_partition( SCHEMANAME, TABLENAME, PARTITIONNAME );
SET @i=@i-1;
IF @i <= @minmonths THEN
LEAVE droploop;
END IF;
END LOOP;
END //
DROP PROCEDURE IF EXISTS `zabbix`.`create_partition` //
CREATE PROCEDURE `zabbix`.`create_partition` (SCHEMANAME varchar(64), TABLENAME varchar(64), PARTITIONNAME varchar(64), CLOCK int)
BEGIN
DECLARE RETROWS int;
SELECT COUNT(1) INTO RETROWS
FROM `information_schema`.`partitions`
WHERE `table_schema` = SCHEMANAME AND `table_name` = TABLENAME AND `partition_name` = PARTITIONNAME;

IF RETROWS = 0 THEN
SELECT CONCAT( "create_partition(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
      SET @sql = CONCAT( 'ALTER TABLE `', SCHEMANAME, '`.`', TABLENAME, '`',
' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
PREPARE STMT FROM @sql;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;
END IF;
END //
DROP PROCEDURE IF EXISTS `zabbix`.`drop_partition` //
CREATE PROCEDURE `zabbix`.`drop_partition` (SCHEMANAME varchar(64), TABLENAME varchar(64), PARTITIONNAME varchar(64))
BEGIN
DECLARE RETROWS int;
SELECT COUNT(1) INTO RETROWS
FROM `information_schema`.`partitions`
WHERE `table_schema` = SCHEMANAME AND `table_name` = TABLENAME AND `partition_name` = PARTITIONNAME;

IF RETROWS = 1 THEN
SELECT CONCAT( "drop_partition(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ")" ) AS msg;
      SET @sql = CONCAT( 'ALTER TABLE `', SCHEMANAME, '`.`', TABLENAME, '`',
' DROP PARTITION ', PARTITIONNAME, ';' );
PREPARE STMT FROM @sql;
EXECUTE STMT;
DEALLOCATE PREPARE STMT;
END IF;
END //
DELIMITER ;
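As a sanity check, the names and boundaries the daily procedure generates can be reproduced outside MySQL: for tomorrow it creates partition pYYYYMMDD bounded by the Unix time of the following midnight. A sketch assuming GNU date:

```shell
#!/bin/sh
# Reproduce create_next_daily_partitions' first iteration with date(1).
name="p$(date -d tomorrow +%Y%m%d)"
clock=$(date -d "$(date -d '2 days' +%Y-%m-%d) 00:00:00" +%s)
echo "ADD PARTITION (PARTITION $name VALUES LESS THAN ($clock))"
```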

2. import procedure into mysql database
mysql -uroot -ppassword < createpartitionproc.sql

3. alter tables to create the initial partitions

If you directly run a command in a cron job, like 'mysql -B -h localhost -u zabbix -pzabbix zabbix -e "CALL create_daily_zabbix_partitions();"', before the tables have been partitioned, you will get this error:
create_partition(zabbix,history,p20120517,1337263200)
ERROR 1505 (HY000) at line 1: Partition management on a not partitioned table is not possible

4. put cronjob in /etc/cron.d
[root@db01 cron.d]# more mysqlpartition
30 1 * * * mysql  mysql -B -h localhost -u zabbix -pzabbix zabbix -e "CALL create_daily_zabbix_partitions();"
30 1 1 * * mysql mysql -B -h localhost -u zabbix -pzabbix zabbix -e "CALL create_monthly_zabbix_partitions();"

5. create initial partitions

 alter table history partition by range(clock) (partition p20120516 values less than (unix_timestamp("2012-05-17 00:00:00")));
 alter table history_str partition by range(clock) (partition p20120516 values less than (unix_timestamp("2012-05-17 00:00:00")));
 alter table history_uint partition by range(clock) (partition p20120516 values less than (unix_timestamp("2012-05-17 00:00:00")));

 But when running the same statement for the history_log and history_text tables, we got the error below:
 mysql> alter table history_log partition by range(clock) (partition p20120516 values less than (1337176800));
 ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function

 mysql> alter table history_text partition by range(clock) (partition p20120516 values less than (1337176800));
 ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function

 You need to add the 'clock' column to the primary key and to the unique index, as follows:

 ALTER TABLE  `zabbix`.`history_log` DROP PRIMARY KEY ,
 ADD PRIMARY KEY (  `id` ,  `clock` ,  `itemid` );


 ALTER TABLE  `zabbix`.`history_log` DROP INDEX  `history_log_2` ,
 ADD UNIQUE  `history_log_2` (  `itemid` ,  `id` ,  `clock` );
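Presumably history_text needs the same treatment: in Zabbix 1.8 its schema mirrors history_log (id, itemid, clock, plus a unique index history_text_2), so the analogous statements would be the following sketch. Verify the index name with SHOW CREATE TABLE history_text before running it:

```shell
#!/bin/sh
# Write the analogous history_text fix as a SQL script (index name assumed).
cat > fix_history_text.sql <<'EOF'
ALTER TABLE `zabbix`.`history_text` DROP PRIMARY KEY,
    ADD PRIMARY KEY (`id`, `clock`, `itemid`);
ALTER TABLE `zabbix`.`history_text` DROP INDEX `history_text_2`,
    ADD UNIQUE `history_text_2` (`itemid`, `id`, `clock`);
EOF
cat fix_history_text.sql
```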


  The same thing happens with the monthly tables:

  mysql> alter table acknowledges partition by range (clock) ( partition p201205 values less than (unix_timestamp("2012-06-01 00:00:00")));
  ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function
  mysql> alter table alerts partition by range (clock) ( partition p201205 values less than (unix_timestamp("2012-06-01 00:00:00")));
  ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function
  mysql> alter table auditlog partition by range (clock) ( partition p201205 values less than (unix_timestamp("2012-06-01 00:00:00")));
  ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function
  mysql> alter table events partition by range (clock) ( partition p201205 values less than (unix_timestamp("2012-06-01 00:00:00")));
  ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function
  mysql> alter table service_alarms partition by range (clock) ( partition p201205 values less than (unix_timestamp("2012-06-01 00:00:00")));
  ERROR 1503 (HY000): A PRIMARY KEY must include all columns in the table's partitioning function
  
  mysql> alter table trends partition by range (clock) ( partition p201205 values less than (unix_timestamp("2012-06-01 00:00:00")));
  Query OK, 64 rows affected (0.05 sec)
  Records: 64  Duplicates: 0  Warnings: 0


  mysql> alter table trends_uint partition by range (clock) ( partition p201205 values less than (unix_timestamp("2012-06-01 00:00:00")));
  Query OK, 90 rows affected (0.02 sec)
  Records: 90  Duplicates: 0  Warnings: 0

  6. manually run the cron job once to create the necessary daily and monthly partitions in advance.



mysql -B -h localhost -u zabbix -pzabbix zabbix -e "CALL create_daily_zabbix_partitions();"
mysql -B -h localhost -u zabbix -pzabbix zabbix -e "CALL create_monthly_zabbix_partitions();"


  7. verify tables with partitions
  mysql -uzabbix -pzabbix zabbix
  mysql> show create table history;

  8. References:
  http://zabbixzone.com/zabbix/partitioning-tables/
  http://pastebin.com/nY8WG9Fr
   
Get openldap to authenticate with ldap server without TLS/SSL under CentOS 6


Jephe Wu - http://linuxtechres.blogspot.com

Objective: use ldap to authenticate user with openldap server without TLS/SSL
environment: CentOS 6.1 64bit (ldap client), openldap server


Concepts:

Since RHEL 6/CentOS 6, the LDAP client uses sssd and nslcd by default to authenticate against the LDAP server, and it requires a TLS/SSL certificate during authentication, because otherwise the password would transit in plaintext.

Refer to http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/SSSD-Troubleshooting.html
----------
To perform authentication, SSSD requires that the communication channel be encrypted. This means that if sssd.conf is configured to connect over a standard protocol (ldap://), it attempts to encrypt the communication channel with Start TLS. If sssd.conf is configured to connect over a secure protocol (ldaps://), then SSSD uses SSL.

This means that the LDAP server must be configured to run in SSL or TLS. TLS must be enabled for the standard LDAP port (389) or SSL enabled on the secure LDAPS port (636). With either SSL or TLS, the LDAP server must also be configured with a valid certificate trust.
---------

Steps:

1. make sure the following packages are installed:
pam_ldap
nss-pam-ldapd  (thanks to Eric for pointing out that the name uses dashes, not underscores)

run rpm -qa | grep ldap to check

2. modify /etc/sysconfig/authconfig
change
FORCELEGACY=no
to
FORCELEGACY=yes
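The change can be scripted with sed. A sketch against a local stand-in for /etc/sysconfig/authconfig:

```shell
#!/bin/sh
# Flip FORCELEGACY from no to yes in a sample copy of the file.
f=authconfig.sample
echo 'FORCELEGACY=no' > "$f"
sed -i 's/^FORCELEGACY=no$/FORCELEGACY=yes/' "$f"
cat "$f"
```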

3. run the setup command in a terminal (e.g. via PuTTY)
choose 'Use LDAP' under User Information
choose 'Use Shadow Password', 'Use LDAP Authentication', 'Use Fingerprint reader' and 'Local authorization is sufficient', then Next
type in the Server and Base DN, for example:
ldap://ldap.jephe
dc=jephewu,dc=com

Note: it will stop the sssd daemon and disable it at startup by running 'chkconfig sssd off'

If you ran 'setup' before changing FORCELEGACY from no to yes, you have to change it and then run setup again to stop sssd, and also change /etc/pam.d/system-auth, replacing all pam_sss.so entries with pam_ldap.so.

Note: /etc/pam.d/system-auth is a symbolic link to /etc/pam.d/system-auth-ac

4. test it
ssh as root
# getent passwd # should show all ldap users such as jwu
# getent group
# id jwu
# su - jwu


5. Troubleshooting:

a. put debug_level = 9 in /etc/sssd/sssd.conf under domain/LDAP part
or directly run
# sssd -d4, then check /var/log/sssd/* when logging in.

b. nss_initgroups_ignoreusers
The LDAP server is queried even for users found in /etc/passwd.

To set up permissions correctly, the login environment needs to find all the groups that a user is a member of. If you have configured NSS to look up groups in LDAP (group: ldap in /etc/nsswitch.conf), then the NSS library will look up group information in LDAP for users in /etc/passwd too. To avoid this lookup for users whose group membership information is not stored in LDAP, add those users to the nss_initgroups_ignoreusers option in /etc/ldap.conf.
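For example, a line like the following in /etc/ldap.conf (the usernames are illustrative; list your local system accounts):

```
nss_initgroups_ignoreusers root,ldap,apache
```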

c. http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/SSSD-Troubleshooting.html

d. make sure you have started the nslcd daemon: # chkconfig nslcd on

e. ldap client, unable to login as root when ldap server is down.
put the following into /etc/pam.d/system-auth after pam_unix.so line

account     required      /lib/security/$ISA/pam_unix.so
account     sufficient   /lib/security/$ISA/pam_localuser.so

f. master-slave ldap sync
A syncrepl slave LDAP server doesn't sync with the master if the sync interval is more than 35 minutes on RHEL5

6. References:
http://www.server-world.info/en/note?os=CentOS_6&p=ldap&f=2
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6-Beta/html/Deployment_Guide/SSSD-Troubleshooting.html
http://www.server-world.info/en/note?os=CentOS_6&p=ldap&f=3

Troubleshooting ORA-03135: connection lost contact issue

Jephe Wu - http://linuxtechres.blogspot.com


Environment: Oracle database 10.2.0.3 RAC 64bit, websphere application server on RHEL4.5 use JDBC driver,Juniper firewalls (failover)
Problem: application server got error "java.sql.SQLException: ORA-03135: connection lost contact" and connection timeout from database
Objective: to fix the connection lost issue


Steps:
1. search the error message to find out the root cause in Oracle support and google:

Troubleshooting ORA-3135 Connection Lost Contact [ID 787354.1]
Resolving Problems with Connection Idle Timeout With Firewall [ID 257650.1]
Archive Log shipments failing in environment with Juniper firewall [ID 1075432.1]
Logs are not shipped to the physical standby database [ID 1130523.1]
ORA-03135: connection lost contact while shipping from Primary server to standby server [ID 739522.1]

According to Oracle doc ID 787354.1, this ORA-3135 might be caused by one of the following:

(1) Transparent Application Failover (TAF)
(2) Idle Connection Timeout
(3) Firewall Manipulating  Sqlnet Data.

Otherwise, it might need further analysis.
Also, according to the ORA-3135 connection lost contact tips at http://www.dba-oracle.com/t_ora_03135_connection_lost_contact.htm, it might be due to a customized profile.

sqlplus / as sysdba
sql> set line 32000
sql> select profile, resource_name, limit
from dba_profiles where resource_name IN ('IDLE_TIME', 'CONNECT_TIME');

note: some commands:
sql> create profile myprof limit connect_time 30;


From the application side, to avoid this idle connection timeout, refer to 'Putting a firewall between your AppServer and DBMS' - http://www.websphere-world.com/modules.php?name=News&file=article&sid=522

2. check firewall idle connection timeout and ALG settings
Checked Juniper firewall SQL ALG setting and idle connection timeout

# check alg
# check service ssh
# check service SQL*Net V2        

And some articles:

a. What is Idle timeouts when any service set - J-Net Community - http://forums.juniper.net/t5/ScreenOS-Firewalls-NOT-SRX/What-is-Idle-timeouts-when-Any-service-set/td-p/163

b. Viewing list of ALGs and disabling an ALG differs on screenos versions - http://kb.juniper.net/InfoCenter/index?page=content&id=KB13509
c. Issues with SQL database applications when traffic traverses SRX - http://kb.juniper.net/InfoCenter/index?page=content&id=KB21550
d. Increasing the session idle timeout of a particular service - http://kb.juniper.net/InfoCenter/index?page=content&id=KB4652&actp=LIST
e. How do I verify the NCP settings? (for SQL ALG) - http://kb.juniper.net/InfoCenter/index?page=content&id=KB9283
f. [SRX]what conditions are required for SQL ALG? - http://kb.juniper.net/InfoCenter/index?page=content&id=KB22418&cat=JUNOS&actp=LIST
g. Users losing connections when the IVE Active/Passive Cluster Fails over - http://kb.juniper.net/InfoCenter/index?page=content&id=KB8539&cat=SSL_VPN&actp=LIST

3. We have disabled the SQL ALG; the application server seemed busy at the time of the connection loss.
Our monitoring shows the firewall actually failed over to the other unit at the time of the connection loss and then switched back, so the application server got the "ORA-03135: connection lost contact" error twice and the application needed to be restarted.


       

Understanding how CUPS works


Jephe Wu - http://linuxtechres.blogspot.com

Environment: CentOS 6.2, cups
Objective: setting up a network printer (HP JetDirect at 192.168.0.100:9100)

How does it work? - http://www.cups.org/documentation.php/overview.html
Printing procedure diagram: - http://upload.wikimedia.org/wikipedia/commons/thumb/6/64/Cups_simple.svg/150px-Cups_simple.svg.png

According to http://en.wikipedia.org/wiki/CUPS:

user data (image or webpage) -> MIME type detection (/etc/cups/mime.types and /etc/cups/mime.convs) -> PostScript data -> PostScript prefilter (using the PPD) to add printer-specific options such as page range -> CUPS backend (if a PostScript printer) or conversion to CUPS raster format (by Foomatic or Ghostscript) -> CUPS backends

user data (image or webpage) -> MIME type detection (/etc/cups/mime.types and /etc/cups/mime.convs) -> directly to raster data -> final filter to printer-specific data -> CUPS backends

The filters (ls -l /usr/lib/cups/filter) produce a printer-specific format which is then sent to the backends: parallel, serial and USB ports, plus network backends via the IPP, JetDirect (AppSocket), Line Printer Daemon (LPD) and SMB protocols.


Steps:

1. find out the remote printer's IP address by one of the methods below:

a. on the printer's control panel
b. printing the printer's configuration or status page
c. for the MAC address, get it from the printer test page or from the bottom of the print server,
then you can manually configure the IP address in the DHCP server like this:

host hostname {
  hardware ethernet mac-address;
  fixed-address ip-address;
}

2. network protocols in CUPS - http://www.cups.org/documentation.php/network.html
In order to support network printers in CUPS, the following protocols are used:


AppSocket protocol (JetDirect protocol)
Note: it's generally the most reliable network protocol used for printers; it runs on port 9100 and uses the socket URI scheme

Internet Printing Protocol(IPP)
Note: IPP is the only protocol that CUPS supports natively, and it is supported by some network printers and print servers. However, since many printers do not implement IPP properly, only use it when the vendor actually documents official support for it. IPP printing normally happens over port 631 and uses the http and ipp URI schemes.

Line Printer Daemon (LPD) Protocol
Note: LPD is the original network printing protocol and is supported by many network printers. Due to limitations in the LPD protocol, we do not recommend using it if the printer or server supports one of the other protocols. LPD printing normally happens over port 515 and uses the lpd URI scheme.
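For reference, the device URI shapes for the three protocols look like this (the host name and queue name below are hypothetical; the queue name for lpd varies by device):

```
socket://192.168.0.100:9100        # AppSocket/JetDirect, port 9100
ipp://printer.example.com/ipp      # IPP, port 631
lpd://192.168.0.100/PASSTHRU       # LPD, port 515
```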

3. printer discovery
Whenever you view the administration web page or a list of supported device URIs, the snmp backend probes the local network(s) with Simple Network Management Protocol (SNMP) broadcasts to discover your printers.

The default community string is 'public'; if your printer does not use the default one, change it in
/etc/cups/snmp.conf
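For example, if the printer's community string were 'internal' (a hypothetical value), /etc/cups/snmp.conf would contain:

```
Address @LOCAL
Community internal
```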

run 'CUPS_DEBUG_LEVEL=2 /usr/lib/cups/backend/snmp 2>&1 | tee snmp.log' to find out what kind of printer you have at the address specified in /etc/cups/snmp.conf:
INFO: Using default SNMP Address @LOCAL
INFO: Using default SNMP Community public
DEBUG: Scanning for devices in "public" via "@LOCAL"...
....

If you don't see your printer listed, or the wrong information is listed, then you need to gather more information on the printer.

The easiest way to do this is to run the snmpwalk command:

snmpwalk -Cc -v 1 -c public 192.168.0.100 | tee snmpwalk.log

look for the following information:

HOST-RESOURCES-MIB::hrDeviceType.1 = OID: HOST-RESOURCES-TYPES::hrDevicePrinter
HOST-RESOURCES-MIB::hrDeviceDescr.1 = STRING: HP LaserJet 4000 Series

4. configure Axis and Linksys print servers - http://www.cups.org/documentation.php/network.html

5. How to configure a printer from the web interface http://localhost:631
After you choose an HP printer and its model and click 'Add Printer', CUPS will put the corresponding PPD file under the /etc/cups/ppd directory.

5.a From command line CLI - to configure a printer

# lpadmin -p printer1 -v usb:/dev/usb/lp0 \
  -P /usr/share/cups/model/Epson-LP-S3000-eplaser-jp.ppd -E 
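For the network printer from the objective, the same command with a socket URI would look like this sketch (the queue name hp4000 and the PPD path are hypothetical; substitute the PPD that matches your model):

```
# lpadmin -p hp4000 -v socket://192.168.0.100:9100 \
#   -P /etc/cups/ppd/hp4000.ppd -E
```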



5.b [root@devdb ppd]# lpstat -p -a
printer 2 is idle.  enabled since Sat 12 May 2012 02:52:19 PM EST
printer hp4300_7 is idle.  enabled since Sat 12 May 2012 03:10:23 PM EST
printer test is idle.  enabled since Sat 12 May 2012 02:50:21 PM EST
2 accepting requests since Sat 12 May 2012 02:52:19 PM EST
hp4300_7 accepting requests since Sat 12 May 2012 03:10:23 PM EST
test accepting requests since Sat 12 May 2012 02:50:21 PM EST

5.c enable cups printer queue.

# /usr/bin/cupsenable lps3000
# /usr/sbin/accept lps3000
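The counterparts, for draining a queue before maintenance, are cupsdisable and reject:

```
# cupsdisable lps3000   # stop printing queued jobs
# reject lps3000        # stop accepting new jobs
```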



6. References:
http://www.enterprisenetworkingplanet.com/windows/article.php/52231_3621876_2/Push-Windows-Printer-Drivers-with-CUPS.htm
http://en.wikipedia.org/wiki/CUPS

With Samba installed, users can address printers on remote Windows computers and generic PostScript drivers can be used for printing across the network.

http://en.wikipedia.org/wiki/File:CUPS-block-diagram.svg


Zabbix partition deleting accident


Jephe Wu - http://linuxtechres.blogspot.com

Env: Zabbix 1.8.5 on CentOS 6.2 64bit.
config: daily and monthly partitions have been configured on the zabbix database; refer to http://zabbixzone.com/zabbix/partitioning-tables/ and http://pastebin.com/nY8WG9Fr
accident: the minimum month value was changed from 12 to 2, to keep only 2 months of data for the trends and trends_uint tables.
e.g. the current month is Apr: partitions 201106-201201 were deleted, leaving 201202-201204 plus the future months up to 201304.
objective: undo the deletion

Concept: restore the whole data set from tape, export the trends and trends_uint tables with mysqldump, create the partitions again, then import the data back into the production database.


After the incident, the trends table partitioning became this:
CREATE TABLE `trends` (
  `itemid` bigint(20) unsigned NOT NULL DEFAULT '0',
  `clock` int(11) NOT NULL DEFAULT '0',
  `num` int(11) NOT NULL DEFAULT '0',
  `value_min` double(16,4) NOT NULL DEFAULT '0.0000',
  `value_avg` double(16,4) NOT NULL DEFAULT '0.0000',
  `value_max` double(16,4) NOT NULL DEFAULT '0.0000',
  PRIMARY KEY (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
/*!50100 PARTITION BY RANGE ( clock)
(PARTITION p201202 VALUES LESS THAN (1330520400) ENGINE = InnoDB,
 PARTITION p201203 VALUES LESS THAN (1333198800) ENGINE = InnoDB,
 PARTITION p201204 VALUES LESS THAN (1335794400) ENGINE = InnoDB,
 PARTITION p201205 VALUES LESS THAN (1338472800) ENGINE = InnoDB,
 PARTITION p201206 VALUES LESS THAN (1341064800) ENGINE = InnoDB,
 PARTITION p201207 VALUES LESS THAN (1343743200) ENGINE = InnoDB,
 PARTITION p201208 VALUES LESS THAN (1346421600) ENGINE = InnoDB,
 PARTITION p201209 VALUES LESS THAN (1349013600) ENGINE = InnoDB,
 PARTITION p201210 VALUES LESS THAN (1351688400) ENGINE = InnoDB,
 PARTITION p201211 VALUES LESS THAN (1354280400) ENGINE = InnoDB,
 PARTITION p201212 VALUES LESS THAN (1356958800) ENGINE = InnoDB,
 PARTITION p201301 VALUES LESS THAN (1359637200) ENGINE = InnoDB,
 PARTITION p201302 VALUES LESS THAN (1362056400) ENGINE = InnoDB,
 PARTITION p201303 VALUES LESS THAN (1364734800) ENGINE = InnoDB,
 PARTITION p201304 VALUES LESS THAN (1367330400) ENGINE = InnoDB) */


Steps: 


1. restore the whole backup data set from tape, then export the deleted partitions' data as .sql files:
  mysqldump -uzabbix -pzabbix zabbix --no-create-info --skip-add-locks -w "clock >=1309442400 and clock <1312120800" trends > trends_201107.sql
  mysqldump -uzabbix -pzabbix zabbix --no-create-info --skip-add-locks -w "clock >=1312120800 and clock <1314799200" trends > trends_201108.sql
  mysqldump -uzabbix -pzabbix zabbix --no-create-info --skip-add-locks -w "clock >=1314799200 and clock <1317391200" trends > trends_201109.sql
  ...
  mysqldump -uzabbix -pzabbix zabbix --no-create-info --skip-add-locks -w "clock >=1309442400 and clock <1312120800" trends_uint > trends_uint_201107.sql
  mysqldump -uzabbix -pzabbix zabbix --no-create-info --skip-add-locks -w "clock >=1312120800 and clock <1314799200" trends_uint > trends_uint_201108.sql
  mysqldump -uzabbix -pzabbix zabbix --no-create-info --skip-add-locks -w "clock >=1314799200 and clock <1317391200" trends_uint > trends_uint_201109.sql
  ...
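The repeated monthly dumps can also be generated in a loop. This is a sketch under two assumptions: GNU date is available, and the timezone matches the one the partition boundaries were computed in (the epochs here are UTC, so they differ from the article's local-time boundaries). With DRY_RUN=1 (the default) it only prints the commands:

```shell
#!/usr/bin/env bash
# Print (or run) one mysqldump per month for trends and trends_uint.
DRY_RUN=${DRY_RUN:-1}
for m in 2011-07 2011-08 2011-09; do
    start=$(date -u -d "${m}-01" +%s)          # first second of the month (UTC)
    end=$(date -u -d "${m}-01 +1 month" +%s)   # first second of the next month
    tag=$(echo "$m" | tr -d '-')               # 2011-07 -> 201107
    for t in trends trends_uint; do
        cmd="mysqldump -uzabbix -pzabbix zabbix --no-create-info --skip-add-locks \
-w \"clock >= ${start} and clock < ${end}\" ${t} > ${t}_${tag}.sql"
        if [ "${DRY_RUN}" = 1 ]; then echo "${cmd}"; else eval "${cmd}"; fi
    done
done
```

Review the printed commands first, then run with DRY_RUN=0 on the restored copy of the database.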

2. create the partitions again first (MySQL uses a temporary name during the rebuild and switches back to the trends.* name when it finishes)

echo "ALTER TABLE trends PARTITION BY RANGE( clock ) (
 PARTITION p201106 VALUES LESS THAN (1309442400) ENGINE = InnoDB,
 PARTITION p201107 VALUES LESS THAN (1312120800) ENGINE = InnoDB,
 PARTITION p201108 VALUES LESS THAN (1314799200) ENGINE = InnoDB,
 PARTITION p201109 VALUES LESS THAN (1317391200) ENGINE = InnoDB,
 PARTITION p201110 VALUES LESS THAN (1320066000) ENGINE = InnoDB,
 PARTITION p201111 VALUES LESS THAN (1322658000) ENGINE = InnoDB,
 PARTITION p201112 VALUES LESS THAN (1325336400) ENGINE = InnoDB,
 PARTITION p201201 VALUES LESS THAN (1328014800) ENGINE = InnoDB,
 PARTITION p201202 VALUES LESS THAN (1330520400) ENGINE = InnoDB,
 PARTITION p201203 VALUES LESS THAN (1333198800) ENGINE = InnoDB,
 PARTITION p201204 VALUES LESS THAN (1335794400) ENGINE = InnoDB,
 PARTITION p201205 VALUES LESS THAN (1338472800) ENGINE = InnoDB,
 PARTITION p201206 VALUES LESS THAN (1341064800) ENGINE = InnoDB,
 PARTITION p201207 VALUES LESS THAN (1343743200) ENGINE = InnoDB,
 PARTITION p201208 VALUES LESS THAN (1346421600) ENGINE = InnoDB,
 PARTITION p201209 VALUES LESS THAN (1349013600) ENGINE = InnoDB,
 PARTITION p201210 VALUES LESS THAN (1351688400) ENGINE = InnoDB,
 PARTITION p201211 VALUES LESS THAN (1354280400) ENGINE = InnoDB,
 PARTITION p201212 VALUES LESS THAN (1356958800) ENGINE = InnoDB,
 PARTITION p201301 VALUES LESS THAN (1359637200) ENGINE = InnoDB,
 PARTITION p201302 VALUES LESS THAN (1362056400) ENGINE = InnoDB,
 PARTITION p201303 VALUES LESS THAN (1364734800) ENGINE = InnoDB,
 PARTITION p201304 VALUES LESS THAN (1367330400) ENGINE = InnoDB )" | mysql -u zabbix -pzabbix zabbix


echo "ALTER TABLE trends_uint PARTITION BY RANGE( clock ) (
 PARTITION p201106 VALUES LESS THAN (1309442400) ENGINE = InnoDB,
 PARTITION p201107 VALUES LESS THAN (1312120800) ENGINE = InnoDB,
 PARTITION p201108 VALUES LESS THAN (1314799200) ENGINE = InnoDB,
 PARTITION p201109 VALUES LESS THAN (1317391200) ENGINE = InnoDB,
 PARTITION p201110 VALUES LESS THAN (1320066000) ENGINE = InnoDB,
 PARTITION p201111 VALUES LESS THAN (1322658000) ENGINE = InnoDB,
 PARTITION p201112 VALUES LESS THAN (1325336400) ENGINE = InnoDB,
 PARTITION p201201 VALUES LESS THAN (1328014800) ENGINE = InnoDB,
 PARTITION p201202 VALUES LESS THAN (1330520400) ENGINE = InnoDB,
 PARTITION p201203 VALUES LESS THAN (1333198800) ENGINE = InnoDB,
 PARTITION p201204 VALUES LESS THAN (1335794400) ENGINE = InnoDB,
 PARTITION p201205 VALUES LESS THAN (1338472800) ENGINE = InnoDB,
 PARTITION p201206 VALUES LESS THAN (1341064800) ENGINE = InnoDB,
 PARTITION p201207 VALUES LESS THAN (1343743200) ENGINE = InnoDB,
 PARTITION p201208 VALUES LESS THAN (1346421600) ENGINE = InnoDB,
 PARTITION p201209 VALUES LESS THAN (1349013600) ENGINE = InnoDB,
 PARTITION p201210 VALUES LESS THAN (1351688400) ENGINE = InnoDB,
 PARTITION p201211 VALUES LESS THAN (1354280400) ENGINE = InnoDB,
 PARTITION p201212 VALUES LESS THAN (1356958800) ENGINE = InnoDB,
 PARTITION p201301 VALUES LESS THAN (1359637200) ENGINE = InnoDB,
 PARTITION p201302 VALUES LESS THAN (1362056400) ENGINE = InnoDB,
 PARTITION p201303 VALUES LESS THAN (1364734800) ENGINE = InnoDB,
 PARTITION p201304 VALUES LESS THAN (1367330400) ENGINE = InnoDB )" | mysql -u zabbix -pzabbix zabbix


Alternatively, there is a better way: just split the existing partition as follows (less disruptive, since less data needs to be rearranged):


echo "ALTER TABLE trends REORGANIZE PARTITION p201202 INTO 
( PARTITION p201106 VALUES LESS THAN (1309442400) ENGINE = InnoDB,
 PARTITION p201107 VALUES LESS THAN (1312120800) ENGINE = InnoDB,
 PARTITION p201108 VALUES LESS THAN (1314799200) ENGINE = InnoDB,
 PARTITION p201109 VALUES LESS THAN (1317391200) ENGINE = InnoDB,
 PARTITION p201110 VALUES LESS THAN (1320066000) ENGINE = InnoDB,
 PARTITION p201111 VALUES LESS THAN (1322658000) ENGINE = InnoDB,
 PARTITION p201112 VALUES LESS THAN (1325336400) ENGINE = InnoDB,
 PARTITION p201201 VALUES LESS THAN (1328014800) ENGINE = InnoDB,
 PARTITION p201202 VALUES LESS THAN (1330520400) ENGINE = InnoDB)" | mysql -u zabbix -pzabbix zabbix


echo "ALTER TABLE trends_uint REORGANIZE PARTITION p201202 INTO 
( PARTITION p201106 VALUES LESS THAN (1309442400) ENGINE = InnoDB,
 PARTITION p201107 VALUES LESS THAN (1312120800) ENGINE = InnoDB,
 PARTITION p201108 VALUES LESS THAN (1314799200) ENGINE = InnoDB,
 PARTITION p201109 VALUES LESS THAN (1317391200) ENGINE = InnoDB,
 PARTITION p201110 VALUES LESS THAN (1320066000) ENGINE = InnoDB,
 PARTITION p201111 VALUES LESS THAN (1322658000) ENGINE = InnoDB,
 PARTITION p201112 VALUES LESS THAN (1325336400) ENGINE = InnoDB,
 PARTITION p201201 VALUES LESS THAN (1328014800) ENGINE = InnoDB,
 PARTITION p201202 VALUES LESS THAN (1330520400) ENGINE = InnoDB)" | mysql -u zabbix -pzabbix zabbix

If the partitioned table data is too large, you can reorganize or rebuild a partition to reduce its size as follows:

According to my test, after running

 'echo "ALTER TABLE trends_uint REORGANIZE PARTITION p201202 INTO ( PARTITION p201202 VALUES LESS THAN (1330520400) ENGINE = InnoDB)" | mysql -u zabbix -pzabbix zabbix'

it reduced the partition size to almost half.

You can also use REBUILD PARTITION just to reduce the size:

echo "ALTER TABLE trends_uint rebuild PARTITION p201203" | mysql -uzabbix -pzabbix zabbix


3. import the data back into the database, for example:
mysql -uzabbix -pzabbix zabbix < trends_201201.sql

4. references

mysql -uroot -ppassword -e "select * from mysql.proc"  
mysql -B -h localhost -u zabbix -pzabbix zabbix -e "CALL create_monthly_zabbix_partitions();" 
mysql -uroot -ppassword < createpartitionproc.sql 
mysql -uzabbix -pzabbix zabbix -e "show create table trends"
mysql> alter table trends_uint add partition ( partition p201201 VALUES LESS THAN (1328014800) ENGINE = InnoDB);
ERROR 1493 (HY000): VALUES LESS THAN value must be strictly increasing for each partition


http://zabbixzone.com/zabbix/history-and-trends/
http://zabbixzone.com/zabbix/partitioning-tables/
http://pastebin.com/nY8WG9Fr
http://zabbixzone.com/zabbix/making-your-frontend-faster/
http://zabbixzone.com/zabbix/backuping-only-the-zabbix-configuration/


mysqldump -u username -ppassword [--no-create-info] [--no-data]  database_name > dump.sql
mysqldump -u username -ppassword --all-databases > dump.sql

SELinux should be disabled, otherwise the web setup wizard is unable to connect to the database;
a CentOS 6.2 minimal install enables iptables and SELinux by default

strings trends_uint.frm # check partition info
strings trends_uint.par # check partition info

============
[root@db01 trends_partition_data]# mysql -uzabbix -pzabbix zabbix < trends_201201.sql
ERROR 1062 (23000) at line 42: Duplicate entry '100100000089189-1326373200' for key 'PRIMARY'
solution: delete the 201201 month partition data first, then import again.
---------
delete from trends where clock >= 1325336400 and clock < 1328014800;


[root@db01 etc]# mysql -uzabbix -pzabbix zabbix < trends_201201.sql 


echo "delete from trends where clock >= 1325336400 and clock < 1328014800" | mysql -uzabbix -pzabbix zabbix
echo "delete from trends_uint where clock >= 1325336400 and clock < 1328014800" | mysql -uzabbix -pzabbix zabbix
----------

/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
/*!40101 SET character_set_client = @saved_cs_client */;

If you encounter an error such as a character set mismatch, put the lines above at the front of the .sql data file before inserting, and remove the 'LOCK TABLES `trends` WRITE' statement as well.

5. some command output
[root@db01 etc]# mysql -uzabbix -pzabbix zabbix -e "alter table trends repair patition p201106"
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'patition p201106' at line 1
[root@db01 etc]# mysql -uzabbix -pzabbix zabbix -e "alter table trends repair partition p201106"
+---------------+--------+----------+---------------------------------------------------------+
| Table         | Op     | Msg_type | Msg_text                                                |
+---------------+--------+----------+---------------------------------------------------------+
| zabbix.trends | repair | note     | The storage engine for the table doesn't support repair |
+---------------+--------+----------+---------------------------------------------------------+
[root@db01 etc]# mysql -uzabbix -pzabbix zabbix -e "alter table trends analyze partition p201106"
+---------------+---------+----------+----------+
| Table         | Op      | Msg_type | Msg_text |
+---------------+---------+----------+----------+
| zabbix.trends | analyze | status   | OK       |
+---------------+---------+----------+----------+
[root@db01 etc]# mysql -uzabbix -pzabbix zabbix -e "alter table trends check partition p201106"
+---------------+-------+----------+----------+
| Table         | Op    | Msg_type | Msg_text |
+---------------+-------+----------+----------+
| zabbix.trends | check | status   | OK       |
+---------------+-------+----------+----------+
[root@db01 etc]# mysql -uzabbix -pzabbix zabbix -e "alter table trends optimize partition p201106"
+---------------+----------+----------+-------------------------------------------------------------------+
| Table         | Op       | Msg_type | Msg_text                                                          |
+---------------+----------+----------+-------------------------------------------------------------------+
| zabbix.trends | optimize | note     | Table does not support optimize, doing recreate + analyze instead |
| zabbix.trends | optimize | status   | OK                                                                |
+---------------+----------+----------+-------------------------------------------------------------------+

Refer to http://dev.mysql.com/doc/refman/5.1/en/alter-table-partition-operations.html for alter table partition commands

Only a single instance of any one of the following options can be used in a given ALTER TABLE statement: PARTITION BY, ADD PARTITION, DROP PARTITION, TRUNCATE PARTITION, REORGANIZE PARTITION, COALESCE PARTITION, ANALYZE PARTITION, CHECK PARTITION, OPTIMIZE PARTITION, REBUILD PARTITION, REMOVE PARTITIONING.
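For example (hypothetical statements against the trends table), this rule forbids combining two maintenance operations in one statement:

```
-- invalid: two partition options in one ALTER TABLE
ALTER TABLE trends ANALYZE PARTITION p201106, CHECK PARTITION p201106;
-- valid: run them as separate statements
ALTER TABLE trends ANALYZE PARTITION p201106;
ALTER TABLE trends CHECK PARTITION p201106;
```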

How to use rman backup to duplicate database and change db_name


Jephe Wu - http://linuxtechres.blogspot.com 


Environment: OL 5.7 32bit, Oracle database 11.2, sid=orcl
Objective: duplicate database on the same server then change db_name from orcl to devdb
concept: duplicate the line in /etc/oratab, create the devdb instance, then change everything in the password file, spfile/pfile, controlfile, redo logs and data files from orcl to devdb; only leave db_name in the pfile/spfile as orcl so rman can restore from the orcl database backup, then use nid to change db_name

Note: If you are using DUPLICATE to create a standby database, then the name must be the same as the primary database


Part I - use rman to duplicate a database on the same host

Steps:

1. use rman to backup source db 
rman target /
rman> backup database plus archivelog;
handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2012_05_04/o1_mf_annnn_TAG20120504T204647_7t7dwqjk_.bkp

tag=TAG20120504T204647 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 04-MAY-12

Starting Control File and SPFILE Autobackup at 04-MAY-12
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/autobackup/2012_05_04/o1_mf_s_782426808_7t7dws7b_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 04-MAY-12

record the data file names and redo log names:

sqlplus / as sysdba
sql> select name from v$datafile;
sql> select name from v$tempfile;
sql> select member from v$logfile;
sql> select name,open_mode,dbid from v$database;


2. vi /etc/oratab to add devdb line
orcl:/home/oracle/app/oracle/product/11.2.0/dbhome_1:N
devdb:/home/oracle/app/oracle/product/11.2.0/dbhome_1:N

. oraenv
orcl
cd $ORACLE_HOME/dbs
sqlplus / as sysdba
sql> create pfile='initdevdb.ora' from spfile;

3. set up devdb instance
. oraenv
devdb

cd $ORACLE_HOME/dbs
vi initdevdb.ora and change all occurrences of orcl to devdb, except for db_name=orcl; otherwise you will not be able to mount the database. This db_name will be changed after everything else is done.
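The manual edit can also be scripted. A minimal sketch, assuming the pfile parameters contain the literal string orcl in their values (the two-line sample pfile below is hypothetical, standing in for the real initdevdb.ora): rename everything first, then put db_name back.

```shell
#!/usr/bin/env bash
set -e
# Hypothetical two-line pfile standing in for the real initdevdb.ora:
cat > initdevdb.ora <<'EOF'
*.db_name='orcl'
*.control_files='/home/oracle/app/oracle/oradata/orcl/control01.ctl'
EOF
# rename orcl -> devdb everywhere, then put db_name back to orcl
# (nid changes it for real only in step 6)
sed "s/orcl/devdb/g" initdevdb.ora > initdevdb.ora.new
sed -i "s/db_name='devdb'/db_name='orcl'/" initdevdb.ora.new
mv initdevdb.ora.new initdevdb.ora
cat initdevdb.ora
```

After this, all paths point at devdb while db_name is still orcl, which is exactly the state the restore in step 5 needs.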


Note: you can also use the following parameters; once done, remove them.

db_file_name_convert = (PROD,devdb)
log_file_name_convert=(PROD,devdb)



then mkdir -p the changed paths

You must verify that all paths are accessible on the host where the database is being duplicated.

example of destination database host pfile parameters:
------------

DB_NAME=devdb
CONTROL_FILES=(/dup/oracle/oradata/devdb/control01.ctl,
               /dup/oracle/oradata/devdb/control02.ctl)
DB_FILE_NAME_CONVERT=(/oracle/oradata/prod/,/oracle/oradata/devdb/)
LOG_FILE_NAME_CONVERT=(/oracle/oradata/prod/redo,/oracle/oradata/devdb/redo)
----------



sqlplus / as sysdba
sql> startup nomount;
sql> create spfile from pfile;
sql> show parameter spfile;
sql> exit;


rman target /
rman> restore controlfile from '/home/oracle/app/oracle/flash_recovery_area/ORCL/autobackup/2012_05_04/o1_mf_s_782426808_7t7dws7b_.bkp';


4. set new logfile names
. oraenv
devdb
sqlplus / as sysdba
sql> alter database mount;
sql> alter database rename file '/home/oracle/app/oracle/oradata/orcl/redo01.log' to '/home/oracle/app/oracle/oradata/devdb/redo01.log';
sql> alter database rename file '/home/oracle/app/oracle/oradata/orcl/redo02.log' to '/home/oracle/app/oracle/oradata/devdb/redo02.log';
sql> alter database rename file '/home/oracle/app/oracle/oradata/orcl/redo03.log' to '/home/oracle/app/oracle/oradata/devdb/redo03.log';

5. rman restore and recover
. oraenv
devdb

rman target /
rman>
run { set newname for datafile 1 to '/home/oracle/app/oracle/oradata/devdb/system01.dbf';
set newname for datafile 2 to '/home/oracle/app/oracle/oradata/devdb/sysaux01.dbf';
set newname for datafile 3 to '/home/oracle/app/oracle/oradata/devdb/undotbs01.dbf';
set newname for datafile 4 to '/home/oracle/app/oracle/oradata/devdb/users01.dbf';
set newname for datafile 5 to '/home/oracle/app/oracle/oradata/devdb/example01.dbf';
restore database;
switch database to copy;
recover database;
alter database open resetlogs;
}

6. change db_name and dbid

. oraenv
devdb
sqlplus / as sysdba

sql> set line 32000
sql> select name from v$database;
sql> shutdown immediate;
sql> startup mount;
sql> exit


$ cd $ORACLE_HOME/dbs
$ orapwd file=orapwdevdb
note: the filename uses the prefix orapw, not orapwd


$ nid TARGET=SYS/oracle dbname=devdb
Connected to server version 11.2.0

Control Files in database:
    /home/oracle/app/oracle/oradata/devdb/control01.ctl
    /home/oracle/app/oracle/flash_recovery_area/devdb/control02.ctl

Change database ID and database name ORCL to DEVDB? (Y/[N]) => Y

Proceeding with operation
Changing database ID from 1294051154 to 687139782
Changing database name from ORCL to DEVDB
    Control File /home/oracle/app/oracle/oradata/devdb/control01.ctl - modified
    Control File /home/oracle/app/oracle/flash_recovery_area/devdb/control02.ctl - modified
    Datafile /home/oracle/app/oracle/oradata/devdb/system01.db - dbid changed, wrote new name
    Datafile /home/oracle/app/oracle/oradata/devdb/sysaux01.db - dbid changed, wrote new name
    Datafile /home/oracle/app/oracle/oradata/devdb/undotbs01.db - dbid changed, wrote new name
    Datafile /home/oracle/app/oracle/oradata/devdb/users01.db - dbid changed, wrote new name
    Datafile /home/oracle/app/oracle/oradata/devdb/example01.db - dbid changed, wrote new name
    Control File /home/oracle/app/oracle/oradata/devdb/control01.ctl - dbid changed, wrote new name
    Control File /home/oracle/app/oracle/flash_recovery_area/devdb/control02.ctl - dbid changed, wrote new name
    Instance shut down

Database name changed to DEVDB.
Modify parameter file and generate a new password file before restarting.
Database ID for database DEVDB changed to 687139782.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.


7. after change, restart db
. oraenv
devdb

sqlplus / as sysdba
sql> startup nomount;
sql> alter system set db_name=devdb scope=spfile;
sql> startup mount;
sql> alter database open resetlogs;

sql> select name,open_mode from v$database;

If you used db_file_name_convert and log_file_name_convert, you can now reset them:

alter system reset db_file_name_convert scope=spfile sid='*';
alter system reset log_file_name_convert scope=spfile sid='*';


8. verify
sqlplus / as sysdba
sql> select member from v$logfile;
sql> select name from v$tempfile;
sql> select name from v$datafile;
sql> alter tablespace temp add tempfile '/home/oracle/app/oracle/oradata/devdb/temp01.dbf' size 100m autoextend on next 10m;
sql> alter tablespace temp drop tempfile '/home/oracle/app/oracle/oradata/orcl/temp01.dbf';
sql> quit;

9. listener.ora and tnsnames.ora
Add entries for the devdb database in listener.ora and tnsnames.ora if needed.


According to
http://dba.stackexchange.com/questions/338/can-i-change-the-sid-of-an-oracle-database, change these additional items after the db_name change as follows:

If only the database name is being changed, then resetlogs is not required:
1. startup database in mount mode
      shutdown immediate
      startup mount
2. run nid to change database name:
      nid target=sys/syspassword@dbtns dbname=newname setname=YES
3. shutdown and start database in mount mode:
      shutdown immediate
      startup mount
4. change db_name in spfile (or in pfile editing the file):
      alter system set db_name=newname scope=spfile;
5. recreate password file:
      orapwd file=orapwnewname password=syspassword
6. startup the database
      startup
7. post rename steps:
      change SID in listener.ora
      correct tnsnames.ora
      remove old trace directories
      change /etc/oratab (UNIX) or rename windows service using oradim



10. references
 a. Rman backup and restore in 11gR2
http://docs.oracle.com/cd/E11882_01/backup.112/e10642.pdf


b.  http://dba-oracle.com/t_rman_clone_copy_database.htm
c. change db_name - http://docs.oracle.com/cd/B10500_01/server.920/a96652/ch14.htm

Part II - duplicate to a remote host without connecting to the target and catalog

Refer to

RMAN 11GR2 : DUPLICATE WITHOUT CONNECTING TO TARGET DATABASE [ID 874352.1]
According to this support note, you might need to use a pfile as follows for the duplicate database:

Method 1: 
db_name=v11dup
db_unique_name=v11dup
db_recovery_file_dest='/recovery_area'
control_files='/oradata/v11dup/control01.ctl'
audit_file_dest='/oradata/admin/v11dup'
compatible='11.2.0.0.0'
db_recovery_file_dest_size=100G
diagnostic_dest='/oradata/admin/v11dup'

If files are to be restored to a different location:

db_file_name_convert=('<old path>' , '<new path>')
log_file_name_convert=('<old path>' , '<new path>')


Note:
If a database must be accessed from another host, you must set up a password file and Oracle Net connectivity:
1. Create an Oracle Password File for the Auxiliary Instance
2. Ensure Oracle Net Connectivity to the Auxiliary Instance
The auxiliary instance must be accessible through Oracle Net. Before proceeding, start SQL*Plus to ensure that you can establish a connection to the auxiliary instance. Note that you must connect to the auxiliary instance with SYSDBA privileges, so a password file must exist.



In this example we use only the auxiliary channel to duplicate, without connecting to the target or catalog: we copy the FRA database backup to the destination host and put it under an arbitrary directory, /home/oracle/DEVDB. This is called backup-based duplication without a target connection.


Steps:
Backup-based duplication:
1. if you don't connect to the target or catalog and only connect to the auxiliary host for duplication, you must put the rman database backup files on the destination host in the location specified by the BACKUP LOCATION option; that location must contain sufficient backup sets, image copies and archived logs to restore all of the files being duplicated and recover them to the desired point in time. It can be any directory.

2. Preparation on source host
a. on source: back up the database, then copy the backup to the destination.
rman> backup database plus archivelog;
generate a pfile from the spfile if necessary, then copy it to the auxiliary host; you can put it in any folder.


b. on destination:
modify the pfile as follows; use db_file_name_convert and log_file_name_convert in the pfile if needed.

DB_NAME=testdb
CONTROL_FILES=(/oracle/oradata/testdb/control01.ctl,
               /oracle/oradata/testdb/control02.ctl)
DB_FILE_NAME_CONVERT=(prod,testdb)
LOG_FILE_NAME_CONVERT=(prod,testdb)

startup nomount

create the directories for testdb accordingly, e.g. 'mkdir -p /oracle/oradata/testdb'.

c. use rman to duplicate the database
run rman on the auxiliary host testdb as follows:

[oracle@testdb dbs]$ rman 


Recovery Manager: Release 11.2.0.1.0 - Production on Sun May 6 01:54:11 2012


Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.


Note: you don't have to connect to the target host (the source host)
target database Password: 
connected to target database: DEVDB (DBID=687139782)


RMAN> connect auxiliary /


connected to auxiliary database: TESTDB (not mounted)


RMAN> duplicate database devdb to testdb backup location '/home/oracle/DEVDB';


note: on source host, FRA directory is at '/home/oracle/app/oracle/flash_recovery_area_devdb/DEVDB/';


note: since you have copied the backup over to testdb, you don't have to connect to the target or use the same directory as on the target; you can actually copy the backup to any directory on the auxiliary host.


Starting Duplicate Db at 06-MAY-12
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=20 device type=DISK


contents of Memory Script:
{
   sql clone "create spfile from memory";
}
executing Memory Script


sql statement: create spfile from memory


contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script


Oracle instance shut down


connected to auxiliary database (not started)
Oracle instance started


Total System Global Area     217157632 bytes


Fixed Size                     2211928 bytes
Variable Size                159387560 bytes
Database Buffers              50331648 bytes
Redo Buffers                   5226496 bytes


contents of Memory Script:
{
   sql clone "alter system set  db_name = 
 ''DEVDB'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name = 
 ''TESTDB'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile;
   alter clone database mount;
}
executing Memory Script


sql statement: alter system set  db_name =  ''DEVDB'' comment= ''Modified by RMAN duplicate'' scope=spfile


sql statement: alter system set  db_unique_name =  ''TESTDB'' comment= ''Modified by RMAN duplicate'' scope=spfile


Oracle instance shut down


Oracle instance started


Total System Global Area     217157632 bytes


Fixed Size                     2211928 bytes
Variable Size                159387560 bytes
Database Buffers              50331648 bytes
Redo Buffers                   5226496 bytes


Starting restore at 06-MAY-12
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=18 device type=DISK


channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area_devdb/DEVDB/autobackup/2012_05_06/o1_mf_s_782529031_7tbjq7ml_.bkp
channel ORA_AUX_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area_devdb/DEVDB/autobackup/2012_05_06/o1_mf_s_782529031_7tbjq7ml_.bkp tag=TAG20120506T011031
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/home/oracle/oradata/testdb/control01.ctl
output file name=/home/oracle/oradata/testdb/control02.ctl
Finished restore at 06-MAY-12


database mounted


contents of Memory Script:
{
   set until scn  1815156;
   set newname for datafile  1 to 
 "/home/oracle/app/oracle/oradata/testdb/system01.dbf";
   set newname for datafile  2 to 
 "/home/oracle/app/oracle/oradata/testdb/sysaux01.dbf";
   set newname for datafile  3 to 
 "/home/oracle/app/oracle/oradata/testdb/undotbs01.dbf";
   set newname for datafile  4 to 
 "/home/oracle/app/oracle/oradata/testdb/users01.dbf";
   set newname for datafile  5 to 
 "/home/oracle/app/oracle/oradata/testdb/example01.dbf";
   restore
   clone database
   ;
}
executing Memory Script


executing command: SET until clause


executing command: SET NEWNAME


executing command: SET NEWNAME


executing command: SET NEWNAME


executing command: SET NEWNAME


executing command: SET NEWNAME


Starting restore at 06-MAY-12
using channel ORA_AUX_DISK_1


channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/testdb/system01.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /home/oracle/app/oracle/oradata/testdb/sysaux01.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /home/oracle/app/oracle/oradata/testdb/undotbs01.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /home/oracle/app/oracle/oradata/testdb/users01.dbf
channel ORA_AUX_DISK_1: restoring datafile 00005 to /home/oracle/app/oracle/oradata/testdb/example01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area_devdb/DEVDB/backupset/2012_05_06/o1_mf_nnndf_TAG20120506T010904_7tbjnjvm_.bkp
channel ORA_AUX_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area_devdb/DEVDB/backupset/2012_05_06/o1_mf_nnndf_TAG20120506T010904_7tbjnjvm_.bkp tag=TAG20120506T010904
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:01:25
Finished restore at 06-MAY-12


contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script


datafile 1 switched to datafile copy
input datafile copy RECID=6 STAMP=782531765 file name=/home/oracle/app/oracle/oradata/testdb/system01.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=7 STAMP=782531765 file name=/home/oracle/app/oracle/oradata/testdb/sysaux01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=8 STAMP=782531765 file name=/home/oracle/app/oracle/oradata/testdb/undotbs01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=9 STAMP=782531765 file name=/home/oracle/app/oracle/oradata/testdb/users01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=10 STAMP=782531765 file name=/home/oracle/app/oracle/oradata/testdb/example01.dbf


contents of Memory Script:
{
   set until scn  1815156;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script


executing command: SET until clause


Starting recover at 06-MAY-12
using channel ORA_AUX_DISK_1


starting media recovery


archived log for thread 1 with sequence 9 is already on disk as file /home/oracle/app/oracle/flash_recovery_area_devdb/DEVDB/archivelog/2012_05_06/o1_mf_1_9_7tbjq5q2_.arc
archived log file name=/home/oracle/app/oracle/flash_recovery_area_devdb/DEVDB/archivelog/2012_05_06/o1_mf_1_9_7tbjq5q2_.arc thread=1 sequence=9
media recovery complete, elapsed time: 00:00:00
Finished recover at 06-MAY-12


contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  db_name = 
 ''TESTDB'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script


database dismounted
Oracle instance shut down


connected to auxiliary database (not started)
Oracle instance started


Total System Global Area     217157632 bytes


Fixed Size                     2211928 bytes
Variable Size                159387560 bytes
Database Buffers              50331648 bytes
Redo Buffers                   5226496 bytes


sql statement: alter system set  db_name =  ''TESTDB'' comment= ''Reset to original value by RMAN'' scope=spfile


sql statement: alter system reset  db_unique_name scope=spfile


Oracle instance shut down


connected to auxiliary database (not started)
Oracle instance started


Total System Global Area     217157632 bytes


Fixed Size                     2211928 bytes
Variable Size                159387560 bytes
Database Buffers              50331648 bytes
Redo Buffers                   5226496 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "TESTDB" RESETLOGS ARCHIVELOG 
  MAXLOGFILES     16
  MAXLOGMEMBERS      3
  MAXDATAFILES      100
  MAXINSTANCES     8
  MAXLOGHISTORY      292
 LOGFILE
  GROUP  1 ( '/home/oracle/app/oracle/oradata/testdb/redo01.log' ) SIZE 50 M  REUSE,
  GROUP  2 ( '/home/oracle/app/oracle/oradata/testdb/redo02.log' ) SIZE 50 M  REUSE,
  GROUP  3 ( '/home/oracle/app/oracle/oradata/testdb/redo03.log' ) SIZE 50 M  REUSE
 DATAFILE
  '/home/oracle/app/oracle/oradata/testdb/system01.dbf'
 CHARACTER SET AL32UTF8




contents of Memory Script:
{
   set newname for tempfile  1 to 
 "/home/oracle/app/oracle/oradata/testdb/temp01.dbf";
   switch clone tempfile all;
   catalog clone datafilecopy  "/home/oracle/app/oracle/oradata/testdb/sysaux01.dbf", 
 "/home/oracle/app/oracle/oradata/testdb/undotbs01.dbf", 
 "/home/oracle/app/oracle/oradata/testdb/users01.dbf", 
 "/home/oracle/app/oracle/oradata/testdb/example01.dbf";
   switch clone datafile all;
}
executing Memory Script


executing command: SET NEWNAME


renamed tempfile 1 to /home/oracle/app/oracle/oradata/testdb/temp01.dbf in control file


cataloged datafile copy
datafile copy file name=/home/oracle/app/oracle/oradata/testdb/sysaux01.dbf RECID=1 STAMP=782531775
cataloged datafile copy
datafile copy file name=/home/oracle/app/oracle/oradata/testdb/undotbs01.dbf RECID=2 STAMP=782531775
cataloged datafile copy
datafile copy file name=/home/oracle/app/oracle/oradata/testdb/users01.dbf RECID=3 STAMP=782531775
cataloged datafile copy
datafile copy file name=/home/oracle/app/oracle/oradata/testdb/example01.dbf RECID=4 STAMP=782531775


datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=782531775 file name=/home/oracle/app/oracle/oradata/testdb/sysaux01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=782531775 file name=/home/oracle/app/oracle/oradata/testdb/undotbs01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=782531775 file name=/home/oracle/app/oracle/oradata/testdb/users01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=782531775 file name=/home/oracle/app/oracle/oradata/testdb/example01.dbf


contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script


database opened
Finished Duplicate Db at 06-MAY-12


Method 2. Use SET NEWNAME FOR DATAFILE/TEMPFILE/TABLESPACE/DATABASE etc. in 11gR2
Refer to http://aprakash.wordpress.com/2012/01/11/set-newname-command-made-easier-in-11gr2/
$ rman
RMAN> connect auxiliary /
RMAN> run {
set newname for database to '/oradata/devdb/%b';
set newname for tempfile 1 to '/oradata/devdb/%b';
duplicate database 'prod' dbid 12345678 to orcl
backup location '/rman/backup/orcl'
logfile
group 1 ('/path','/path2') size 50m reuse,
group 2 ('/path','/path2') size 50m reuse;
}


Note: 12345678 is the DBID of the source database. 
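If you are not sure of the source DBID, you can query it on the source database while it is open (a quick sketch; the DBID is also displayed in the banner when RMAN connects to the target):

```sql
-- run on the source database
SQL> SELECT dbid, name FROM v$database;
```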


or from http://docs.oracle.com/cd/E11882_01/backup.112/e10642.pdf



RUN
{
  SET NEWNAME FOR DATAFILE 1 TO '/oradata1/system01.dbf'; 
  SET NEWNAME FOR DATAFILE 2 TO '/oradata2/sysaux01.dbf';
  SET NEWNAME FOR DATAFILE 3 TO '/oradata3/undotbs01.dbf';
  SET NEWNAME FOR DATAFILE 4 TO '/oradata4/users01.dbf'; 
  SET NEWNAME FOR DATAFILE 5 TO '/oradata5/users02.dbf';
  SET NEWNAME FOR TEMPFILE 1 TO '/oradatat/temp01.dbf'; 
  DUPLICATE TARGET DATABASE TO dupdb
    SKIP TABLESPACE tools
    LOGFILE
      GROUP 1 ('/duplogs/redo01a.log', 
               '/duplogs/redo01b.log') SIZE 4M REUSE, 
      GROUP 2 ('/duplogs/redo02a.log', 
               '/duplogs/redo02b.log') SIZE 4M REUSE;
}


or use CONFIGURE AUXNAME to rename the datafiles; the subsequent rman run {} block will then use these settings when renaming datafiles.
RMAN>

  CONFIGURE AUXNAME FOR DATAFILE 1 TO '/oradata1/system01.dbf';
  CONFIGURE AUXNAME FOR DATAFILE 2 TO '/oradata2/sysaux01.dbf';
  CONFIGURE AUXNAME FOR DATAFILE 3 TO '/oradata3/undotbs01.dbf';
  CONFIGURE AUXNAME FOR DATAFILE 4 TO '/oradata4/users01.dbf';
  CONFIGURE AUXNAME FOR DATAFILE 5 TO '/oradata5/users02.dbf';


run 
{
SET NEWNAME FOR TEMPFILE 1 TO '/oradatat/temp01.dbf'; 
DUPLICATE TARGET DATABASE
  TO dupdb
  SKIP TABLESPACE tools
  LOGFILE
    GROUP 1 ('/duplogs/redo01a.log',
             '/duplogs/redo01b.log') SIZE 4M REUSE,
    GROUP 2 ('/duplogs/redo02a.log',
             '/duplogs/redo02b.log') SIZE 4M REUSE;
}



Method 3. If you can afford to shut down the source database, here is a faster way:
refer to http://www.dba-oracle.com/oracle_tips_db_copy.htm
a. on the source database:
sqlplus / as sysdba
SQL> alter database backup controlfile to trace;
SQL> shutdown immediate;
copy the datafiles, the pfile and the control file trace backup to the destination; you can change datafile names and paths if needed.
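As a side note, the trace script produced by `alter database backup controlfile to trace` is written under the user dump destination; one way to locate it before shutting down (the output file name below is just an example):

```sql
-- on the source database, before shutdown
SQL> show parameter user_dump_dest
-- look for the most recent *.trc file in that directory,
-- or name the output file explicitly:
SQL> alter database backup controlfile to trace as '/tmp/tracefile.sql';
```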


b. on the destination database:
modify the trace file accordingly, e.g. change


CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS
to
CREATE CONTROLFILE SET DATABASE "DEVDB" RESETLOGS

and remove the
"recover database" and "alter database open"
lines


rename the datafile paths in the CREATE CONTROLFILE statement if necessary


Create the bdump, udump and cdump directories


run the trace file to create the control file on the new host:
sqlplus / as sysdba @tracefile.sql


enable archivelog mode on the new server
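Enabling archivelog mode requires the database to be in mount state; a minimal sketch of the commands on the new server:

```sql
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
```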


4. error messages:
RMAN-05541: no archived logs found in target database