Understanding qmail SMTP relay

Environment: datacenter network 10.0.100.0/24, qmail server IP address 10.0.100.16
Objective: configure SMTP relay for other IPs, both hosts on the same network segment and the host 10.0.104.4 on another segment


How qmail SMTP relay works:

1. Check the configuration files /var/qmail/control/rcpthosts, /var/qmail/control/smtproutes and /etc/tcp.smtp. The file actually consulted is /etc/tcp.smtp.cdb; you must regenerate this cdb file after changing tcp.smtp, and make sure /etc/tcp.smtp.cdb is world readable. This is very important: otherwise qmail ignores the file and refuses relay from every host except localhost.

Important: changes to rcpthosts, smtproutes and tcp.smtp (after editing tcp.smtp, run the tcprules command below to regenerate tcp.smtp.cdb) take effect without restarting qmail; the files are read on the fly.

The commands are:
# cd /etc/

# tcprules tcp.smtp.cdb tcp.smtp.temp < tcp.smtp
# chmod go+r tcp.smtp.cdb
2. qmail allows SMTP relay from other hosts in the following cases:
a. the host appears in the /etc/tcp.smtp file and the corresponding /etc/tcp.smtp.cdb file has been generated.
for example:
127.:allow,RELAYCLIENT=""
10.0.100.:allow,TCPREMOTEHOST="",TCPREMOTEIP="",RELAYCLIENT=""
10.0.104.4:allow,TCPREMOTEHOST="",TCPREMOTEIP="",RELAYCLIENT=""
:allow
note: this allows the 10.0.100.0/24 segment and the IP 10.0.104.4 to relay email through this qmail server.
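
To check which rule a given client IP would match without sending any mail, you can use tcprulescheck from the same ucspi-tcp package; a minimal sketch (the client IPs below are only examples):

# tcprulescheck reads TCPREMOTEIP from the environment and prints the matching rule
TCPREMOTEIP=10.0.100.50 tcprulescheck /etc/tcp.smtp.cdb    # should match the 10.0.100. rule and set RELAYCLIENT
TCPREMOTEIP=10.0.200.9 tcprulescheck /etc/tcp.smtp.cdb     # should fall through to the default ':allow' rule (no relay)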
b. if the SMTP client IP does not appear in the tcp.smtp file above, the alternative is to add the recipient domain to /var/qmail/control/rcpthosts.
for example: the following is the content of the file /var/qmail/control/rcpthosts
 
domain1.com
domain2.com

note: when you add a domain to this file, qmail picks up the new configuration on the fly. Even though the SMTP client IP is not in the database file /etc/tcp.smtp.cdb, mail is accepted as long as the domain part of the recipient address appears in rcpthosts.
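
A quick way to test whether a recipient domain is accepted is a manual SMTP dialogue; a minimal sketch using nc (assuming nc is installed; the sender and recipient addresses are only examples):

# a 250 reply to RCPT TO means the message would be accepted; qmail answers 553 for a rejected relay attempt
printf 'HELO client.example\r\nMAIL FROM:<test@example.org>\r\nRCPT TO:<someone@domain1.com>\r\nQUIT\r\n' | nc 10.0.100.16 25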

c. once relay is allowed, you can decide which mail gateway a given domain is forwarded to; this is configured in /var/qmail/control/smtproutes.

for example:

:10.0.100.1
domain1.com:10.0.100.14
domain2.com:10.0.100.15

note: mail for each listed domain is sent to the corresponding IP; mail for any other domain is sent to 10.0.100.1 (the default route on the first line).


How to stop/start qmail:

To stop qmail, edit /etc/inittab, comment out the svscanboot line and run 'init q' to apply the change; to start it again, uncomment the line and run 'init q' once more.
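
The same edit can be scripted; a minimal sketch, assuming the svscanboot entry is the only line in /etc/inittab containing 'svscanboot':

# stop qmail: comment out the svscanboot line, then tell init to re-read inittab
sed -i 's/^\([^#].*svscanboot.*\)$/#\1/' /etc/inittab
init q
# start qmail again: uncomment the line and re-read inittab
sed -i 's/^#\(.*svscanboot.*\)$/\1/' /etc/inittab
init q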


 
 

How to configure a new Cisco router or switch

Environment: Cisco Catalyst 2960G switch, 48 ports
Objective: configure it as a manageable switch and use port 48 as a monitoring (SPAN) port for a Snort server.


Steps:

1. Use PuTTY on COM1 to connect to the switch console, then power on the switch.
2. Wait for the switch to finish booting; it takes some time.
3. Follow the default configuration wizard.
4. Specify vlan1 as the management interface and assign its IP address, etc.
5. Specify the SSH username and password:

username root secret yourpassword

6. configuring the rest
no enable password
no ip domain-lookup

ip domain-name yourdomain.com
crypto key generate rsa (enter 1024 when asked for the modulus size)
note: the two lines above enable the SSH server function

clock timezone SGT 8 (a global configuration command, entered under conf t)
service password-encryption

no ip http server
no ip http secure-server
snmp-server community public RO
line con 0
line vty 0 4
  login local
  transport input ssh
line vty 5 15
  login local  (using local authentication, not remote authentication server)
  transport input ssh  (only accept ssh incoming connection)
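
Once the vty lines accept SSH with local login, you can verify access from a Linux host; a minimal sketch (the management IP matches the example configuration in step 13; adjust to your own):

# log in with the local username created in step 5
ssh -l root 10.0.0.100
# on the switch, 'show ssh' and 'show line' (see step 12) confirm the SSH session and line status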

7. configuring port monitoring (SPAN)
monitor session 1 source vlan 1
monitor session 1 destination interface GigabitEthernet0/48

8. configuring ntp servers
ntp server 1.2.3.4

note: if you don't configure an NTP server, the switch loses its clock setting when it is powered off and on again. If an NTP server is available, it gets the correct time from the NTP server immediately after power-on.

9. setting clock
clock set 14:28:00 20 OCT 2009 (a privileged EXEC command, not entered under conf t)

10. save
wr

11. disable logging message
no logging console
no logging monitor  (This command disables logging for terminal lines other than the system console)
logging buffered 16384
logging trap notifications  (This command provides notification (level 5) messaging to the specified syslog server. The default logging level for all devices (console, monitor, buffer, and traps) is debugging (level 7). Leaving the trap logging level at 7 produces many extraneous messages that are of little or no concern to the health of the network. It is recommended that the default logging level for traps be set to 5. )

note:
You can synchronise the logging messages with your command prompt as follows:

Router(config)# line con 0
Router(config-line)# logging synchronous
Router(config)# line aux 0
Router(config-line)# logging synchronous
Router(config)# line vty 0 4
Router(config-line)# logging synchronous


12. some other useful commands
show clock
show monitor
show ssh
show line

hostname 48PortCiscoSwitch
ip name-server 1.1.1.1

13. example configuration
-----------------------------
version 12.2
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec

service password-encryption
!
hostname 48PortCiscoSwitch
!
boot-start-marker
boot-end-marker
!
enable secret 5 $1$7zZ5$91N7FPR68YbLyRO4NE3jr/
!
username root secret 5 $1$2IEN$wmDzCnXsatjHtaPMrss4e.
no aaa new-model
clock timezone SGT 8
system mtu routing 1500
ip subnet-zero
!
no ip domain-lookup
ip domain-name jephe.com
!
spanning-tree mode pvst
spanning-tree extend system-id
!
vlan internal allocation policy ascending

interface GigabitEthernet0/1
!
interface GigabitEthernet0/2
.......
interface GigabitEthernet0/48
!
interface Vlan1
 ip address 10.0.0.100 255.255.255.0
 no ip route-cache
!
ip default-gateway 10.0.0.1
no ip http server
no ip http secure-server
snmp-server community public RO
!
control-plane
!
!
line con 0
 logging synchronous
line vty 0 4
 logging synchronous
 login local
 transport input ssh
line vty 5 15
 logging synchronous
 login local
 transport input ssh
!
!
monitor session 1 source vlan 1
monitor session 1 destination interface Gi0/48
ntp server 10.0.0.1

end
----------------------------

Appendix:
 Cisco routers/Switches have two privilege levels:

  • User EXEC mode—privilege level 1
  • Privileged EXEC mode—privilege level 15
When you log in to a Cisco router under the default configuration, you're in user EXEC mode (level 1). From this mode, you have access to some information about the router, such as the status of interfaces, and you can view routes in the routing table. However, you can't make any changes or view the running configuration file.
Because of these limitations, most Cisco router users immediately type enable to get out of user EXEC mode. By default, typing enable takes you to level 15, privileged EXEC mode. In the Cisco IOS, this level is equivalent to having root privileges in UNIX or administrator privileges in Windows. In other words, you have full access to the router.
For networks maintained by just a few people, everyone typically has the password to get to privileged mode.

Cisco IOS provides for 16 different privilege levels ranging from 0 to 15.

The Cisco IOS comes with two predefined user levels.
User EXEC mode runs at privilege level 1, and “enabled” mode (privileged EXEC mode) runs at level 15.

Setting up 64bit Oracle 11gR1 on 64bit OEL5.4 with ramfs and hugepage configuration

Jephe Wu - http://linuxtechres.blogspot.com


Environment:  HP Proliant Server DL360P6, 16G RAM, 8x300G SAS hard disk, Oracle Enterprise Linux(OEL) 5.4 x86-64, Oracle 11g 64bit
Objective: Setting up Oracle server with Oracle recommended RAID and using ramfs and hugepage to improve performance.


Part I:  RAID, Partition and Installation of OEL and Oracle


Steps:

1. Oracle suggests RAID 1 for control files, redo logs, archive logs and the temp tablespace, and RAID 10 for database files, which means you can put the /u01 partition on RAID 1+0 (search Metalink for Doc ID 30286.1 - I/O Tuning with Different RAID Configurations).
2. For the installation of OEL and Oracle, refer to the tutorial section on otn.oracle.com.
Note: for the installation of OEL, run 'mkdir -p /u01/app/oracle /u01/app/oraInventory/' after following the tutorial and before installing Oracle 11g.

note: on 64-bit OEL, when setting shmmax and shmall, don't follow the tutorial values if the existing defaults are already larger than the values you would be setting.
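
A quick way to compare the current kernel values against the tutorial values before editing anything; a minimal sketch:

    # show the running values of the two parameters mentioned above
    sysctl kernel.shmmax kernel.shmall
    # show what, if anything, /etc/sysctl.conf already sets them to
    grep -E 'kernel\.shm(max|all)' /etc/sysctl.conf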

3. You might need to install the PAE kernel on 32-bit OEL so the OS can recognize more than 4G of memory.

Part II: Optimize the system from OS point of view
Steps:

1. kernel boot parameter to optimize I/O
Edit the /etc/grub.conf file and add the following parameter to the kernel line that's being used:
title Red Hat Enterprise Linux Server (2.6.18-8.el5)
        root (hd0,0) 
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/sda2 elevator=deadline
        initrd /initrd-2.6.18-8.el5.img
2. add the following mount options to the /u01 entry in /etc/fstab:
defaults,noatime
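
For illustration, the /u01 entry in /etc/fstab might then look like this (the device name and filesystem type are assumptions; keep your existing values and only append noatime):

    # /dev/sda5 is only an example device; ext3 is the default filesystem on OEL 5.4
    /dev/sda5    /u01    ext3    defaults,noatime    1 2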
3. disable all unnecessary services with 'ntsysv --level 2345'; leave only services such as crond, sysstat, sendmail, syslog, sshd, irqbalance, portmap, nfslock and network.
4. install HP Support Pack for hardware monitoring.

Part III: Optimize Oracle with ramfs and hugepage

Steps:

1.  using ramfs instead of the default tmpfs that comes with OEL
  • Edit the /etc/rc.local file and add the following entries so that ramfs is mounted over /dev/shm whenever the computer starts:





    umount /dev/shm
    mount -t ramfs ramfs /dev/shm
    chown oracle:oinstall /dev/shm
    In the preceding commands, oracle is the owner of the Oracle software files and oinstall is the group of the Oracle owner account. Only Oracle is installed on this server, so it is the only user of ramfs.

    ramfs is not pageable, while tmpfs is pageable/swappable, so with ramfs there is no page-replacement overhead. HugePages are likewise universally regarded as pinned memory.







    Test                                      tmpfs (previously shmfs)    ramfs
    Fill maximum space and continue writing   displays an error           continues writing
    Fixed size                                Yes                         No
    Uses swap                                 Yes                         No
    Volatile storage                          Yes                         Yes


  • Run the following command to check if the /dev/shm directory is mounted with the ramfs type:





    # mount | grep shm
    ramfs on /dev/shm type ramfs (rw)
  • Run the following command to check the permissions on the /dev/shm directory:
# ls -ld /dev/shm
drwxr-xr-x  3 oracle oinstall 0 Jan 13 12:12 /dev/shm
 
note: about tmpfs (previously called shmfs) - from http://en.wikipedia.org/wiki/Tmpfs
tmpfs is supported by the Linux kernel from version 2.4 and up. tmpfs (previously known as shmfs) distinguishes itself from the Linux ramdisk device by allocating memory dynamically and by allowing less-used pages to be moved onto swap space. ramfs, in contrast, does not make use of swap (which can be an advantage or disadvantage). In addition, MFS and some older versions of ramfs did not grow and shrink dynamically and instead used a fixed amount of memory at all times.

Usage of tmpfs for example is "mount -t tmpfs -o size=1G,nr_inodes=10k,mode=0700 tmpfs /space" which will allow up to 1 GiB in RAM/swap with 10240 inodes and only accessible by the owner of /space. Owner can be overridden with the uid/gid mount options, and will otherwise default to root. The filesystem's maximum size can also be changed on-the-fly, like "mount -o remount,size=2G /space".
/var/run and /var/lock can be tmpfs filesystems, to alleviate having to clean them up at each reboot.
 
 
2.  configuring memlock  
  • Increase max locked memory limit. Edit /etc/security/limits.conf and add





    oracle   soft   memlock 12582912 
    oracle   hard   memlock 12582912
    Log on as the oracle user, then check the max locked memory limit:
    $ ulimit -l
    
    12582912
The memlock parameter specifies how much memory the oracle user can lock into its address space. The memlock setting is specified in KB and must match the memory size of the number of HugePages that Oracle should be able to allocate. If memlock is too small, then not a single HugePage will be allocated when the Oracle database starts.

You can specify the maximum lockable memory for the oracle user; the value above, 12582912 KB (12 * 1024 * 1024), is 12G, which is 3/4 of my 16G of total physical memory.

This value can also be derived from the number of hugepages: e.g. if ~/bin/hugepages_settings.sh (below) recommends 1000 hugepages, that is 1000 * 2M = 2000M = 2048000 KB, so you would put 2048000 in the memlock setting. If you later need more hugepages, recalculate the number first and then increase memlock as well.
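
A quick sanity check of the arithmetic above (all values in KB):

    echo $((12 * 1024 * 1024))    # 12G expressed in KB -> 12582912, the memlock value used above
    echo $((1000 * 2 * 1024))     # 1000 hugepages of 2M each, expressed in KB -> 2048000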

Each hugepage uses 2M of contiguous memory.

3.  Configuring hugepage
According to Doc ID: 361468.1-HugePages on 64-bit Linux

HugePages is a feature of the Linux kernel which allows larger pages to be used to manage memory, as an alternative to the small 4K page size. There is a general misconception that HugePages is a feature specific to 32-bit Linux. The major benefit of a 64-bit architecture and an O/S with 64-bit support is the ability to address Very Large Memory (VLM) natively. With 32-bit architectures, generally speaking, VLM is accessed through a VLM window (see Note 200266.1), which is a data structure in the process address space (virtually max 4GB) that provides access to the whole virtual address space from a "window" of a specific size. On 32-bit Linux, we need to set USE_INDIRECT_DATA_BUFFERS=TRUE and mount a shmfs/tmpfs/ramfs type of in-memory filesystem over /dev/shm.

 64-bit Linux does not need these mechanisms and can use the physical memory directly as the address space is virtually 16 EB (exabyte = 2^60 bytes) at maximum.

To check whether HugePages are supported/available on a running configuration, run:

    $ grep Huge /proc/meminfo 
and check the output:

    HugePages_Total:    
    HugePages_Free: 
    Hugepagesize: 
Regardless of the values printed, if you can see the lines above, the system supports HugePages.

AMM and HugePages are not compatible. You need to disable AMM on 11g to be able to use HugePages.

HugePages is a method of using larger pages, which is useful when working with very large memory. It is useful in both 32-bit and 64-bit configurations.

HugePages can be used without indirect buffers on 64-bit systems (normal configuration will do).

Steps:
  • start up the instance by running:
sqlplus / as sysdba
sql> startup nomount pfile='initorcl.ora'
sql> show parameter shared_pool_size
sql> show sga
sql> show parameter filesystemio_options
sql> show parameter disk_asynch_io
  • estimate the number of hugepages and finalize it
put the following script in $ORACLE_HOME/bin:


#!/bin/bash
#
# hugepages_settings.sh
#
# Linux bash script to compute values for the
# recommended HugePages/HugeTLB configuration
#
# Note: This script does calculation for all shared memory
# segments available when the script is run, no matter it
# is an Oracle RDBMS shared memory segment or not.

# Check for the kernel version
KERN=`uname -r | awk -F. '{ printf("%d.%d\n",$1,$2); }'`

# Find out the HugePage size
HPG_SZ=`grep Hugepagesize /proc/meminfo | awk '{print $2}'`

# Start from 1 pages to be on the safe side and guarantee 1 free HugePage
NUM_PG=1

# Cumulative number of pages required to handle the running shared memory segments
for SEG_BYTES in `ipcs -m | awk '{print $5}' | grep "[0-9][0-9]*"`
do
   MIN_PG=`echo "$SEG_BYTES/($HPG_SZ*1024)" | bc -q`
   if [ $MIN_PG -gt 0 ]; then
      NUM_PG=`echo "$NUM_PG+$MIN_PG+1" | bc -q`
   fi
done

# Finish with results
case $KERN in
   '2.4') HUGETLB_POOL=`echo "$NUM_PG*$HPG_SZ/1024" | bc -q`;
          echo "Recommended setting: vm.hugetlb_pool = $HUGETLB_POOL" ;;
   '2.6') echo "Recommended setting: vm.nr_hugepages = $NUM_PG" ;;
    *) echo "Unrecognized kernel version $KERN. Exiting." ;;
esac

# End

----------
  • Run the following command to change the permission of the file:





    $ chmod +x hugepages_settings.sh
    

Run the hugepages_settings.sh script to compute the values for hugepages configuration:
$ ./hugepages_settings.sh
  1. Set the following kernel parameter:




    # sysctl -w vm.nr_hugepages=value_displayed_above
    
  2. To make the value of the parameter available for every time you restart the computer, edit the /etc/sysctl.conf file and add the following entry:




    vm.nr_hugepages=value_displayed_above
    
  3. Run the following command to check the available hugepages:




    $ grep Huge /proc/meminfo
    
  4. Restart the instance.
  5. Run the following command to check the available hugepages (1 or 2 pages free):




    $ grep Huge /proc/meminfo
note:
if the total hugepage number in /proc/meminfo is lower than the number you specified in /etc/sysctl.conf, either there is not enough memory or the system cannot find enough contiguous 2M pages (in that case hugepages cannot be used for the shared memory segment, even though they occupy some, but not enough, memory), so you might need to reboot the server.
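
A quick check that the kernel actually reserved what /etc/sysctl.conf asks for; a minimal sketch:

    # the two values below should match once enough contiguous 2M pages are available
    grep HugePages_Total /proc/meminfo
    sysctl vm.nr_hugepages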

the purpose of the steps above is to start the instance only (startup nomount), estimate how many hugepages should be specified in /etc/sysctl.conf, shut the instance down, make sure the oracle user can see the newly specified hugepages in /proc/meminfo, and then start the instance and database again.

you can verify the memory allocations with 'ipcs -m' (the SGA shared memory segments) and 'ls -l /dev/shm/' (the in-memory filesystem that holds the buffer cache when indirect data buffers are used).

Note:
1. hugepage and AMM
    The 11g AMM feature is enabled by the MEMORY_TARGET / MEMORY_MAX_TARGET instance initialization parameters (see Note 465048.1 for further information). That is also the case with a default database instance created using Database Configuration Assistant (DBCA).

    With AMM all SGA memory is allocated by creating files under /dev/shm. When Oracle DB does SGA allocations that way HugePages are not reserved/used. The use of AMM is absolutely incompatible with HugePages.

    Please also note that ramfs (instead of a tmpfs mount over /dev/shm) is not supported for AMM at all. With AMM the Oracle database needs to grow and shrink the SGA dynamically. This is not possible with ramfs, whereas it is possible and supported with tmpfs (which is the default for the OS installation).

    If you want to use HugePages make sure that both MEMORY_TARGET / MEMORY_MAX_TARGET initialization parameters are disabled for the database instance.
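
    A quick way to confirm the init file does not enable AMM before relying on HugePages; a minimal sketch, assuming the pfile is $ORACLE_HOME/dbs/initorcl.ora:

    # no output from grep means MEMORY_TARGET / MEMORY_MAX_TARGET are not set, so HugePages can be used
    grep -iE 'memory_(max_)?target' $ORACLE_HOME/dbs/initorcl.ora || echo "AMM parameters not set"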

    2.  hugepage and shared memory segments
    The HugePages pool is not used by the ramfs in-memory filesystem; it is only used by the shared memory segments, which are what 'ipcs -m' shows, not 'ls -l /dev/shm/'.

     Part IV: 32-bit Oracle - using VLM (no special settings are needed on 64-bit, which supports VLM natively)
    On 32-bit Oracle with a VLM configuration, note that hugepages will only apply to a small part of your SGA (shared, java and large pools), since your data blocks are not handled by hugepages but by indirect data buffers.

    You should use a 64-bit operating system and Oracle. Using indirect block buffers has a performance overhead; with a 64-bit operating system these problems disappear and you can use hugepages for the whole SGA, including the buffer cache.
     
    for 32bit OS and Oracle, in order to configure VLM, do this:

    1. add the following to initorcl.ora (an 8G buffer cache = 1048576 buffers * 8192-byte blocks, and a 2700M shared pool)
    use_indirect_data_buffers=true
    db_block_buffers=1048576
    db_block_size=8192
    shared_pool_size=2831155200
    filesystemio_options='setall'
    
    note:
    change all DB_CACHE_SIZE settings to the DB_BLOCK_BUFFERS parameter (and remove the DB_xK_CACHE_SIZE parameters).
    *.shared_pool_size has higher priority than orcl.__shared_pool_size
    shared_io_poolsize=0 is a reserved setting; don't change it
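
    A quick sanity check of the buffer cache size implied by these settings:

    echo $((1048576 * 8192))                        # db_block_buffers * db_block_size in bytes -> 8589934592
    echo $((1048576 * 8192 / 1024 / 1024 / 1024))   # the same value in GiB -> 8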
     
    Part V: using async io and direct io (for both 32bit and 64bit Oracle) 
    
    • by default, RHEL5 and Oracle 11g




      disk_asynch_io=true (to confirm, use show parameter disk_asynch_io)

    • If you use filesystems




      filesystemio_options=setall  (to confirm, after starting the database, run show parameter filesystemio_options)

    • direct I/O
    Direct I/O is a feature of the file system whereby file reads and writes go directly from the applications to the storage device, bypassing the operating system read and write caches. Direct I/O is used by only a few applications that manage their own caches, such as databases.

    In the following example, $ORACLE_HOME/bin/oracle was relinked with async I/O:
    $ ldd $ORACLE_HOME/bin/oracle | grep libaio
    libaio.so.1 => /usr/lib/libaio.so.1 (0x0093d000)
    $ nm $ORACLE_HOME/bin/oracle | grep io_getevent
    w io_getevents@@LIBAIO_0.1
    $
    In the following example, $ORACLE_HOME/bin/oracle has NOT been relinked with async I/O:
    $ ldd $ORACLE_HOME/bin/oracle | grep libaio
    $ nm $ORACLE_HOME/bin/oracle | grep io_getevent
    w io_getevents
    $
    Use the first number (515 in this example) to confirm the system is using async I/O; if async I/O is not in use, that number will be zero:

    [oracle@jephe scripts]$ cat /proc/slabinfo | grep kioctx
    kioctx 515 540 256 15 1 : tunables 120 60 8 : slabdata 36 36 0

    So, on 32-bit Oracle with the VLM feature, we should use direct I/O and async filesystem I/O for Oracle, hugepages for the shared pool and ramfs for the buffer cache; both hugepages and ramfs lock memory to prevent swapping.

    Part VI: Appendix:

    1. http://www.puschitz.com/TuningLinuxForOracle.shtml
     2. my 64bit Oracle initorcl.ora file:
    -------------
    orcl.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
    *.audit_file_dest='/u01/app/oracle/admin/orcl/adump'
    *.audit_trail='db'
    *.compatible='11.1.0.0.0'
    *.control_files='/u01/app/oracle/oradata/orcl/control01.ctl','/u01/app/oracle/oradata/orcl/control02.ctl','/u01/app/oracle/oradata/orcl/control03.ctl'
    *.control_management_pack_access='DIAGNOSTIC+TUNING'
    *.cursor_sharing='similar'
    *.db_block_size=8192
    *.db_cache_size=6g
    *.db_domain=''
    *.db_name='orcl'
    *.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area'
    *.db_recovery_file_dest_size=53687091200
    *.diagnostic_dest='/u01/app/oracle'
    *.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
    *.filesystemio_options='setall'
    *.java_pool_size=256m
    *.large_pool_size=256m
    *.log_archive_dest_1='location=/data/arch'
    *.log_archive_dest_2=''
    *.open_cursors=20000
    *.optimizer_mode='choose'
    *.pga_aggregate_target=1g
    *.processes=1000
    *.remote_login_passwordfile='EXCLUSIVE'
    *.session_cached_cursors=100
    *.shared_pool_size=2g
    *.streams_pool_size=64m
    *.sort_area_size=20971520
    *.sort_area_retained_size=2097152
    *.undo_tablespace='UNDOTBS1'
    db_writer_processes=2
    *.hash_area_size=20971520
    *.log_buffer=20971520
    *.java_jit_enabled=FALSE
    ------------------
    Summary:
    1.  using ramfs instead of tmpfs to prevent swapping whenever it's possible
    2.  using 64bit OS and Oracle whenever it's possible
    3.  On a 64-bit OS and Oracle, hugepages can be used for the whole SGA, while on a 32-bit OS and Oracle, hugepages can be used for the whole SGA except the data block buffers, which use indirect data buffers. On 32-bit Linux, we need to set USE_INDIRECT_DATA_BUFFERS=TRUE and mount a shmfs/tmpfs/ramfs type of in-memory filesystem over /dev/shm.
    4. *.java_jit_enabled=FALSE, so that 64bit Oracle 11g can use exp to export data.

    5. To speed up Oracle processing, you might consider collecting statistics for the schema, for example:
    exec dbms_stats.gather_schema_stats('JEPHE', cascade => true);
    exec dbms_stats.delete_table_stats('SYS','CDEF$');
    exec dbms_stats.delete_table_stats('SYS','CON$');
    exec dbms_stats.delete_table_stats('SYS','USER$');
    exec dbms_stats.gather_table_stats('SYS','CDEF$');
    exec dbms_stats.gather_table_stats('SYS','CON$');
    exec dbms_stats.gather_table_stats('SYS','USER$');


    How to let users change their OpenLDAP password themselves through the Linux CLI

    Environment: OpenLDAP on Linux (CentOS, Fedora, Red Hat or OEL), with the userPassword attribute already configured.
    Objective:  let users change their LDAP userPassword attribute themselves.


    Steps:  
    1.  configure access control part in slapd.conf

    access to attr=userPassword
            by self write
            by anonymous auth
            by dn="cn=Manager,dc=dev,dc=domain,dc=com" write
            by * none
    
    # note: the 'by self write' and 'by anonymous auth' rules above are
    # very important; otherwise users cannot change their password themselves.
     
    access to attr=proxyAccess
            by self read
            by dn="cn=Manager,dc=dev,dc=domain,dc=com" write
            by * none
    
    access to *
            by dn="cn=Manager,dc=dev,dc=domain,dc=com" write
            by users read
     
    2.  use ldapmodify to change it.
    
    • Method 1:  use ldapmodify with Manager DN
    ldapmodify -x -H ldap://10.0.0.1  -D 'cn=Manager,dc=dev,dc=domain,dc=com' -W -f jephe.ldapmodify

    [root@mars openldap]# more jephe.ldapmodify
    dn: uid=jephe,ou=people,dc=dev,dc=domain,dc=com
    changetype:modify
    replace: userpassword
    userpassword: {MD5}risfylFZSeXVT7IrjtlVdQ==

    You can use the command 'slappasswd -h {MD5}' to generate the userpassword value above:
    New password: testing
    Re-enter new password: testing
    {MD5}risfylFZSeXVT7IrjtlVdQ==


    • Method 2 :  use ldapmodify with user own DN
    ldapmodify -x -H ldap://10.0.0.1 -D 'uid=jephe,ou=people,dc=dev,dc=domain,dc=com' -W -f jephe.ldapmodify
    Enter LDAP Password:
    modifying entry "uid=jephe,ou=people,dc=dev,dc=domain,dc=com"

    3. use ldappasswd to change it
    • Method 3:  use ldappasswd with Manager DN
    ldappasswd -x  -D cn=Manager,dc=dev,dc=domain,dc=com -w password  -s password uid=jephe,ou=People,dc=dev,dc=domain,dc=com
    Result: Success (0)

    • Method 4: use ldappasswd with user own DN
    ldappasswd -x  -D uid=jephe,ou=People,dc=dev,dc=domain,dc=com -w abcd1234  -s 12345 uid=jephe,ou=People,dc=dev,dc=domain,dc=com
    Result: Success (0)

    4. use ldapmodify to modify other attributes:

    ldapmodify -c -x -D "cn=admin......" -W -f filename.ldif

    # more filename.ldif
    dn: cn=.....
    changetype: modify
    replace: loginShell
    loginshell: /bin/bash

    5. back up and restore the LDAP database


    slapcat -b "dc=jephe,dc=com" -l ldif_file
    /usr/local/openldap/sbin/slapadd -b "dc=jephe,dc=com" -l ldif_file
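
    A minimal backup/restore sketch; slapadd is normally run with slapd stopped, and the service name and data directory below are assumptions for a Red Hat-style installation:

    service ldap stop                                         # stop slapd before running slapadd
    slapcat -b "dc=jephe,dc=com" -l /backup/jephe.ldif        # dump the database to an LDIF file
    /usr/local/openldap/sbin/slapadd -b "dc=jephe,dc=com" -l /backup/jephe.ldif
    chown -R ldap:ldap /var/lib/ldap                          # fix ownership of the rebuilt database files
    service ldap start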


    How to use impdp/expdp to transfer schema from one database to another

    Jephe Wu - http://linuxtechres.blogspot.com

    Environment: Oracle 11g 32bit
    Objective: copy the schemas base and u_user1 from the UAT Oracle DB uatdb to the schemas base and d_user1 in devdb
    Note: when going from uatdb to devdb, base is a global schema, so keep its name as base, but rename the u_user1 schema to d_user1



    Steps:
    1. stop listener - lsnrctl stop
    2. drop schema base and u_user1 in uatdb
    . oraenv
    uatdb

    sqlplus / as sysdba
    drop tablespace base including contents and datafiles;
    drop tablespace u_user1 including contents and datafiles;
    drop user base cascade;
    drop user u_user1 cascade;

    If you cannot drop the user, for example because it says the user is connected to the database, you can use the following commands to kill all sessions belonging to that user.

    sqlplus / as sysdba @killusersession.sql

    the content of killusersession.sql is:
    -----
    set head off feedback off pagesize 0 echo off term off linesize 32767 trimsp on tab off define off;
    spool /u01/scripts/killusersessionfinal.sql;
    alter user u_user1 account lock;
    select 'alter system kill session '''||sid||','||serial#||''' immediate;' from v$session where username='U_USER1';

    select 'exit;' from dual;
    spool off;
    exit;
    ---------

    sqlplus / as sysdba @killusersessionfinal.sql

    3. create tablespace
    create tablespace u_user1 datafile '/u01/app/oracle/oradata/uatdb/base.dbf' size 100m autoextend on next 10m flashback off;

    note: use 'select name from v$datafile' to find the existing datafile path

    4. export and import the schemas
    for the base schema, the source and destination schema names are the same, so use:

    . oraenv
    uatdb
    expdp system dumpfile=base.dmpdp logfile=base.logdp schemas=base exclude=statistics
    note: this will use the default directory which is /u01/app/oracle/admin/uatdb/dpdump

    . oraenv
    devdb
    impdp system dumpfile=base.dmpdp logfile=base.logdp schemas=base
    note: this will use the default directory which is /u01/app/oracle/admin/devdb/dpdump

    for u_user1 schema, we use the following impdp command:
    impdp system dumpfile=u_user1.dmpdp logfile=u_user1.logdp remap_schema=u_user1:d_user1 remap_tablespace=u_user1:d_user1 transform=oid:n

    5. preparing the existing public synonyms

    # preparing existing public synonyms and permission granting script
    cat > /u01/cronjob/existing_public_synonym_generation.sql << EOF
    ...
    EOF

    6. compile the imported BASE schema
    exec dbms_utility.compile_schema('BASE');

    7. check invalid objects
    column object_type format A10;
    column object_name format A30;
    select object_type "Invalid BASE",object_name from dba_objects where owner='BASE' and status <> 'VALID' order by object_type,object_name;
    select object_type "Invalid D_USER1",object_name from dba_objects where owner='D_USER1' and status <> 'VALID' and object_name not like 'FB_%' order by object_type,object_name;

    select count(*) BASE, object_type from dba_objects where owner='BASE' and status <> 'VALID' group by object_type;
    select count(*) D_USER1, object_type from dba_objects where owner='D_USER1' and status <> 'VALID' and object_name not like 'FB_%' group by object_type;

    8. start listener
    after starting the listener, run 'alter system register' so the database registers with the listener immediately.
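
    For example (a minimal sketch):

    lsnrctl start
    echo 'alter system register;' | sqlplus -s / as sysdba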

    9. consistent=y

    expdp '/ as sysdba' dumpfile=jephe.dmp logfile=jephe.log flashback_time=systimestamp

    Note:
    a. 11.2 introduced legacy mode, so the old exp and imp parameters, including 'consistent=y', can now be used.
    b. by default, expdp is consistent only for the table it is currently exporting.
    c. flashback_time=systimestamp is the same as 'consistent=y'.