Thursday, July 27, 2017

UNIX: Extend FS with new disks on HPUX

The request was to add 250 GB of space to an existing FS, so we need a Change Request raised and the SAN team requested to add disks to the server; the rest is up to you. I trust you have an Ignite backup and health checks done prior to adding the new disks to the VG and extending the FS

Personally I am inclined towards AIX because of how ancient the command-line output can get in HP-UX, especially file system size output. AIX gives you the flexibility to view FS sizes in gigabytes, but the older versions of HP-UX I worked with showed everything in kilobytes, so I always kept a calculator handy. Yes, you still need to do a minimal amount of math to divide the KBs into GBs. OK, enough ranting

The said file system is about 49 GB in size and needs an additional 250 GB

spitfire:/root# bdf /p9500
Filesystem          kbytes    used   avail %used Mounted on
/dev/vgp9500/lvol1 52297728 40079375 11454767   78% /p9500
spitfire:/root# 
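Since bdf reports kbytes, a small helper does the division for you (a sketch; kb_to_gb is a name I made up, not an HP-UX command):

```shell
# Convert a bdf kbytes figure to whole gigabytes: kb / 1024 / 1024.
kb_to_gb() {
    echo $(( $1 / 1024 / 1024 ))
}

kb_to_gb 52297728    # the /p9500 figure above: prints 49
```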

And its VG and PV display output

--- Volume groups ---
VG Name                     /dev/vgp9500
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      1      
Open LV                     1      
Max PV                      48     
Cur PV                      4      
Act PV                      4      
Max PE per PV               16000        
VGDA                        8   
PE Size (Mbytes)            32              
Total PE                    1596    
Alloc PE                    1596    
Free PE                     0       
Total PVG                   1        
Total Spare PVs             0              
Total Spare PVs in use      0                     

   --- Logical volumes ---
   LV Name                     /dev/vgp9500/lvol1
   LV Status                   available/syncd           
   LV Size (Mbytes)            51072           
   Current LE                  1596      
   Allocated PE                1596        
   Used PV                     4       


   --- Physical volumes ---
   PV Name                     /dev/dsk/c58t4d4
   PV Name                     /dev/dsk/c59t4d4 Alternate Link
   PV Status                   available                
   Total PE                    399     
   Free PE                     0       
   Autoswitch                  On        

   PV Name                     /dev/dsk/c58t4d5
   PV Name                     /dev/dsk/c59t4d5 Alternate Link
   PV Status                   available                
   Total PE                    399     
   Free PE                     0       
   Autoswitch                  On        

   PV Name                     /dev/dsk/c58t4d6
   PV Name                     /dev/dsk/c59t4d6 Alternate Link
   PV Status                   available                
   Total PE                    399     
   Free PE                     0       
   Autoswitch                  On        

   PV Name                     /dev/dsk/c58t4d7
   PV Name                     /dev/dsk/c59t4d7 Alternate Link
   PV Status                   available                
   Total PE                    399     
   Free PE                     0       
   Autoswitch                  On        


   --- Physical volume groups ---
   PVG Name                    PVG0                       
   PV Name                     /dev/dsk/c58t4d4           
   PV Name                     /dev/dsk/c59t4d4           
   PV Name                     /dev/dsk/c58t4d5           
   PV Name                     /dev/dsk/c59t4d5           
   PV Name                     /dev/dsk/c58t4d6           
   PV Name                     /dev/dsk/c59t4d6           
   PV Name                     /dev/dsk/c58t4d7           
   PV Name                     /dev/dsk/c59t4d7  


--- Logical volumes ---
LV Name                     /dev/vgp9500/lvol1
VG Name                     /dev/vgp9500
LV Permission               read/write   
LV Status                   available/syncd           
Mirror copies               0            
Consistency Recovery        MWC                 
Schedule                    parallel     
LV Size (Mbytes)            51072           
Current LE                  1596      
Allocated PE                1596        
Stripes                     0       
Stripe Size (Kbytes)        0                   
Bad block                   NONE         
Allocation                  PVG-strict/distributed
IO Timeout (Seconds)        default             

   --- Distribution of logical volume ---
   PV Name            LE on PV  PE on PV  
   /dev/dsk/c58t4d4   399       399       
   /dev/dsk/c58t4d5   399       399       
   /dev/dsk/c58t4d6   399       399       

   /dev/dsk/c58t4d7   399       399 

1. First check the syslog or dmesg to detect the new disks. I have already done that; what's left is to scan those new disks by running the ioscan command on the server

spitfire:/root# ioscan -fknH /1/0/12/1/0/4/1.112.32.0.2.2.3
Class     I  H/W Path        Driver   S/W State   H/W Type     Description
===========================================================================
disk    781  1/0/12/1/0/4/1.112.32.0.2.2.3  sdisk    CLAIMED     DEVICE       HP      OPEN-V
                            /dev/dsk/c81t2d3   /dev/rdsk/c81t2d3


spitfire:/root# ioscan -fknH 1/0/12/1/0/4/1.112.32.0.2.2.2
Class     I  H/W Path        Driver   S/W State   H/W Type     Description
===========================================================================
disk    780  1/0/12/1/0/4/1.112.32.0.2.2.2  sdisk    CLAIMED     DEVICE       HP      OPEN-V
                            /dev/dsk/c81t2d2   /dev/rdsk/c81t2d2

spitfire:/root# ioscan -fknH /1/0/12/1/0/4/1.112.32.0.2.2.1
Class     I  H/W Path        Driver   S/W State   H/W Type     Description
===========================================================================
disk    779  1/0/12/1/0/4/1.112.32.0.2.2.1  sdisk    CLAIMED     DEVICE       HP      OPEN-V
                            /dev/dsk/c81t2d1   /dev/rdsk/c81t2d1


spitfire:/root# ioscan -fnkH /0/0/12/1/0/4/1.111.32.0.2.2.3
Class     I  H/W Path        Driver   S/W State   H/W Type     Description
===========================================================================
disk    778  0/0/12/1/0/4/1.111.32.0.2.2.3  sdisk    CLAIMED     DEVICE       HP      OPEN-V
                            /dev/dsk/c80t2d3   /dev/rdsk/c80t2d3

spitfire:/root# ioscan -fnkH 0/0/12/1/0/4/1.111.32.0.2.2.0
Class     I  H/W Path        Driver   S/W State   H/W Type     Description
===========================================================================
disk    648  0/0/12/1/0/4/1.111.32.0.2.2.0  sdisk    CLAIMED     DEVICE       HP      OPEN-V
                            /dev/dsk/c80t2d0   /dev/rdsk/c80t2d0

2. Check whether the disks are in use or available

Ensure the disks do not belong to any VG:

spitfire:/root# pvdisplay /dev/dsk/c80t2d0
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/dsk/c80t2d0" belongs.
pvdisplay: Cannot display physical volume "/dev/dsk/c80t2d0".
spitfire:/root# pvdisplay /dev/dsk/c81t2d3
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/dsk/c81t2d3" belongs.
pvdisplay: Cannot display physical volume "/dev/dsk/c81t2d3".
spitfire:/root# pvdisplay /dev/dsk/c81t2d2
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/dsk/c81t2d2" belongs.
pvdisplay: Cannot display physical volume "/dev/dsk/c81t2d2".
spitfire:/root# pvdisplay /dev/dsk/c81t2d1
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/dsk/c81t2d1" belongs.
pvdisplay: Cannot display physical volume "/dev/dsk/c81t2d1".
spitfire:/root# pvdisplay /dev/dsk/c80t2d3
pvdisplay: Couldn't find the volume group to which
physical volume "/dev/dsk/c80t2d3" belongs.

pvdisplay: Cannot display physical volume "/dev/dsk/c80t2d3".

3. They don't, so we now make them available for VG vgp9500

spitfire:/root# pvcreate /dev/rdsk/c81t2d3
Physical volume "/dev/rdsk/c81t2d3" has been successfully created.
spitfire:/root# pvcreate /dev/rdsk/c81t2d2
Physical volume "/dev/rdsk/c81t2d2" has been successfully created.
spitfire:/root# pvcreate /dev/rdsk/c81t2d1
Physical volume "/dev/rdsk/c81t2d1" has been successfully created.
spitfire:/root# pvcreate /dev/rdsk/c80t2d3
Physical volume "/dev/rdsk/c80t2d3" has been successfully created.
spitfire:/root# pvcreate /dev/rdsk/c80t2d0
Physical volume "/dev/rdsk/c80t2d0" has been successfully created.

spitfire:/root# 

4. The next thing to do is to add the PV to VG vgp9500

spitfire:/root# vgextend -g PVG0 vgp9500 /dev/dsk/c81t2d3 /dev/dsk/c81t2d2 /dev/dsk/c81t2d1 /dev/dsk/c80t2d3
Current path "/dev/dsk/c59t4d4" is an alternate link, skip.
Current path "/dev/dsk/c59t4d5" is an alternate link, skip.
Current path "/dev/dsk/c59t4d6" is an alternate link, skip.
Current path "/dev/dsk/c59t4d7" is an alternate link, skip.
Volume group "vgp9500" has been successfully extended.
Physical volume group "PVG0" has been successfully extended.
Volume Group configuration for /dev/vgp9500 has been saved in /etc/lvmconf/vgp9500.conf
spitfire:/root# 

spitfire:/root# vgextend -g PVG0 vgp9500 /dev/dsk/c80t2d0
Current path "/dev/dsk/c59t4d4" is an alternate link, skip.
Current path "/dev/dsk/c59t4d5" is an alternate link, skip.
Current path "/dev/dsk/c59t4d6" is an alternate link, skip.
Current path "/dev/dsk/c59t4d7" is an alternate link, skip.
Current path "/dev/dsk/c80t2d3" is an alternate link, skip.
Volume group "vgp9500" has been successfully extended.
Physical volume group "PVG0" has been successfully extended.
Volume Group configuration for /dev/vgp9500 has been saved in /etc/lvmconf/vgp9500.conf
spitfire:/root# 

Because the disks have alternate links, I sped things up by typing all the remaining disk names on a single command line, hence the truncated command capture below

spitfire:/root# d0 /dev/dsk/c80t2d1 /dev/dsk/c81t2d1 /dev/dsk/c80t2d2 /dev/dsk/c81t2d2 /dev/dsk/c80t2d3 /dev/dsk/c81t2d3       <
Current path "/dev/dsk/c59t4d4" is an alternate link, skip.
Current path "/dev/dsk/c59t4d5" is an alternate link, skip.
Current path "/dev/dsk/c59t4d6" is an alternate link, skip.
Current path "/dev/dsk/c59t4d7" is an alternate link, skip.
Current path "/dev/dsk/c80t2d3" is an alternate link, skip.
vgextend: The physical volume "/dev/dsk/c80t2d0" is already recorded in the "/etc/lvmtab" file.
vgextend: The physical volume "/dev/dsk/c81t2d1" is already recorded in the "/etc/lvmtab" file.
vgextend: The physical volume "/dev/dsk/c81t2d2" is already recorded in the "/etc/lvmtab" file.
vgextend: The physical volume "/dev/dsk/c80t2d3" is already recorded in the "/etc/lvmtab" file.
vgextend: The physical volume "/dev/dsk/c81t2d3" is already recorded in the "/etc/lvmtab" file.
vgextend: The physical volume "/dev/dsk/c80t2d0" is already recorded in the "/etc/lvmpvg" file for "PVG0" of "/dev/vgp9500".
vgextend: The physical volume "/dev/dsk/c81t2d1" is already recorded in the "/etc/lvmpvg" file for "PVG0" of "/dev/vgp9500".
vgextend: The physical volume "/dev/dsk/c81t2d2" is already recorded in the "/etc/lvmpvg" file for "PVG0" of "/dev/vgp9500".
vgextend: The physical volume "/dev/dsk/c80t2d3" is already recorded in the "/etc/lvmpvg" file for "PVG0" of "/dev/vgp9500".
vgextend: The physical volume "/dev/dsk/c81t2d3" is already recorded in the "/etc/lvmpvg" file for "PVG0" of "/dev/vgp9500".
Physical volume group "PVG0" has been successfully extended.

Volume Group configuration for /dev/vgp9500 has been saved in /etc/lvmconf/vgp9500.conf

5. Take note of the new VG reading

spitfire:/root# vgdisplay vgp9500
--- Volume groups ---
VG Name                     /dev/vgp9500
VG Write Access             read/write     
VG Status                   available                 
Max LV                      255    
Cur LV                      1      
Open LV                     1      
Max PV                      48     
Cur PV                      8      
Act PV                      8      
Max PE per PV               16000        
VGDA                        16  
PE Size (Mbytes)            32              
Total PE                    9592    
Alloc PE                    1596    
Free PE                     7996    
Total PVG                   1        
Total Spare PVs             0              
Total Spare PVs in use      0                     

This is where your math skills come in handy.

The new free space is:

7996 free PE x 32 MB = 255872 MB, and 255872 / 1024 gives you about 249 GB

Our request was to extend the LV by 250 GB, so the new total LV size is:

255872 + 51072 = 306944 MB, and 306944 / 1024 gives you about 299 GB
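The same arithmetic in shell form, with the figures copied from the vgdisplay output above:

```shell
# 7996 free PE at 32 MB each, plus the current 51072 MB LV size.
FREE_PE=7996
PE_SIZE_MB=32
CUR_LV_MB=51072

FREE_MB=$(( FREE_PE * PE_SIZE_MB ))     # 255872 MB free, about 249 GB
NEW_LV_MB=$(( FREE_MB + CUR_LV_MB ))    # 306944 MB, the value for lvextend -L
echo "$FREE_MB $NEW_LV_MB"              # prints: 255872 306944
```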

6. We now extend the logical volume

spitfire:/root# lvextend -L 306944 /dev/vgp9500/lvol1
Logical volume "/dev/vgp9500/lvol1" has been successfully extended.
Volume Group configuration for /dev/vgp9500 has been saved in /etc/lvmconf/vgp9500.conf


spitfire:/root# lvdisplay /dev/vgp9500/lvol1
--- Logical volumes ---
LV Name                     /dev/vgp9500/lvol1
VG Name                     /dev/vgp9500
LV Permission               read/write   
LV Status                   available/syncd           
Mirror copies               0            
Consistency Recovery        MWC                 
Schedule                    parallel     
LV Size (Mbytes)            306944          
Current LE                  9592      
Allocated PE                9592        
Stripes                     0       
Stripe Size (Kbytes)        0                   
Bad block                   NONE         
Allocation                  PVG-strict/distributed
IO Timeout (Seconds)        default  

7. We now extend the file system

spitfire:/root# fsadm -b 306944m /p9500
fsadm: /etc/default/fs is used for determining the file system type
vxfs fsadm: /dev/vgp9500/rlvol1 is currently 52297728 sectors - size will be increased

spitfire:/root# bdf /p9500
Filesystem          kbytes    used   avail %used Mounted on
/dev/vgp9500/lvol1 314310656 40143595 257031680   14% /p9500

Done!

UNIX : How to break mirrored disks in HPUX

This is a tutorial on how to break a disk mirror when one of the disks in a RAID/array group is faulty

After the SAN team has assigned a replacement disk to the server, it is time to break the mirror and add the new PV to the VG. I trust you have taken the necessary health check and backup steps and raised a Change Request for the disk replacement

 1. First we break the mirrored disk

abyss[/root]# pvchange -a N /dev/dsk/c6t3d0
Warning: Detaching a physical volume reduces the availability of data 
within the logical volumes residing on that disk.
Prior to detaching a physical volume or the last available path to it,
verify that there are alternate copies of the data 
available on other disks in the volume group.
If necessary, use pvchange(1M) to reverse this operation.
Physical volume "/dev/dsk/c6t3d0" has been successfully changed.
abyss[/root]#

2. We check the PV status; it should now show as unavailable

abyss[/root]# pvdisplay /dev/dsk/c6t3d0
--- Physical volumes ---
PV Name                     /dev/dsk/c6t3d0
VG Name                     /dev/vg01
PV Status                   unavailable              
Allocatable                 yes          
VGDA                        2   
Cur LV                      6      
PE Size (Mbytes)            32              
Total PE                    2170    
Free PE                     0       
Allocated PE                2170        
Stale PE                    2170    
IO Timeout (Seconds)        default             
Autoswitch                  On        

Proactive Polling           On 

3. Detach the faulty disk from the disk bay and replace it (done onsite by an engineer)

4. Once the new disk has been attached, we run the ioscan command to confirm the disk name

# ioscan -fnC disk

Note that pvcreate is not needed here; the PV is still recorded in /etc/lvmtab, so it will refuse to run:

abyss[/root]# pvcreate /dev/rdsk/c6t3d0
pvcreate: The physical volume "/dev/dsk/c6t3d0" is already recorded in the "/etc/lvmtab" file.
abyss[/root]

5. We then restore the VG and make the PV available to the VG

abyss[/root]# vgcfgrestore -n vg01 /dev/rdsk/c6t3d0
Volume Group configuration has been restored to /dev/rdsk/c6t3d0

abyss[/root]# pvdisplay /dev/dsk/c6t3d0
--- Physical volumes ---
PV Name                     /dev/dsk/c6t3d0
VG Name                     /dev/vg01
PV Status                   unavailable              
Allocatable                 yes          
VGDA                        2   
Cur LV                      6      
PE Size (Mbytes)            32              
Total PE                    2170    
Free PE                     0       
Allocated PE                2170        
Stale PE                    2170    
IO Timeout (Seconds)        default             
Autoswitch                  On        
Proactive Polling           On  

abyss[/root]# pvchange -a y /dev/dsk/c6t3d0  
Physical volume "/dev/dsk/c6t3d0" has been successfully changed.

6. Sync the VG. Depending on the size of the VG, the sync may take a while; as you can see, the number of stale extents keeps decreasing

abyss[/root]# vgsync vg01

abyss[/root]# pvdisplay /dev/dsk/c6t3d0
--- Physical volumes ---
PV Name                     /dev/dsk/c6t3d0
VG Name                     /dev/vg01
PV Status                   available                
Allocatable                 yes          
VGDA                        2   
Cur LV                      6      
PE Size (Mbytes)            32              
Total PE                    2170    
Free PE                     0       
Allocated PE                2170        
Stale PE                    2166    
IO Timeout (Seconds)        default             
Autoswitch                  On        

Proactive Polling           On


abyss[/root]# pvdisplay -v /dev/dsk/c6t3d0 | grep stale | wc -l
2111
abyss[/root]# pvdisplay -v /dev/dsk/c6t3d0 | grep stale | wc -l
2106

The stale extent count will eventually reach zero, so keep running the pvdisplay command to get your final Stale PE reading
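The grep-and-count check above can be wrapped in a tiny helper that works on saved pvdisplay -v output (a sketch; count_stale is a name I made up, and pvdisplay -v prints one line per stale physical extent):

```shell
# Count the stale extents in a saved "pvdisplay -v <pv>" output file.
count_stale() {
    grep -c 'stale' "$1"
}

# usage, polling until vgsync finishes:
#   pvdisplay -v /dev/dsk/c6t3d0 > /tmp/pv.out; count_stale /tmp/pv.out
```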

The entire process can be done at any time; downtime is not needed. Note that I didn't use the VG deactivate/reactivate commands anywhere in this tutorial. Commands tested on HP-UX 11.23

UNIX : Shell script to extract printers from servers and email to users

This is a handy script, especially if you are working with printers of all kinds running on hundreds of servers that do nothing but print all day and night, and you want to troubleshoot, run maintenance, replace cartridges, or remove/add those printers

For the purpose of this tutorial, I have renamed the servers and the email recipient list

The output will be in a CSV format emailed to user, and a copy is stored in /var/local/printers

I stored this script in /usr/local/bin/ and named it printer_list.sh script

This script also does a bit of housekeeping: it removes existing CSV files older than 90 days on the server it is run on

#!/bin/sh
# Get printer config from all servers

OUT_DIR="/var/local/printers"
OUTPUT="${OUT_DIR}/all_printers_`date +%Y%m%d.%H%M%S`.csv"
HOSTNAMES="hedkandi bluekandi brocadeblue milkyway starrynite abyss " 
MAIL_LIST="brocadeblue@gmail.com hedkandi@gmail.com"
SUBJECT="PRINTERS_FROM_ALL_SERVERS_`date +%Y%m%d.%H%M%S`.csv"

echo "Server  ,Queue name,Device,Remote Device,HP-UX Interface Device,Port  ,IP add from hosts,PING,DNS IP add,Model" > $OUTPUT

for i in $HOSTNAMES
do
ssh $i "sh /usr/local/bin/printer_list.sh" >> $OUTPUT
done

#sort -t "," -k 2,2 -k1,1 $OUTPUT > ${OUTPUT}2
#mv  ${OUTPUT}2 $OUTPUT

# remove printer lists older than 90 days
find $OUT_DIR -name "all_printers*.csv" -mtime +90 -exec rm {} \;

# Mail file to selected people
uuencode $OUTPUT $OUTPUT | mail $MAIL_LIST 
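The 90-day housekeeping line can be tried out on a scratch directory first (the /tmp/printer_test path is made up for this demo):

```shell
# Demonstrate the age-based cleanup: only CSVs older than 90 days are removed.
mkdir -p /tmp/printer_test
touch -t 202001010000 /tmp/printer_test/all_printers_old.csv   # backdated file
touch /tmp/printer_test/all_printers_new.csv                   # fresh file
find /tmp/printer_test -name "all_printers*.csv" -mtime +90 -exec rm {} \;
ls /tmp/printer_test                  # only all_printers_new.csv remains
```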

Works on HP-UX 11 and later

Wednesday, July 26, 2017

UNIX : JVM update on AIX

This is a short tutorial on how to do a JVM update (or any other application update) on an AIX server

I trust that you have taken the necessary steps for a mksysb backup and raised a Change Request

I have changed the server name, and the AIX version I am using is 5.3

1. We make a copy of the current JVM on the server. A plain copy works:

  # cp -rp /usr/java14_64 /usr/java14_64bak 

but I prefer cpio, which also copies the soft links in the directory. Create the backup directory first, then copy:

# mkdir /usr/java14_64bak
# cd /usr/java14_64
# find . -print | cpio -pdl /usr/java14_64bak 

2. We go to the temporary location of the new JVM file. I usually store it in /tmp

# cd /tmp/JVM

3. We unzip and extract the file in one step; the trailing dash tells tar to read from the pipe

# gunzip -c Java14.sdk.tar.gz | tar -xvf -

4. Or unzip first, then extract

# gunzip Java14.sdk.tar.gz
# tar -xvf Java14.sdk.tar

5. Install the JDK/SDK base and update images. Do a preview installation first; if there are any errors, we will see them right away

        Run "smitty install"
        Select "Install and Update Software"
        Select "Install Software"
        Specify directory containing the images
        then press "Enter"

Run the preview first by setting "PREVIEW only? (install operation will NOT occur)" to yes; the same screen also offers "Preview new LICENSE agreements?"

                                                      [Entry Fields]
* INPUT device / directory for software               /tmp/JVM/
* SOFTWARE to install                                [_all_latest]                                                               +
  PREVIEW only? (install operation will NOT occur)    yes                                                                        +
  COMMIT software updates?                            yes                                                                        +
  SAVE replaced files?                                no                                                                         +
  AUTOMATICALLY install requisite software?           yes                                                                        +
  EXTEND file systems if space needed?                yes                                                                        +
  OVERWRITE same or newer versions?                   no                                                                         +
  VERIFY install and check file sizes?                no                                                                         +
  Include corresponding LANGUAGE filesets?            yes                                                                        +
  DETAILED output?                                    no                                                                         +
  Process multiple volumes?                           yes                                                                        +
  ACCEPT new license agreements?                      yes                                                                         +
  Preview new LICENSE agreements?                     no  



preview results:

installp PREVIEW:  installation will not actually occur.
*******************************************************************************

+-----------------------------------------------------------------------------+
                    Pre-installation Verification...
+-----------------------------------------------------------------------------+
Verifying selections...done
Verifying requisites...done
Results...

SUCCESSES
---------
  Filesets listed in this section passed pre-installation verification
  and will be installed.
  
  Selected Filesets
  -----------------
  Java14_64.ext.commapi 1.4.2.0               # Java SDK 64-bit Comm API Ext...
  Java14_64.ext.javahelp 1.4.2.0              # Java SDK 64-bit JavaHelp
  Java14_64.license 1.4.2.250                 # Java SDK 64-bit License
  Java14_64.samples 1.4.2.0                   # Java SDK 64-bit Samples
  Java14_64.sdk 1.4.2.380                     # Java SDK 64-bit 
  Java14_64.source 1.4.2.380                  # Java SDK 64-bit Source
  

  << End of Success Section >>

6. Do a full installation, and check the updated Java version

@smurf01/tmp# java -fullversion
java full version "J2RE 1.4.2 IBM AIX 5L for PowerPC (64 bit JVM) build caix64142-20080515 (SR11)"

Done!

UNIX : Unmount FS, mount points, VG, release disks on AIX

I am using an older version of AIX for this, 5.3 (the current version is 7.2), and I have renamed the servers for this tutorial

As usual, we start with a request from the customer to remove multiple FS and their VG, and release the disks back to the SAN team

Please create a new CR for the removal of the following file systems, mount points and volume group from sushi01.
These were all used by the temporary SAP system CRC, we have now completed our upgrade testing in CRC and the SAN disk used needs to be unassigned and returned as free space.
I have already uninstalled the CRC SAP system and database, so all the file systems are empty and can simply be unmounted and removed.

We have a list of FS to be unmounted and removed, along with their VG

Host : sushi01
Volume Group : crcvg
Mount point  File System Size
/db2/CRC  /dev/crclv 1 GB
/db2/CRC/log_dir  /dev/crclog_dirlv 20 GB
/db2/CRC/log_archive  /dev/crclogarchivelv 20 GB
/db2/CRC/log_retrieve  /dev/crcretrievelv 2 GB
/db2/CRC/saptemp1  /dev/crcsaptemp1lv 5 GB
/db2/CRC/sapdata1  /dev/crcsapdata1lv 160 GB
/db2/CRC/db2dump  /dev/crcdb2dumplv 2 GB
/db2/db2crc /dev/db2crclv 2.5 GB
/sapmnt/CRC  /dev/sapmntcrclv 8 GB
/usr/sap/CRC  /dev/usrsapcrclv 15 GB



Since no one is using these file systems anymore, all I have to do is raise a Change Request, take the necessary mksysb system backup, and go about my task

1. First we unmount the file systems, nested mount points first so that /db2/CRC itself unmounts cleanly

    #umount /db2/CRC/log_dir 
    #umount /db2/CRC/log_archive 
    #umount /db2/CRC/log_retrieve 
    #umount /db2/CRC/saptemp1 
    #umount /db2/CRC/sapdata1 
    #umount /db2/CRC/db2dump 
    #umount /db2/CRC 
    #umount /db2/db2crc 
    #umount /sapmnt/CRC 

    #umount /usr/sap/CRC 
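The ten umounts can also be looped over; a sketch, with the mount points nested under /db2/CRC listed first so the parent unmounts cleanly (shown as a dry run with echo):

```shell
# Unmount list, deepest mount points first.
FS_LIST="/db2/CRC/log_dir /db2/CRC/log_archive /db2/CRC/log_retrieve \
/db2/CRC/saptemp1 /db2/CRC/sapdata1 /db2/CRC/db2dump \
/db2/CRC /db2/db2crc /sapmnt/CRC /usr/sap/CRC"

for fs in $FS_LIST
do
    echo umount "$fs"    # dry run; drop the echo to actually unmount
done
```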

2. Then we remove the VG from the server

    # varyoffvg crcvg

    # exportvg crcvg

3. Remove the disks used for this VG

    # rmdev -dl hdisk36
    # rmdev -dl hdisk77
    # rmdev -dl hdisk38
    # rmdev -dl hdisk39  
    # rmdev -dl hdisk40

    # rmdev -dl hdisk41  

4. Lastly, I inform the SAN team that my work is done so they can release the disks as free space

UNIX: Creating a File System on a Cluster server with New Disks Allocation

Yesterday I blogged about extending an AIX file system; today we will create a new FS on a cluster server

As usual, we start with a user request

Hi Unix Adm

Pls create a new filesystem - /oracle/PRD/sapdata11 of size 80GB.

Thanks

First things first, we make sure the FS isn't already created. By the look of the requested FS name, there may be an existing /oracle/PRD/sapdata10 on the server, so look for it and find its VG name. We will need the VG name to add the new disks to in order to create the FS

1. Look for the LV and VG to use to create the new FS

root@hedkandi:(root)> bdf /oracle/PRD/sapdata10
Filesystem          kbytes    used   avail %used Mounted on
/dev/vgprddata/lvol10
                   52428800 25665564 25090534   51% /oracle/PRD/sapdata10


2. So the current LV name is lvol10 and the VG name is vgprddata. We also check the amount of free PE on this VG, which comes to about 38 GB

root@hedkandi:(root)> vgdisplay vgprddata
--- Volume groups ---
VG Name                     /dev/vgprddata
VG Write Access             read/write     
VG Status                   available, exclusive      
Max LV                      255    
Cur LV                      10     
Open LV                     10     
Max PV                      16     
Cur PV                      10     
Act PV                      10     
Max PE per PV               21247        
VGDA                        20  
PE Size (Mbytes)            4               
Total PE                    212440  
Alloc PE                    202468  
Free PE                     9972    
Total PVG                   0        
Total Spare PVs             0              
Total Spare PVs in use      0

The math I did to get this is

root@hedkandi:(root)> bc
9972*4
39888
39888/1024
38

That is, 9972 PE x 4 MB = 39888 MB, or about 38 GB

In order to raise a Change Request for this, I need to know the existing disk size so I can ask the SAN team to allocate a new disk of the same size for the FS creation. I run the vgdisplay -v command to get the complete list of disks in VG vgprddata and check a few disk sizes

From a truncated vgdisplay -v output:

   --- Physical volumes ---
   PV Name                     /dev/dsk/c10t0d3
   PV Name                     /dev/dsk/c12t0d3 Alternate Link
   PV Name                     /dev/dsk/c14t0d3 Alternate Link
   PV Name                     /dev/dsk/c8t0d3  Alternate Link
   PV Status                   available                
   Total PE                    21244   
   Free PE                     0       
   Autoswitch                  On        

   Proactive Polling           On 

Here I picked disk c8t2d7 to check its size. Make sure you use the raw device path (/dev/rdsk) when you run the diskinfo command, because the VG output only shows block device names without the "r"


root@hedkandi:(root)> diskinfo /dev/rdsk/c8t2d7
SCSI describe of /dev/rdsk/c8t2d7:
             vendor: HP      
         product id: HSV200          
               type: direct access
               size: 87031808 Kbytes
   bytes per sector: 512

How do I know what size disk to ask for? Let's convert the size from KB to GB

root@hedkandi:(root)> bc
87031808/1024/1024
83

After the SAN team has allocated the LUN, you do a hardware scan to detect the disks

# ioscan -fn

and then install the device special files:

root@hedkandi:(root)> insf -e
insf: Installing special files for asio0 instance 0 address 0/0/1/0
insf: Installing special files for asio0 instance 1 address 0/0/1/1
insf: Installing special files for sdisk instance 0 address 0/0/3/0.0.0.0
insf: Installing special files for sctl instance 0 address 0/0/3/0.0.7.0

I have truncated the output because it will show a pretty long list of all the hardware on the server

What I like to do is check the syslog at this stage, because it will show the new disks added

Jan 21 15:34:37 hedkandi vmunix: 0/5/1/0/4/0.1.0.0.0.3.0 sdisk
Jan 21 15:34:37 hedkandi vmunix: 0/5/1/0/4/0.1.1.0.0.3.0 sdisk
Jan 21 15:34:37 hedkandi vmunix: 0/2/1/0/4/0.1.0.0.0.3.0 sdisk
Jan 21 15:34:37 hedkandi vmunix: 0/2/1/0/4/0.1.1.0.0.3.0 sdisk

What's left to do is run ioscan on each disk individually and make sure its status is CLAIMED

root@hedkandi:(root)> ioscan -fnH 0/5/1/0/4/0.1.0.0.0.3.0
Class     I  H/W Path       Driver   S/W State   H/W Type     Description
==========================================================================
disk     63  0/5/1/0/4/0.1.0.0.0.3.0  sdisk    CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c14t3d0   /dev/rdsk/c14t3d0

root@hedkandi:(root)> ioscan -fnH 0/5/1/0/4/0.1.1.0.0.3.0
Class     I  H/W Path       Driver   S/W State   H/W Type     Description
==========================================================================
disk     64  0/5/1/0/4/0.1.1.0.0.3.0  sdisk    CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c8t3d0   /dev/rdsk/c8t3d0

root@hedkandi:(root)> ioscan -fnH 0/2/1/0/4/0.1.0.0.0.3.0
Class     I  H/W Path       Driver   S/W State   H/W Type     Description
==========================================================================
disk     65  0/2/1/0/4/0.1.0.0.0.3.0  sdisk    CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c10t3d0   /dev/rdsk/c10t3d0

root@hedkandi:(root)> ioscan -fnH 0/2/1/0/4/0.1.1.0.0.3.0
Class     I  H/W Path       Driver   S/W State   H/W Type     Description
==========================================================================
disk     66  0/2/1/0/4/0.1.1.0.0.3.0  sdisk    CLAIMED     DEVICE       HP      HSV200
                           /dev/dsk/c12t3d0   /dev/rdsk/c12t3d0
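Reading four ioscan blocks by eye works, but the S/W State column can also be checked mechanically. A sketch: in real life you would feed this from `ioscan -fnH "$hwpath"`; a captured sample stands in here so the check itself is demonstrable.

```shell
# Sketch: confirm every new disk's S/W State is CLAIMED.
# A captured sample stands in for real ioscan -fnH output.
cat > /tmp/ioscan.sample <<'EOF'
disk     63  0/5/1/0/4/0.1.0.0.0.3.0  sdisk    CLAIMED     DEVICE       HP      HSV200
disk     64  0/5/1/0/4/0.1.1.0.0.3.0  sdisk    CLAIMED     DEVICE       HP      HSV200
EOF
# any disk row whose 5th field is not CLAIMED is a problem
unclaimed=$(awk '$1 == "disk" && $5 != "CLAIMED"' /tmp/ioscan.sample)
if [ -z "$unclaimed" ]; then
    echo "all disks CLAIMED"
else
    printf 'NOT CLAIMED:\n%s\n' "$unclaimed"
fi
```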


From here on it is LVM management, and we start off by adding the disk to the VG vgprddata

1. Create the PV first

pvcreate /dev/rdsk/c14t3d0

2. Extend the VG of vgprddata

vgextend vgprddata /dev/dsk/c14t3d0

3. Check the extended VG size

vgdisplay vgprddata

4. Create the LV

lvcreate -L 84992 -n lvol11 -s y /dev/vgprddata
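The -L value handed to lvcreate above is in megabytes and should be a whole number of physical extents (PE Size is 32 MB on this VG, per vgdisplay). A quick sanity check, in pure shell arithmetic so it runs anywhere:

```shell
# Sanity-check an lvcreate -L size against the VG's PE size.
pe_mb=32        # PE Size (Mbytes) from vgdisplay
lv_mb=84992     # the size we intend to pass to lvcreate -L
if [ $((lv_mb % pe_mb)) -eq 0 ]; then
    echo "$lv_mb MB = $((lv_mb / pe_mb)) extents = $((lv_mb / 1024)) GB"
else
    echo "warning: $lv_mb MB is not a multiple of the $pe_mb MB PE size"
fi
```

For the 84992 here, that works out to 2656 extents, i.e. 83 GB.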

5. Run lvdisplay and confirm the new LV is there

lvdisplay /dev/vgprddata/lvol11

6. Make the directory

mkdir /oracle/PRD/sapdata11

7. Make the FS

newfs -F vxfs -o largefiles /dev/vgprddata/rlvol11

8. Mount the LV to the FS

mount /dev/vgprddata/lvol11 /oracle/PRD/sapdata11

9. Change the ownership of the FS

chown -R oraprd:dba /oracle/PRD/sapdata11

10. Check the Bad Block Relocation policy and switch it to on if it is set to NONE. Our VG already has it set to on

lvchange -r y /dev/VG/LV----------only needed if lvdisplay shows Bad Block Relocation as NONE; -r y switches it on

11. Next we edit the cluster package control file on server hedkandi to add the new FS into it. To work out which package (and therefore which file) to edit, run the cmviewcl command

root@hedkandi:(root)> cmviewcl

CLUSTER        STATUS       
kandi            up           
  
  NODE           STATUS       STATE        
  hedkandi        up           running      

    PACKAGE        STATUS       STATE        AUTO_RUN     NODE        
    prdms          up           running      enabled      hedkandi--------->this one to edit   
  
  NODE           STATUS       STATE        
  bluekandi        up           running


Therefore...

root@hedkandi:(root)> find / -name "*.cntl"
/etc/cmcluster/PRD/sapdbci.cntl
/etc/cmcluster/PRD/prdms.cntl-------------->this one to edit
/etc/cmcluster/prdms.old/prdms.cntl
/opt/samba/HA/samba.cntl
/opt/cmcluster/sap/LC/saplc.cntl
/opt/cmcluster/sap/SID/sapwas.cntl
/opt/cmcluster/sap/contrib/J2EE620/sap620Engine.cntl

12. Go to the directory that holds the control file

cd /etc/cmcluster/PRD

13. Make a backup copy of the file

cp -p prdms.cntl prdms.cntl.21012010

14. Run the more command to have a look at the VGs and the LVs in the server hedkandi

root@hedkandi:(PRD)> more /etc/cmcluster/PRD/prdms.cntl | grep vgprddata
VG[0]="vgprddata"
LV[13]="/dev/vgprddata/lvol1";  FS[13]="/oracle/PRD/sapdata1";  FS_MOUNT_OPT[13]="-o delaylog,largefiles"; FS_FSCK_OPT[13]=""; FS_TYPE[13]="vxfs"
LV[14]="/dev/vgprddata/lvol2";  FS[14]="/oracle/PRD/sapdata2";  FS_MOUNT_OPT[14]="-o delaylog,largefiles"; FS_FSCK_OPT[14]=""; FS_TYPE[14]="vxfs"
LV[15]="/dev/vgprddata/lvol3";  FS[15]="/oracle/PRD/sapdata3";  FS_MOUNT_OPT[15]="-o delaylog,largefiles"; FS_FSCK_OPT[15]=""; FS_TYPE[15]="vxfs"
LV[16]="/dev/vgprddata/lvol4";  FS[16]="/oracle/PRD/sapdata4";  FS_MOUNT_OPT[16]="-o delaylog,largefiles"; FS_FSCK_OPT[16]=""; FS_TYPE[16]="vxfs"
LV[17]="/dev/vgprddata/lvol5";  FS[17]="/oracle/PRD/sapdata5";  FS_MOUNT_OPT[17]="-o delaylog,largefiles"; FS_FSCK_OPT[17]=""; FS_TYPE[17]="vxfs"
LV[18]="/dev/vgprddata/lvol6";  FS[18]="/oracle/PRD/sapdata6";  FS_MOUNT_OPT[18]="-o delaylog,largefiles"; FS_FSCK_OPT[18]=""; FS_TYPE[18]="vxfs"
LV[20]="/dev/vgprddata/lvol7";  FS[20]="/oracle/PRD/sapdata7";  FS_MOUNT_OPT[20]="-o delaylog,largefiles"; FS_FSCK_OPT[20]=""; FS_TYPE[20]="vxfs"
LV[22]="/dev/vgprddata/lvol8";  FS[22]="/oracle/PRD/sapdata8";  FS_MOUNT_OPT[22]="-o delaylog,largefiles"; FS_FSCK_OPT[22]=""; FS_TYPE[22]="vxfs"
LV[23]="/dev/vgprddata/lvol9";  FS[23]="/oracle/PRD/sapdata9";  FS_MOUNT_OPT[23]="-o delaylog,largefiles"; FS_FSCK_OPT[23]=""; FS_TYPE[23]="vxfs"
LV[24]="/dev/vgprddata/lvol10";  FS[24]="/oracle/PRD/sapdata10";  FS_MOUNT_OPT[24]="-o delaylog,largefiles"; FS_FSCK_OPT[24]=""; FS_TYPE[24]="vxfs"


15. From here, we can see that the last LV runs at number 24, so you will need to add another line under it for the FS recently created

LV[25]="/dev/vgprddata/lvol11";  FS[25]="/oracle/PRD/sapdata11";  FS_MOUNT_OPT[25]="-o delaylog,largefiles"; FS_FSCK_OPT[25]=""; FS_TYPE[25]="vxfs"
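Note that the array skips some numbers (19 and 21 are absent), so rather than counting by eye you can let the shell find the highest LV[] index in use. A sketch, with a stub standing in for /etc/cmcluster/PRD/prdms.cntl:

```shell
# Sketch: find the highest LV[] index in a Serviceguard control file,
# so the new entry goes in at index+1.
# A stub stands in for /etc/cmcluster/PRD/prdms.cntl here.
cat > /tmp/prdms.cntl.stub <<'EOF'
LV[23]="/dev/vgprddata/lvol9";  FS[23]="/oracle/PRD/sapdata9"
LV[24]="/dev/vgprddata/lvol10"; FS[24]="/oracle/PRD/sapdata10"
EOF
# extract the numeric index from each LV[...] line, take the largest
last=$(sed -n 's/^LV\[\([0-9]*\)\].*/\1/p' /tmp/prdms.cntl.stub | sort -n | tail -1)
echo "next free index: $((last + 1))"
```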

16. Run the cmviewcl command again, and make sure the prdms package is UP

# cmviewcl 

17. Halt the package and disable switching. Note that cmmodpkg -d on its own only disables AUTO_RUN; it does not bring the package down, which is what cmhaltpkg is for

# cmhaltpkg prdms
# cmmodpkg -d prdms

Run cmviewcl once more, and make sure the prdms package status is DOWN and AUTO_RUN is disabled

# cmviewcl 

18. Now we make a note of the ownership and permissions of the VG map file for vgprddata

root@hedkandi:(PRD)> ll /etc/lvmconf/vgprddata.mapfile
-rw-r--r--   1 root       sys            104 Jan 21 10:30 /etc/lvmconf/vgprddata.mapfile

19. Do a tail on the file

root@hedkandi:(PRD)> tail  /etc/lvmconf/vgprddata.mapfile
1 lvol1
2 lvol2
3 lvol3
4 lvol4
5 lvol5
6 lvol6
7 lvol7
8 lvol8
9 lvol9
10 lvol10

20. Run vgexport in preview mode (-p) to regenerate the map file so it picks up the new LV

# vgexport -m /etc/lvmconf/vgprddata.mapfile -pvs /dev/vgprddata

21. Tail the file again, and you will see the new LV added to the map file of VG vgprddata
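That check is easy to script as well. A sketch, with a sample standing in for the real /etc/lvmconf/vgprddata.mapfile:

```shell
# Sketch: confirm the regenerated map file now lists the new LV.
# A sample stands in for /etc/lvmconf/vgprddata.mapfile.
cat > /tmp/vgprddata.mapfile.sample <<'EOF'
9 lvol9
10 lvol10
11 lvol11
EOF
if grep -q ' lvol11$' /tmp/vgprddata.mapfile.sample; then
    echo "lvol11 present in map file"
else
    echo "lvol11 MISSING - rerun the vgexport preview"
fi
```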

22. Now copy the map file and the cntl file over to the cluster node bluekandi

# scp /etc/lvmconf/vgprddata.mapfile root@bluekandi:/tmp

# scp /etc/cmcluster/PRD/prdms.cntl root@bluekandi:/tmp

23. Before we move over to server bluekandi, we confirm the LV device files under /dev/vgprddata (the new lvol11 and rlvol11 should appear alongside the ones below)

root@hedkandi:(lvmconf)> ls -l /dev/vgprddata | grep lvol1*
brw-r-----   1 root       sys         64 0x040001 Nov 16  2008 lvol1
brw-r-----   1 root       sys         64 0x04000a Sep  8 12:58 lvol10
brw-r-----   1 root       sys         64 0x040002 Nov 16  2008 lvol2
brw-r-----   1 root       sys         64 0x040003 Nov 16  2008 lvol3
brw-r-----   1 root       sys         64 0x040004 Nov 16  2008 lvol4
brw-r-----   1 root       sys         64 0x040005 Nov 16  2008 lvol5
brw-r-----   1 root       sys         64 0x040006 Nov 16  2008 lvol6
brw-r-----   1 root       sys         64 0x040007 Nov 16  2008 lvol7
brw-r-----   1 root       sys         64 0x040008 Nov 16  2008 lvol8
brw-r-----   1 root       sys         64 0x040009 Dec 22  2008 lvol9
crw-r-----   1 root       sys         64 0x040001 Nov 16  2008 rlvol1
crw-r-----   1 root       sys         64 0x04000a Sep  8 12:58 rlvol10
crw-r-----   1 root       sys         64 0x040002 Nov 16  2008 rlvol2
crw-r-----   1 root       sys         64 0x040003 Nov 16  2008 rlvol3
crw-r-----   1 root       sys         64 0x040004 Nov 16  2008 rlvol4
crw-r-----   1 root       sys         64 0x040005 Nov 16  2008 rlvol5
crw-r-----   1 root       sys         64 0x040006 Nov 16  2008 rlvol6
crw-r-----   1 root       sys         64 0x040007 Nov 16  2008 rlvol7
crw-r-----   1 root       sys         64 0x040008 Nov 16  2008 rlvol8
crw-r-----   1 root       sys         64 0x040009 Dec 22  2008 rlvol9

Now we can go to server bluekandi and import the updated VG, LV and FS from server hedkandi. Because both servers are clustered and the new LUNs are presented to both, you will need to run the hardware scan commands on bluekandi as well

# ioscan -fn

# insf -e


1. We make a backup of the cntl file on server bluekandi

root@bluekandi:(root)> cd /etc/cmcluster/PRD/

root@bluekandi:(PRD)> ls -l 
total 1600
-rwxr-xr-x   1 root       sys          13018 Oct 18  2006 hanfs.sh
-rwx------   1 root       sys          60199 Dec 14 16:17 prdms.cntl-------------bkp this one
-rwx------   1 root       sys          59574 Oct 28  2006 prdms.cntl-28Nov2006
-rwx------   1 root       sys          59692 Mar 27  2007 prdms.cntl.20070609
-rwx------   1 root       sys          59758 Mar 20  2008 prdms.cntl.20080320
-rwx------   1 root       sys          59904 Mar 20  2008 prdms.cntl.20081222
-rwx------   1 root       sys          60050 Dec 22  2008 prdms.cntl.20090908
-rw-r--r--   1 root       sys          56557 Nov 16  2008 prdms.cntl.log
-rw-r--r--   1 root       root         18727 Oct 28  2006 prdms.cntl.log_1
-rw-r--r--   1 root       root         80408 Jun 17  2007 prdms.cntl.log_2
-rwx------   1 root       sys          60199 Jan 20 20:31 prdms.cntl_new
-rwx------   1 root       sys          59779 Jun  9  2007 prdms.cntl_pJun07
-rwx------   1 root       sys          59571 Oct 27  2006 prdms.cntl_save
-rwx------   1 root       sys          16325 Jun 18  2007 prdms.conf
-rwxr-xr-x   1 root       sys          17481 Jun 14  2007 sap.conf
-rwxr-xr-x   1 root       sys           3839 Dec 17  2002 sapdbci.cntl

2. Remember the cntl and map files we scp'd from hedkandi to bluekandi's /tmp earlier? We copy them into place and fix their ownership and permissions if we have to

# cp -p /tmp/prdms.cntl /etc/cmcluster/PRD/prdms.cntl

3. On cluster server bluekandi, we make a backup of the VG map file

cd /etc/lvmconf/

root@bluekandi:(lvmconf)> ls -lrt vgprddata.mapfile*
-rw-r--r--   1 root       sys            104 Jan 21 11:02 vgprddata.mapfile
root@bluekandi:(lvmconf)>

cp -p vgprddata.mapfile vgprddata.mapfile.21012010

4. Then we copy the new map file from /tmp over the old one and restore its ownership

# cp /tmp/vgprddata.mapfile vgprddata.mapfile

# chown root:sys vgprddata.mapfile

5. We do one final check of the map and control files

# ll /etc/lvmconf/vgprddata.mapfile
# ll /etc/cmcluster/PRD/prdms.cntl

6. Finally, on bluekandi we export the stale VG definition (vgexport removes the /dev/vgprddata directory)

# vgexport -v vgprddata
# ls -l /dev/vgprddata

Still on bluekandi, we recreate the VG's group file and import the VG using the new map file. The group file's minor number must match the one on hedkandi (check with ls -l /dev/vgprddata/group there)

# mkdir /dev/vgprddata
# mknod /dev/vgprddata/group c 64 0x040000
# vgimport -m /etc/lvmconf/vgprddata.mapfile -vs /dev/vgprddata

# tail /etc/lvmconf/vgprddata.mapfile

7. We make sure the minor number is the same on both servers

hedkandi

ls -l /dev/*/group | grep vgprddata

bluekandi

ls -l /dev/*/group | grep vgprddata
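The minor number is the 0x04... field in the ls -l output. A sketch of comparing the two: here, two captured ls -l lines stand in for the real commands on each node, and the 0x040000 values are illustrative, not taken from the real boxes.

```shell
# Sketch: compare the vgprddata group file minor numbers from both nodes.
# Captured ls -l lines stand in for running the command on each server;
# the 0x040000 minor is an illustrative value.
hed='crw-r-----   1 root   sys   64 0x040000 Nov 16  2008 /dev/vgprddata/group'
blue='crw-r-----   1 root   sys   64 0x040000 Jan 21  2010 /dev/vgprddata/group'
m1=$(echo "$hed"  | awk '{ print $6 }')   # minor number is field 6
m2=$(echo "$blue" | awk '{ print $6 }')
if [ "$m1" = "$m2" ]; then
    echo "minor numbers match: $m1"
else
    echo "MISMATCH: $m1 vs $m2 - fix the group file minor on bluekandi"
fi
```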

We go to cluster server bluekandi and set the AutoPath load balancing policy for the new disks to SST (shortest service time)

# autopath set -l "LUN_WWN" -b SST

We then enable the cluster package prdms

# cmmodpkg -e prdms

# cmviewcl

Done!