Archive for the ‘Storage’ Category

QNAP: How to add a static route or add commands that should be executed during startup/reboot

Problem description:

You need to add an additional static route to your QNAP storage system. “route add” works fine via SSH, but the route does not persist across a reboot.

 
Problem resolution:

Unfortunately, configuring routes is not possible via the web interface. You will have to connect via SSH and edit a script that is executed on every startup.

Just follow the next steps:

  1. Connect via ssh

    mbp:~ matt $ ssh admin@qnap15
    admin@qnap15's password: 
    [~] #

     

  2. Mount the QNAP configuration device

    The name of the actual device to mount depends on your model. In general, for x86-based systems this should be /dev/sdx6; for Marvell ARM based models it should be /dev/mtdblock5 or /dev/mtdblock4. In this case we are working on a higher-end x86-based model and will mount /dev/sdx6:

    [~] # mount -t ext2 /dev/sdx6 /tmp/config
    [~] # df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev/ramdisk            139.5M    111.9M     27.5M  80% /
    tmpfs                    64.0M    164.0k     63.8M   0% /tmp
    /dev/sda4               310.0M    220.0M     90.0M  71% /mnt/ext
    /dev/md9                509.5M     88.8M    420.6M  17% /mnt/HDA_ROOT
    /dev/md0                  7.2T    402.7G      6.8T   5% /share/MD0_DATA
    tmpfs                    32.0M         0     32.0M   0% /.eaccelerator.tmp
    /dev/sdx6                 3.9M     44.0k      3.8M   1% /tmp/config
    [~] # 
    [~] # ls -al /tmp/config/
    drwxr-xr-x    3 admin    administ     1024 May 11 17:43 ./
    drwxrwxrwx    8 admin    administ     1160 May 11 17:43 ../
    -rw-r--r--    1 admin    administ        1 May  3 17:30 BOOT_COUNT
    -rw-r--r--    1 admin    administ       27 Mar  6 12:21 customise.conf
    drwx------    2 admin    administ    12288 Jan 11  2010 lost+found/
    -rw-r--r--    1 admin    administ     3036 May  3 17:28 smb.conf
    -rw-r--r--    1 admin    administ       11 May  3 17:28 smb.conf.cksum
    -rw-r--r--    1 admin    administ       37 May  3 17:28 system.map.key
    -rw-r--r--    1 admin    administ     4552 Mar  8 11:48 uLinux.conf
    [~] #
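
    If /tmp/config does not exist yet on your system, create it before mounting. On a Marvell ARM based model the device name differs as well; a minimal sketch (the mtdblock device is an assumption, verify it for your model):

    [~] # mkdir -p /tmp/config
    [~] # mount -t ext2 /dev/mtdblock5 /tmp/config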

     

  3. Create or adapt autorun.sh

    A shell script called “autorun.sh” is executed by your QNAP storage system on every startup. If you have a freshly installed QNAP and have never created or adapted “autorun.sh”, it is possible that the file does not exist yet.
    In this example we created “autorun.sh” with the following content:

    [~] # vi /tmp/config/autorun.sh
    [~] # 
    [~] # cat /tmp/config/autorun.sh
    #!/bin/sh
     
    route add -net 192.168.35.0 netmask 255.255.255.0 gw 192.168.91.254
     
    [~] #
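
    autorun.sh can hold any number of startup commands. If you want to check later whether the script actually ran, a small logging line helps. A minimal sketch (the log path is an assumption; pick a location that is already mounted when autorun.sh is executed):

    #!/bin/sh

    # log each execution for troubleshooting (log path is an assumption)
    echo "$(date) autorun.sh executed" >> /mnt/HDA_ROOT/autorun.log

    route add -net 192.168.35.0 netmask 255.255.255.0 gw 192.168.91.254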

     

  4. Make sure that autorun.sh is executable

    In order for “autorun.sh” to be executed during startup, we have to set the executable flag:

    [~] # ls -al /tmp/config/autorun.sh   
    -rw-r--r--    1 admin    administ       60 May 11 17:43 /tmp/config/autorun.sh
    [~] # 
    [~] # chmod +x /tmp/config/autorun.sh 
    [~] # 
    [~] # ls -al /tmp/config/autorun.sh   
    -rwxr-xr-x    1 admin    administ       60 May 11 17:43 /tmp/config/autorun.sh*
    [~] #
  5. Test the command or autorun.sh

    Test the commands you added to autorun.sh or, if the script was newly created, test the execution of the whole script. In this case we created a completely new autorun.sh and can therefore test the whole execution:

    [~] # /tmp/config/autorun.sh
    [~] # netstat -rn
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    ...
    192.168.35.0    192.168.91.254  255.255.255.0   UG    0      0        0 eth0
    ...
    [~] #
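
    Note: “route add” fails if the route is already present (typically with “SIOCADDRT: File exists”). If you added the route manually earlier in the session, delete it first and rerun the test:

    [~] # route del -net 192.168.35.0 netmask 255.255.255.0 gw 192.168.91.254
    [~] # /tmp/config/autorun.sh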
  6. Unmount /tmp/config

    If everything went well, unmount /tmp/config:

    [~] # umount /tmp/config/
    [~] # df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev/ramdisk            139.5M    111.9M     27.5M  80% /
    tmpfs                    64.0M    156.0k     63.8M   0% /tmp
    /dev/sda4               310.0M    219.9M     90.1M  71% /mnt/ext
    /dev/md9                509.5M     88.8M    420.6M  17% /mnt/HDA_ROOT
    /dev/md0                  7.2T    400.2G      6.8T   5% /share/MD0_DATA
    tmpfs                    32.0M         0     32.0M   0% /.eaccelerator.tmp
    [~] #
Categories: QNAP

EMC: Net / Raw Capacity of an EMC disk

October 20th, 2011 Matthias Pölzinger

Problem description:

As a DBA, system architect or engineer you are sometimes tasked with creating the storage layout for database systems. In order to get a detailed picture of how much capacity is available for your system, you have to calculate the net capacity for your RAID level. Unfortunately, EMC disks do not provide the capacity they are labeled with (independent of the RAID level used). In order to calculate the correct capacity for your RAID level, you have to start from the raw capacity of an EMC disk.

 
Problem resolution:

Below you can find a table of EMC Clariion disks and their actual raw capacity:

Disk Size    Raw / Usable Capacity
73 GB        66.61 GB
146 GB       133.65 GB
250 GB       230.13 GB
300 GB       268.37 GB
320 GB       297.01 GB
400 GB       366.76 GB
450 GB       402.57 GB
500 GB       458.56 GB
750 GB       687.86 GB
1 TB         917.15 GB
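
A quick worked example based on the table above: a RAID 5 (4+1) group built from five 300 GB disks provides roughly 4 × 268.37 GB ≈ 1,073 GB of usable capacity, not the 4 × 300 GB = 1,200 GB the disk labels might suggest.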
Categories: Clariion, EMC, Storage

Linux: How to query the WWPN of Fibre Channel HBA ports

October 8th, 2011 Matthias Pölzinger

Problem description:

The WWPNs (World Wide Port Names) of your HBAs are required, for example, in order to configure Fibre Channel LUN access. You want to use Linux tools to retrieve this information so you can avoid rebooting the server and reading it from the BIOS or some other utility.

 
Problem resolution:

Current Linux kernel versions provide the relevant information about FC HBAs through the sysfs filesystem mounted at /sys. The exact location depends on your Linux OS version. For CentOS / Red Hat Enterprise Linux / Oracle Linux 5 the path is /sys/class/scsi_host/host*/device/fc_host*/, while for version 6 it changed to /sys/class/fc_host/host*/, which makes it easier to distinguish between SCSI controllers and FC HBAs.

Example for CentOS / RHEL / Oracle Linux 5:

[root@initso01 ~]# ls -al /sys/class/scsi_host/host5/device/fc_host:host5/
total 0
drwxr-xr-x 4 root root    0 Oct  5 17:03 .
drwxr-xr-x 6 root root    0 Oct  5 17:03 ..
lrwxrwxrwx 1 root root    0 Oct  5 17:03 device -> ../../../devices/pci0000:00/0000:00:03.0/0000:15:00.0/host5
-r--r--r-- 1 root root 4096 Oct  7 10:07 fabric_name
--w------- 1 root root 4096 Oct  7 10:07 issue_lip
-r--r--r-- 1 root root 4096 Oct  7 10:07 max_npiv_vports
-r--r--r-- 1 root root 4096 Oct  7 10:07 node_name
-r--r--r-- 1 root root 4096 Oct  7 10:07 npiv_vports_inuse
-r--r--r-- 1 root root 4096 Oct  7 10:07 port_id
-r--r--r-- 1 root root 4096 Oct  7 10:07 port_name
-r--r--r-- 1 root root 4096 Oct  7 10:07 port_state
-r--r--r-- 1 root root 4096 Oct  7 10:07 port_type
drwxr-xr-x 2 root root    0 Oct  7 10:07 power
-r--r--r-- 1 root root 4096 Oct  7 10:07 speed
drwxr-xr-x 2 root root    0 Oct  7 10:07 statistics
lrwxrwxrwx 1 root root    0 Oct  5 17:04 subsystem -> ../../fc_host
-r--r--r-- 1 root root 4096 Oct  7 10:07 supported_classes
-r--r--r-- 1 root root 4096 Oct  7 10:07 supported_speeds
-r--r--r-- 1 root root 4096 Oct  7 10:07 symbolic_name
-rw-r--r-- 1 root root 4096 Oct  7 10:07 system_hostname
-rw-r--r-- 1 root root 4096 Oct  7 10:07 tgtid_bind_type
-rw-r--r-- 1 root root 4096 Oct  5 17:03 uevent
--w------- 1 root root 4096 Oct  7 10:07 vport_create
--w------- 1 root root 4096 Oct  7 10:07 vport_delete
[root@initso01 ~]#

 
These fc_host directories can be used to determine the port speed:

[root@initso01 ~]# for i in `ls /sys/class/scsi_host/host*/device/fc_host*/speed`; do echo $i; echo "==============="; cat $i; done
/sys/class/scsi_host/host5/device/fc_host:host5/speed
===============
unknown
/sys/class/scsi_host/host6/device/fc_host:host6/speed
===============
4 Gbit
/sys/class/scsi_host/host7/device/fc_host:host7/speed
===============
unknown
/sys/class/scsi_host/host8/device/fc_host:host8/speed
===============
4 Gbit
[root@initso01 ~]#

 
or to query the WWPN of each port:

[root@initso01 ~]# for i in `ls /sys/class/scsi_host/host*/device/fc_host*/port_name`; do echo $i; echo "==============="; cat $i; done
/sys/class/scsi_host/host5/device/fc_host:host5/port_name
===============
0x21000024ff2e30ce
/sys/class/scsi_host/host6/device/fc_host:host6/port_name
===============
0x21000024ff2e30cf
/sys/class/scsi_host/host7/device/fc_host:host7/port_name
===============
0x21000024ff2e30cc
/sys/class/scsi_host/host8/device/fc_host:host8/port_name
===============
0x21000024ff2e30cd
[root@initso01 ~]#

 
Just omit the hexadecimal prefix “0x” and you have the WWPN for each Fibre Channel HBA port.

 
Example for CentOS / RHEL / Oracle Linux 6:

[root@initso02 ~]# ls -al /sys/class/fc_host/host1/
total 0
drwxr-xr-x. 4 root root    0 Aug 11 07:11 .
drwxr-xr-x. 3 root root    0 Aug 11 07:11 ..
-rw-r--r--. 1 root root 4096 Aug 13 10:29 dev_loss_tmo
lrwxrwxrwx. 1 root root    0 Aug 13 10:29 device -> ../../../host1
-r--r--r--. 1 root root 4096 Aug 13 10:29 fabric_name
--w-------. 1 root root 4096 Aug 13 10:29 issue_lip
-r--r--r--. 1 root root 4096 Aug 13 10:29 max_npiv_vports
-r--r--r--. 1 root root 4096 Aug 13 10:29 node_name
-r--r--r--. 1 root root 4096 Aug 13 10:29 npiv_vports_inuse
-r--r--r--. 1 root root 4096 Aug 13 10:29 port_id
-r--r--r--. 1 root root 4096 Aug 13 10:14 port_name
-r--r--r--. 1 root root 4096 Aug 13 10:29 port_state
-r--r--r--. 1 root root 4096 Aug 13 10:29 port_type
drwxr-xr-x. 2 root root    0 Aug 13 10:29 power
-r--r--r--. 1 root root 4096 Aug 13 10:29 speed
drwxr-xr-x. 2 root root    0 Aug 13 10:29 statistics
lrwxrwxrwx. 1 root root    0 Aug 11 07:11 subsystem -> ../../../../../../../class/fc_host
-r--r--r--. 1 root root 4096 Aug 13 10:29 supported_classes
-r--r--r--. 1 root root 4096 Aug 13 10:29 supported_speeds
-r--r--r--. 1 root root 4096 Aug 13 10:29 symbolic_name
-rw-r--r--. 1 root root 4096 Aug 13 10:29 system_hostname
-rw-r--r--. 1 root root 4096 Aug 13 10:29 tgtid_bind_type
-rw-r--r--. 1 root root 4096 Aug 11 07:11 uevent
--w-------. 1 root root 4096 Aug 13 10:29 vport_create
--w-------. 1 root root 4096 Aug 13 10:29 vport_delete
[root@initso02 ~]#

 
As with version 5, these fc_host directories can be used to determine the port speed:

[root@initso02 ~]# for i in `ls /sys/class/fc_host/host*/speed`; do echo $i; echo "==============="; cat $i; done
/sys/class/fc_host/host1/speed
===============
4 Gbit
/sys/class/fc_host/host2/speed
===============
4 Gbit
[root@initso02 ~]#

 
or to query the WWPN of each port:

[root@initso02 ~]# for i in `ls /sys/class/fc_host/host*/port_name`; do echo $i; echo "==============="; cat $i; done
/sys/class/fc_host/host1/port_name
===============
0x24000024ee09a545
/sys/class/fc_host/host2/port_name
===============
0x24000024ee09a578
[root@initso02 ~]#

 
As with version 5, just omit the hexadecimal prefix “0x” and you have the WWPN for each Fibre Channel HBA port.
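
If you just want the bare WWPNs without the “0x” prefix, a small loop can do the stripping for you. A minimal sketch for the version 6 path (on version 5, swap in the /sys/class/scsi_host/host*/device/fc_host*/port_name glob):

[root@initso02 ~]# for f in /sys/class/fc_host/host*/port_name; do printf "%s: %s\n" "$f" "$(sed 's/^0x//' $f)"; done
/sys/class/fc_host/host1/port_name: 24000024ee09a545
/sys/class/fc_host/host2/port_name: 24000024ee09a578
[root@initso02 ~]#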

NetApp: “No FCP Target Adapters are present in this system.” when trying to perform FC configuration

September 24th, 2011 Matthias Pölzinger

Problem description:

You have installed FC modules in your NetApp storage system and want to configure Fibre Channel for LUN provisioning, but the fcp command always responds with “No FCP Target Adapters are present in this system.”:

netapp01> fcp config
fcp: No FCP Target Adapters are present in this system.
netapp01> fcp status
fcp: No FCP Target Adapters are present in this system.
netapp01>

 
Although the “storage show adapter” command correctly displays your FC host adapters:

netapp01> storage show adapter
 
Slot:            0a
Description:     Fibre Channel Host Adapter 0a (QLogic 2432 rev. 2)
Firmware Rev:    4.4.0
FC Node Name:    9:00a:098088:25dxyz
FC Packet Size:  2048
Link Data Rate:  4 Gbit
SRAM Parity:     Yes
External GBIC:   No
State:           Disabled
In Use:          No
Redundant:       Yes
 
Slot:            0b
Description:     Fibre Channel Host Adapter 0b (QLogic 2432 rev. 2)
Firmware Rev:    4.4.0
FC Node Name:    9:00a:098188:25dxyz
FC Packet Size:  2048
Link Data Rate:  4 Gbit
SRAM Parity:     Yes
External GBIC:   No
State:           Disabled
In Use:          No
Redundant:       Yes
 
Slot:            0c
Description:     SAS Host Adapter 0c (LSI Logic 1068E rev. B2)
Firmware Rev:    1.26.03.00
Base WWN:        a:060xyz:2000888:00
State:           Enabled
In Use:          Yes
Redundant:       No
Phy State:       [0] Enabled, 3.0Gb/s (9)
                 [1] Enabled, 3.0Gb/s (9)
                 [2] Enabled, 3.0Gb/s (9)
                 [3] Enabled, 3.0Gb/s (9)
Slot:            0e
Description:     IDE Host Adapter 0e
netapp01>

 
Cause:

FC modules, whether newly installed or already present, are by default in so-called “initiator mode”:

netapp01> fcadmin config             
 
                  Local   
Adapter Type      State                  Status
---------------------------------------------------
  0a   initiator  CONFIGURED.            online
  0b   initiator  CONFIGURED.            online
 
netapp01>

 
If you want to provide LUNs via Fibre Channel, you have to switch to “target mode”.

 
Problem resolution:

You can change the mode of an adapter via the “fcadmin” command. In this example we will switch both adapters to “target mode”:

netapp01> fcadmin config             
 
                  Local   
Adapter Type      State                  Status
---------------------------------------------------
  0a   initiator  CONFIGURED.            online
  0b   initiator  CONFIGURED.            online
 
netapp01> fcadmin config -d 0a
Mon Sep 19 15:59:38 CEST [netapp01: fci.adapter.offlining:info]: Offlining Fibre Channel adapter 0a.
Mon Sep 19 15:59:38 CEST [netapp01: fci.adapter.offline:info]: Fibre Channel adapter 0a is now offline.
 
netapp01> fcadmin config -t target 0a
Mon Sep 19 15:59:54 CEST [netapp01: fci.config.state:info]: Fibre channel initiator adapter 0a is in the PENDING (target) state.
A reboot is required for the new adapter configuration to take effect.
netapp01>
netapp01> fcadmin config -d 0b
Mon Sep 19 16:00:00 CEST [netapp01: fci.adapter.offlining:info]: Offlining Fibre Channel adapter 0b.
Mon Sep 19 16:00:01 CEST [netapp01: fci.adapter.offline:info]: Fibre Channel adapter 0b is now offline.
 
netapp01> fcadmin config -t target 0b
Mon Sep 19 16:00:40 CEST [netapp01: fci.config.state:info]: Fibre channel initiator adapter 0b is in the PENDING (target) state.
A reboot is required for the new adapter configuration to take effect.
netapp01>
netapp01> fcadmin config             
 
                  Local   
Adapter Type      State                  Status
---------------------------------------------------
  0a   initiator  PENDING (target)       offline
  0b   initiator  PENDING (target)       offline
 
netapp01>

 
In order to complete the configuration, you will have to reboot. After the reboot the adapters will be in target mode and you will be able to configure Fibre Channel for serving LUNs to your servers.
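
After the reboot, a quick sanity check is to look at the adapter state again and start the FCP service. A sketch assuming Data ONTAP 7-Mode with the FCP license already installed:

netapp01> fcadmin config
netapp01> fcp start
netapp01> fcp status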

Categories: NetApp, Storage

Navisphere: hostname.domain: is not a Managed Host, unable to retrieve Remote Agent Configuration.

Problem description:

You have installed the Navisphere Host Agent and configured it appropriately. Navisphere on the storage side still fails to detect the host and displays messages like “hostname.domain: is not a Managed Host, unable to retrieve Remote Agent Configuration.”:



Cause:
Navisphere cannot contact the host agent. This can have several causes, such as:

  • Blocked firewall ports
  • Multiple NICs

Problem resolution:
In my case, the problem was caused by multiple installed network interfaces. In this scenario the Navisphere Host Agent does not work properly without a special configuration file.

If you have multiple NICs, you have to create the file /agentID.txt containing the fully qualified hostname on the first line and the IP address to use on the second line. Example:

[root@linux01 /]# cat /agentID.txt
linux01.initso.local
192.168.100.101
[root@linux01 /]#

Afterwards restart the Navisphere Host Agent:

[root@linux01 bin]# service naviagent restart
Shutting down Naviagent:                                   [  OK  ]
Starting Navisphere agent:                                 [  OK  ]
[root@linux01 bin]#
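
To rule out the blocked-firewall-ports cause, you can also verify that the agent is listening after the restart. A quick check (the Navisphere Host Agent normally listens on TCP port 6389; verify the port for your release):

[root@linux01 bin]# netstat -tlnp | grep 6389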
Categories: Clariion, EMC, Linux, Navisphere, Storage