DB2 Version 10.1 for Linux, UNIX, and Windows

Installation prerequisites for DB2 pureScale Feature (Linux)

Before you install IBM® DB2® pureScale® Feature, you must ensure that your system meets the following network, hardware, firmware, storage, and software requirements.

You can use the db2prereqcheck command to check the software and firmware prerequisites of a specific DB2 version. Be sure to run the db2prereqcheck command on all the computers you plan to use in your DB2 pureScale Feature environment.
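For example, a typical invocation from the root directory of the DB2 installation image looks like the following sketch; the version string is an example and should match the DB2 version that you plan to install:

  # Check DB2 pureScale prerequisites (-p) for DB2 Version 10.1 (-v)
  ./db2prereqcheck -p -v 10.1.0.0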

Important: For the most up-to-date installation requirements for DB2 database products, refer to the System requirements for IBM DB2 for Linux, UNIX, and Windows and System requirements for IBM DB2 Connect™ technotes. These technotes use IBM Software Product Compatibility Reports (SPCR). With the SPCR tool, you can find complete lists of supported operating systems, system requirements, prerequisites, and optional supported software for DB2 database products. This DB2 Information Center topic might be removed in a future release or fix pack.

Network prerequisites

Two networks are required: one Ethernet network and one high speed communication network. The high speed communication network must be either an InfiniBand (IB) network or a 10 Gigabit Ethernet (10GE) network; a mixture of these two network types is not supported.
Note: Although a single Ethernet adapter is required for a DB2 pureScale Feature environment, you should set up Ethernet bonding for the network if you have two Ethernet adapters. Ethernet bonding (also known as channel bonding) is a setup where two or more network interfaces are combined. Ethernet bonding provides redundancy and better resilience in the event of Ethernet network adapter failures. Refer to your Ethernet adapter documentation for instructions on configuring Ethernet bonding; a minimal configuration sketch follows this note. Bonding the high speed communication network is not supported.
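The following is a minimal sketch of an active-backup bond on a RHEL-style system, shown only to illustrate the idea; the interface names, IP address, and bonding mode are placeholders, and your adapter and distribution documentation take precedence:

  # /etc/sysconfig/network-scripts/ifcfg-bond0 (the bonded interface)
  DEVICE=bond0
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=10.1.1.10
  NETMASK=255.255.255.0
  BONDING_OPTS="mode=1 miimon=100"

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for each slave adapter, for example eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes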
Table 1. High speed communication adapter requirements for rack-mounted servers
  Communication adapter type: InfiniBand (IB)
    Switch: QDR IB
    IBM validated switch: Mellanox part number MIS5030Q-1SFC
    Cabling: QSFP cables
  Communication adapter type: 10 Gigabit Ethernet (10GE)
    Switch: 10GE
    IBM validated switches:
      1. Blade Network Technologies® RackSwitch™ G8124
      2. Cisco Nexus 5596 Unified Ports Switch
    Cabling: Small Form-factor Pluggable Plus (SFP+) cables
  1. DB2 pureScale environments with Linux systems and InfiniBand communication adapters require FabricIT EFM switch-based fabric management software. For communication adapter port support on CF servers, the minimum fabric manager software image that must be installed on the switch is image-PPC_M405EX-EFM_1.1.2500.img. The switch might not support a direct upgrade path to the minimum version, in which case multiple upgrades are required. For instructions on upgrading the fabric manager software on a specific Mellanox switch, see the Mellanox website: http://www.mellanox.com/content/pages.php?pg=ib_fabricit_efm_management&menu_section=55. Enabling the subnet manager (SM) on the switch is mandatory for InfiniBand networks. To create a DB2 pureScale environment with multiple switches, you must have multiple communication adapter ports on the CF servers and configure switch failover on the switches. To support switch failover, see the Mellanox website for instructions on setting up the subnet manager for a high availability domain.
  2. Cable considerations:
    • On InfiniBand networks: QSFP 4 x 4 QDR cables are used to connect hosts to the switch and for inter-switch links. If two switches are used, two or more inter-switch links are required. The maximum number of inter-switch links required can be determined as half of the total number of communication adapter ports connected from CFs and members to the switches. For example, in a two-switch DB2 pureScale environment where the primary and secondary CF each have four communication adapter ports and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4)/2).
    • On a 10GE network, the maximum number of ISLs can be further limited by the number of ports supported by the Link Aggregation Control Protocol (LACP), which is part of the setup required for switch failover. Because this limit can differ between switch vendors, refer to the switch manual for any such limitation. For example, the Blade Network Technologies G8124 24-port switch with Blade OS 6.3.2.0 has a limit of 8 ports in each LACP trunk between the two switches, which effectively caps the maximum number of ISLs at four (4 ports on each switch).
  3. In general, any 10GE switch that supports global pause flow control, as specified by IEEE 802.3x, is also supported. However, the exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
Table 2. High speed communication adapter requirements for BladeCenter HS22 servers
  Communication adapter type: InfiniBand (IB)
    Switch: Voltaire 40 Gb InfiniBand Switch 1, for example part number 46M6005
    Cabling: QSFP cables 2
  Communication adapter type: 10 Gigabit Ethernet (10GE) 3
    Switch: BNT® Virtual Fabric 10 Gb Switch Module for IBM BladeCenter®, for example part number 46C7191
  1. To create a DB2 pureScale environment with multiple switches, set up multiple communication adapter ports for the CF hosts.
  2. Cable considerations:
    • On InfiniBand networks: QSFP 4 x 4 QDR cables are used to connect hosts to the switch and for inter-switch links. If two switches are used, two or more inter-switch links are required. The maximum number of inter-switch links required can be determined as half of the total number of communication adapter ports connected from CFs and members to the switches. For example, in a two-switch DB2 pureScale environment where the primary and secondary CF each have four communication adapter ports and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4)/2). On a 10GE network, the maximum number of ISLs can be further limited by the number of ports supported by the Link Aggregation Control Protocol (LACP), which is part of the setup required for switch failover. Because this limit can differ between switch vendors, refer to the switch manual for any such limitation. For example, the Blade Network Technologies G8124 24-port switch with Blade OS 6.3.2.0 has a limit of 8 ports in each LACP trunk between the two switches, which effectively caps the maximum number of ISLs at four (4 ports on each switch).
  3. For more information about using DB2 pureScale Feature with application cluster transparency in BladeCenter, see this developerWorks® article: http://www.ibm.com/developerworks/data/library/techarticle/dm-1110purescalebladecenter/.
Note: If a member exists on the same host as a cluster caching facility (CF), the cluster interconnect netname in db2nodes.cfg for the member and CF must be the same.
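For illustration only, a hypothetical db2nodes.cfg fragment for a host (hostA) that runs both a member and a CF might look like the following; the host names and netnames are placeholders, and the key point is that the member and CF entries on hostA use the same cluster interconnect netname:

  0 hostA 0 hostA-ib0 - MEMBER
  1 hostB 0 hostB-ib0 - MEMBER
  128 hostA 0 hostA-ib0 - CF
  129 hostC 0 hostC-ib0 - CF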

Hardware and firmware prerequisites

In Version 10.1 Fix Pack 2 and later fix packs, the DB2 pureScale Feature is supported on any Intel x86-compatible rack-mounted server that supports these InfiniBand QDR or Ethernet RoCE adapters:
  • Mellanox ConnectX-2 generation card supporting RDMA over converged Ethernet (RoCE) or InfiniBand
  • Mellanox ConnectX-3 generation card supporting RDMA over converged Ethernet (RoCE) or InfiniBand
IBM has validated these adapters, which are configurable options on IBM xSeries servers:
  • Mellanox ConnectX-2 Dual Port 10GbE Adapter for IBM System x® (81Y9990)
  • Mellanox ConnectX-2 Dual-port QSFP QDR IB Adapter for IBM System x (95Y3750)
  • Mellanox ConnectX-3 FDR VPI IB/E Adapter for IBM System x (00D9550)
  • Mellanox ConnectX-3 10 GbE Adapter for IBM System x (00D9690)
Note: Given the widely varying nature of such systems, IBM cannot practically guarantee to have tested on all possible systems or variations of systems. In the event of problem reports for which IBM deems reproduction necessary, IBM reserves the right to attempt problem reproduction on a system that may not match the system on which the problem was reported.
Additionally, these server configurations with any of the specified network adapter types are supported:
Table 3. Additional IBM validated server configurations
  Server: BladeCenter HS22 System x blades
    10 Gigabit Ethernet (10GE) adapter: Mellanox 2-port 10 Gb Ethernet Expansion Card with RoCE, for example part number 90Y3570
    Minimum 10GE network adapter firmware version: 2.9.1000
    InfiniBand (IB) Host Channel Adapter (HCA): 2-port 40 Gb InfiniBand Card (CFFh), for example part number 46M6001
    Minimum IB HCA firmware version: 2.9.1000
  Server: BladeCenter HS23 System x blades
    10 Gigabit Ethernet (10GE) adapter: Mellanox 2-port 10 Gb Ethernet Expansion Card (CFFh) with RoCE, part number 90Y3570
    Minimum 10GE network adapter firmware version: 2.9.1000
    InfiniBand (IB) Host Channel Adapter (HCA): 2-port 40 Gb InfiniBand Expansion Card (CFFh), part number 46M6001
    Minimum IB HCA firmware version: 2.9.1000
  Server: KVM Virtual Machine
    10 Gigabit Ethernet (10GE) adapter: Mellanox ConnectX-2 EN 10 Gb Ethernet Adapters with RoCE
    Minimum 10GE network adapter firmware version: 2.9.1200
    InfiniBand (IB) Host Channel Adapter (HCA): Not supported
    Minimum IB HCA firmware version: N/A
  Server: IBM Flex System x240, IBM Flex System x440
    10 Gigabit Ethernet (10GE) adapter: IBM Flex System EN4132 2-port 10Gb RoCE Adapter
    Minimum 10GE network adapter firmware version: 2.10.2324 + uEFI Fix 4.0.320
Note:
  1. Install the latest supported firmware for your System x server from http://www.ibm.com/support/us/en/. A command sketch for checking the installed adapter firmware level follows these notes.
  2. KVM-hosted environments for DB2 pureScale are supported on rack-mounted servers only.
  3. Geographically dispersed DB2 pureScale clusters (GDPC) support only IBM System x (x64) servers that support remote direct memory access (RDMA) over converged Ethernet (RoCE) network adapter types, including:
    • Mellanox ConnectX-2 generation card supporting RDMA over converged Ethernet (RoCE)
    • Mellanox ConnectX-3 generation card supporting RDMA over converged Ethernet (RoCE)
  4. Availability of specific hardware or firmware can vary over time and region. Check availability with your supplier.
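As a quick way to compare installed firmware levels against the minimum versions in Table 3, the following sketch can be run on each host. The interface name eth2 is a placeholder, ibstat assumes the InfiniBand diagnostic utilities are installed, and ethtool is part of most Linux distributions:

  # InfiniBand HCAs: the "Firmware version" field shows the installed level
  ibstat
  # 10GE / RoCE adapters: report the driver and firmware level for one interface
  ethtool -i eth2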

Storage hardware requirements

DB2 pureScale Feature supports all storage area network (SAN) and directly attached shared block storage. Configuring DB2 cluster services managed shared storage is recommended for better resiliency. For more information about DB2 cluster services support, see the “Shared storage considerations” topic. The following storage hardware requirements must be met for DB2 pureScale Feature support.
  • The following local free disk space is required on each host:
    • 3 GB to extract the installation
    • 6 GB for the installation path
    • 5 GB for the /tmp directory
    • 5 GB for the instance home directory
    • 5 GB for the /var directory
  • A minimum of three shared file systems are required, each on a separate physical disk. A fourth shared disk is recommended for use as the DB2 cluster services tiebreaker disk.

    The following minimum shared disk space must be free for each file system:
    • Instance shared files: 10 GB1
    • Data: dependent on your specific application needs
    • Logs: dependent on the expected number of transactions and the application's logging requirements
Note: If the host does not have enough memory, you can install the product but cannot start the database instance. The memory requirement varies based on the total number of databases and instances that exist on the same host. You can check the available local disk space and memory with the command sketch that follows this note.
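A quick check of local free space and memory can be done with standard commands; the paths shown are examples only (file systems that will hold the installation path, /tmp, /var, and an instance home directory) and should be replaced with your own:

  # Free space on the file systems that hold the installation path, /tmp, /var, and the instance home
  df -h /opt /tmp /var /home
  # Available memory on the host
  free -m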

Software prerequisites

The libraries and additional packages, listed for each specific Linux distribution in the following table, are required on the cluster caching facilities and members. The DB2 pureScale Feature does not support Linux virtual machines. Update hosts with the required software before installing the DB2 pureScale Feature or updating to the latest fix pack.
Table 4. Minimum Linux software requirements
  Linux distribution: Red Hat Enterprise Linux (RHEL) 5.6 1
    Kernel version level: 2.6.18-194.26.1.el5
    Required packages:
      libstdc++ (both 32-bit and 64-bit libraries)
      glibc++ (both 32-bit and 64-bit libraries)
      cpp
      gcc
      gcc-c++
      kernel-headers
      kernel-devel
      binutils
      OpenSSH
      sg3_utils
      ntp-4.2.2p1-15.el5
    OpenFabrics Enterprise Distribution (OFED) package: To install OFED on RHEL 5.6 and higher, run a group installation of "OpenFabrics Enterprise Distribution" (example commands follow the table notes).

  Linux distribution: Red Hat Enterprise Linux (RHEL) 6.1 4
    Kernel version level: 2.6.32-131.0.15.el6
    Required packages:
      For InfiniBand network type (both 32-bit and 64-bit libraries unless specified):
        libibcm
        dapl (64-bit libraries only)
        ibsim (64-bit libraries only)
        ibutils (64-bit libraries only)
        libibverbs
        librdmacm
        libcxgb3
        libibmad
        libibumad
        libipathverbs (64-bit libraries only)
        libmlx4
        libmthca
        libnes (64-bit libraries only)
        rdma (no architecture)
      For 10GE network type (both 32-bit and 64-bit libraries unless specified):
        libibcm
        dapl (64-bit libraries only)
        ibsim (64-bit libraries only)
        ibutils (64-bit libraries only)
        libibverbs-rocee
        librdmacm
        libcxgb3
        libibmad
        libibumad
        libipathverbs (64-bit libraries only)
        libmlx4-rocee
        libmthca
        libnes (64-bit libraries only)
        rdma (no architecture)
      For either network type:
        ntp-4.2.4p8-2.el6.x86_64
        ntpdate-4.2.4p8-2.el6.x86_64
        libstdc++-4.4.5-6.el6.x86_64
        libstdc++-4.4.5-6.el6.i686
        glibc-2.12-1.25.el6.x86_64
        glibc-2.12-1.25.el6.i686
        gcc-c++-4.4.5-6.el6.x86_64
        gcc-4.4.5-6.el6.x86_64
        kernel-2.6.32-131.0.15.el6.x86_64
        kernel-devel-2.6.32-131.0.15.el6.x86_64
        kernel-headers-2.6.32-131.0.15.el6.x86_64
        kernel-firmware-2.6.32-131.0.15.el6.noarch
        sg3_utils-1.28-3.el6.x86_64
        sg3_utils-libs-1.28-3.el6.x86_64
        binutils-2.20.51.0.2-5.20.el6.x86_64
        binutils-devel-2.20.51.0.2-5.20.el6.x86_64
        openssh-5.3p1-52.el6.x86_64
        cpp-4.4.5-6.el6.x86_64
        ksh-20100621-16.el6.x86_64
    OpenFabrics Enterprise Distribution (OFED) package: For InfiniBand network type, run a group installation of the "InfiniBand Support" package. For 10GE network type, subscribe to the Red Hat High Performance Network, then run a group installation of the "InfiniBand Support" package; this automatically installs the "RHEL server High Performance Networking" package, which is mandatory for RDMA over Ethernet support on a 10GE network. (Example commands follow the table notes.)

  Linux distribution: SUSE Linux Enterprise Server (SLES) 10 2 Service Pack (SP) 3
    Kernel version level: 2.6.16.60-0.69.1-smp 3
    Required packages:
      libstdc++ (both 32-bit and 64-bit libraries)
      glibc++ (both 32-bit and 64-bit libraries)
      cpp
      gcc
      gcc-c++
      kernel-source
      binutils
      OpenSSH
      scsi*.rpm
      ntp-4.2.4p8-1.3.28
    OpenFabrics Enterprise Distribution (OFED) package: For SLES 10 SP3 3, to acquire and install the required OFED packages, see technote #1455818. For SLES 10 SP4 and later service packs, you must install the OFED packages from the maintenance repository, together with the additional packages that OFED depends on. For more information about installing OFED on SLES 10, see Configuring the network settings of hosts for a DB2 pureScale environment on an InfiniBand network (Linux).

  Linux distribution: SUSE Linux Enterprise Server (SLES) 11 Service Pack 1
    Kernel version level: 2.6.32.36-0.5 3
    Required packages:
      libstdc++ (both 32-bit and 64-bit libraries)
      glibc++ (both 32-bit and 64-bit libraries)
      cpp
      gcc
      gcc-c++
      kernel-default
      kernel-default-devel
      kernel-default-base
      kernel-source
      kernel-syms
      binutils
      OpenSSH
      sg3_utils
      ntp-4.2.4p8-1.3.28
    OpenFabrics Enterprise Distribution (OFED) package: For information about installing the OFED package and the packages that it depends on for SLES 11, see Configuring the network settings of hosts for a DB2 pureScale environment on an InfiniBand network (Linux).
  1. On Red Hat Linux:
    • For single communication adapter ports at CFs on InfiniBand network, the minimum support level is RHEL 5.6.
    • For multiple communication adapter ports on InfiniBand network and single or multiple communication adapter port at CFs on 10GE network, the minimum support level is RHEL 6.1.

      i686 (32-bit) packages might not be installed by default when installing an x86_64 server. Make sure that all the 32-bit dependencies are explicitly installed. For example:

      libstdc++-4.4.5-6.el6.i686, pam-1.1.1-8.el6.i686, pam_krb5-2.3.11-6.el6.i686, 
      pam-devel-1.1.1-8.el6.i686, pam_pkcs11-0.6.2-11.1.el6.i686, 
      pam_ldap-185-8.el6.i686
      Alternatively, run the yum command after creating a repository source from the local DVD or after registering with RHN:
      yum install *.i686
  2. On SLES 10 Service Pack 4, the minimum supported kernel version level is the default kernel (2.6.16.60-0.85.1-smp).
  3. On SLES 11 SP1, the default kernel (version 2.6.32.12-0.7-default) must be upgraded to version 2.6.32.36-0.5, which requires that the following kernel packages be installed from the SLES maintenance software repository:
    kernel-default-2.6.32.36-0.5.2
    kernel-default-devel-2.6.32.36-0.5.2
    kernel-default-base-2.6.32.36-0.5.2
    kernel-source-2.6.32.36-0.5.2
    kernel-syms-2.6.32.36-0.5.2
  4. In some installations, the Intel TCO WatchDog Timer Driver modules are loaded by default; if so, they should be blacklisted so that they do not start automatically or conflict with RSCT. To check for and blacklist the modules, complete the following steps:
    1. Verify whether the modules are loaded:
      lsmod | grep -i iTCO_wdt; lsmod | grep -i iTCO_vendor_support
    2. Edit the configuration files:
      • On RHEL 5.x and RHEL 6.1, edit file /etc/modprobe.d/blacklist.conf:
        # RSCT hatsd
        blacklist iTCO_wdt
        blacklist iTCO_vendor_support
      • On SLES, edit file /etc/modprobe.d/blacklist and add:
        blacklist iTCO_wdt 
        blacklist iTCO_vendor_support 
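As referenced in Table 4, the group installations are typically run with yum. This is a sketch only; the group names are the ones named in the table, and the 10GE case assumes the host is already subscribed to the Red Hat High Performance Network channel:

  # RHEL 5.6 and higher, InfiniBand network:
  yum groupinstall "OpenFabrics Enterprise Distribution"
  # RHEL 6.1, InfiniBand or 10GE network (on 10GE this also pulls in "RHEL server High Performance Networking"):
  yum groupinstall "InfiniBand Support"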
Note: The minimum supported host and guest operating system level for KVM virtualization is RHEL 6.2. Fibre Channel adapters and 10GE adapters must be made available to the virtual machines through PCI passthrough. For instructions on setting up PCI passthrough of devices for guest VMs, see the Red Hat website: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-PCI_Device_Config.html
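As a starting point, the following sketch only identifies the adapter on the host and confirms that libvirt can see it; the vendor name in the grep filter is an example, and the actual device assignment steps are described in the Red Hat documentation linked above:

  # Find the PCI address of the 10GE or Fibre Channel adapter to pass through
  lspci | grep -i mellanox
  # Confirm that the PCI device is visible to libvirt
  virsh nodedev-list | grep pci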
Note:
  • Version 10.1 Fix Pack 4 (and higher fix packs):
    • GPFS: If you have IBM General Parallel File System (GPFS) installed, it must be GPFS 3.5.0.17. If you need to upgrade to this GPFS level or install the GPFS fixes, the required files are found on the DB2 pureScale installation image in the /db2/aix/gpfs/efix directory.
    • Tivoli® SA MP: If you have IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) installed, it must be Version 3.2.2.5. The special fixes are installed by running the installSAM command from the db2/linuxamd64/tsamp directory of the DB2 pureScale installation image (see the command sketch after this list).
  • Version 10.1 Fix Pack 3:
    • GPFS: If you have GPFS installed, it must be GPFS 3.5.0.7 (TL2) with efix 10. If you need to upgrade to this GPFS level or install the GPFS fixes, the required files are found on the DB2 pureScale installation image in the /db2/aix/gpfs/efix directory.
    • Tivoli SA MP: If you have Tivoli SA MP installed, it must be Version 3.2.2.5. The special fixes are installed by running the installSAM command from the db2/linuxamd64/tsamp directory of the DB2 pureScale installation image.
  • Version 10.1 Fix Pack 2 (and earlier fix packs):
    • GPFS: If you have GPFS installed, it must be GPFS 3.5 PTF4 + efix 13. If you need to upgrade GPFS or install the GPFS fixes, the required files are found on the DB2 pureScale installation image in the db2/linuxamd64/gpfs directory.
    • Tivoli SA MP: If you have Tivoli SA MP installed, it must be Tivoli SA MP 3.2.2.1 + efix 2 + RSCT Joplin PTF7. This version can be installed by running the installSAM command from the db2/linuxamd64/tsamp directory of the DB2 pureScale installation image.
  • Version 10.1 Fix Pack 1:
    • GPFS: If you have GPFS installed, it must be GPFS 3.4.0.14 + efix 4. If you need to upgrade GPFS or install the GPFS fixes, the required files are found on the DB2 pureScale installation image in the db2/linuxamd64/gpfs directory.
    • Tivoli SA MP: If you have Tivoli SA MP installed, it must be Tivoli SA MP 3.2.2.1 with RSCT Joplin PTF 5. This version can be installed by running the installSAM command from the db2/linuxamd64/tsamp directory of the DB2 pureScale installation image.
  • Version 10.1 General Availability (GA):
    • GPFS: If you have GPFS installed, it must be GPFS 3.4.0.11. If you need to upgrade GPFS or install the GPFS fixes, the required files are found on the DB2 pureScale installation image in the db2/linuxamd64/gpfs directory.
    • Tivoli SA MP: If you have Tivoli SA MP installed, it must be Tivoli SA MP 3.2.2.1 (integrated) with RSCT 3.1.2.1 (Joplin PTF 2). This version can be installed by running the installSAM command from the db2/linuxamd64/tsamp directory of the DB2 pureScale installation image.
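To compare the installed GPFS and Tivoli SA MP levels against the versions listed above, and to apply the Tivoli SA MP fixes from the installation image, a sketch such as the following can be used; the image path is a placeholder for wherever you extracted the DB2 pureScale installation image:

  # Check the installed GPFS and Tivoli SA MP / RSCT package levels
  rpm -qa | grep -i gpfs
  rpm -qa | grep -iE "^sam|rsct"
  # Apply the Tivoli SA MP fixes from the DB2 pureScale installation image (path is a placeholder)
  cd /path/to/image/db2/linuxamd64/tsamp
  ./installSAM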
1 For better I/O performance, create a separate GPFS™ file system to hold your database and specify this shared disk on the create database command.