Pre-installation Inspection
System Disk Partitioning Requirements
System installation and operation consume hard disk space; the operating system disk must have at least 300GB of free space before installation. Disk partitioning should comply with the following requirements:
- There must be at least a swap partition and a system partition mounted on “/”.
- 200GB~300GB is recommended for the system partition, which should be mounted to the / directory.
- A data disk with no less than 200GB of free space is required for TxSQL; set the TxSQL data directory to a directory on that data disk (e.g., /mnt/disk1/txsqldata/).
- It is recommended to mount each physical disk to a separate mount point under /mnt/disknn (where nn is a 1–2 digit number), using the EXT4 file system. Each of these directories will be automatically configured as an HDFS DataNode data directory by the Manager Node (a formatting example is shown after this list).
- The HDFS DataNode data directory must not be placed in the system partition, to avoid running out of space and competing for IO. It is also recommended not to put a data partition and the system partition on the same disk; do not create a data partition on the disk holding the system partition unless the overall HDFS capacity plan is short of space.
- To ensure the stability of Docker operation, assign a dedicated disk partition to Docker; the recommended partition size is 100GB~300GB. To ensure high availability, it is recommended to reserve an empty partition of the same size on a non-Docker disk as a spare, so that the service can be quickly recovered if the Docker disk fails.
- To ensure smooth operation, try to place the Docker partition on a data disk. If there are enough data disks and spare storage space, it is recommended to dedicate a separate data disk to Docker; otherwise, use one partition of a data disk as the Docker partition.
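For reference, the following is a minimal sketch of preparing one data disk for HDFS; the device name /dev/sdb1 and the mount point /mnt/disk1 are only examples and should be replaced with the actual values in your environment:
# Format the data partition with EXT4 (this erases existing data on the partition)
mkfs.ext4 /dev/sdb1
# Create the mount point and mount the partition
mkdir -p /mnt/disk1
mount /dev/sdb1 /mnt/disk1
# Look up the partition UUID and add an entry to /etc/fstab so the mount persists across reboots, e.g.:
blkid /dev/sdb1
# UUID=<UUID> /mnt/disk1 ext4 defaults 0 0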
Kindly note
If users are using a RedHat or CentOS system, the Docker partition must use the XFS file system. For other disks or partitions, the EXT4 file system is recommended. For more details, please refer to the official Docker documentation: https://docs.docker.com/storage/storagedriver/overlayfs-driver.
The Docker partition needs to be formatted prior to installation:
RedHat/CentOS
On RedHat/CentOS, the Docker partition must be in XFS format, which can be set up as follows:
- Create a directory /var/lib/docker
mkdir -p /var/lib/docker
- Format the partition with XFS
mkfs.xfs -f -n ftype=1 /dev/<p_name>
- Mount the partition
mount /dev/<p_name> /var/lib/docker
- Check whether formatting was successful
xfs_info /dev/<p_name> | grep ftype=1
If the command returns a line containing “ftype=1”, the formatting was successful.
- Configure /etc/fstab
Check the UUID by executing the following command:
blkid /dev/<p_name>
Add the found UUID value to /etc/fstab:
UUID=<UUID> /var/lib/docker xfs defaults,uquota,pquota 0 0
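As an optional check (not part of the steps above), the new /etc/fstab entry can be validated before rebooting; the commands below assume the partition is intended to be mounted at /var/lib/docker:
# Re-read /etc/fstab and mount any entries not yet mounted; errors indicate a bad entry
mount -a
# Confirm the mount point and the file system type
df -hT /var/lib/docker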
SUSE
On SUSE, the Docker partition must be in EXT4 format, which can be set up as follows:
- Create a directory /var/lib/docker
mkdir -p /var/lib/docker
- Format the partition with EXT4
mkfs.ext4 /dev/<p_name>
- Mount the partition
mount /dev/<p_name> /var/lib/docker
- Configure /etc/fstab
Check the UUID by executing the following command:
blkid /dev/<p_name>
Add the found UUID value to /etc/fstab:
UUID=<UUID> /var/lib/docker ext4 defaults 0 0
A machine has two 600GB hard disks; their partitions and mount directories are listed below, with /dev/sda1 used as the system partition:
FILE SYSTEM | SIZE | MOUNT DIRECTORY | TYPE OF FILE SYSTEM |
---|---|---|---|
/dev/sda1 | 368GB | / | EXT4 |
/dev/sda2 | 32GB | swap | |
/dev/sda3 | 100GB | /var/log | EXT4 |
/dev/sda4 | 100GB | (Empty, Docker spare partition) | XFS |
/dev/sdb1 | 250GB | /mnt/disk1 | EXT4 |
/dev/sdb2 | 250GB | /mnt/disk2 | EXT4 |
/dev/sdb3 | 100GB | /var/lib/docker | XFS |
Table 3-1 Partitions and Mount Directories 1
Kindly note
This plan applies when disk resources are limited. If disk resources are abundant, it is recommended to install the operating system on a separate disk to prevent read/write competition between the data partitions and the system partition, as in the example below.
A machine has five 600GB hard disks; their partitions and mount directories are listed below, with one disk used as the system partition:
FILE SYSTEM | SIZE | MOUNT DIRECTORY | TYPE OF FILE SYSTEM |
---|---|---|---|
/dev/sda1 | 400GB | / | EXT4 |
/dev/sda2 | 32GB | swap | |
/dev/sda3 | 168GB | /var/log | EXT4 |
/dev/sdb1 | 600GB | /mnt/disk1 | EXT4 |
/dev/sdc1 | 600GB | /mnt/disk2 | EXT4 |
/dev/sdd1 | 400GB | /mnt/disk3 | EXT4 |
/dev/sdd2 | 200GB | /var/lib/docker | XFS |
/dev/sde1 | 400GB | /mnt/disk4 | EXT4 |
/dev/sde2 | 200GB | (Empty, Docker spare partition) | XFS |
Table 3-2 Partitions and Mounted Directories 2
Disk Directory Planning Requirements
Since NameNode, JournalNode, Ganglia, TOS Master's etcd, TxSQL, Guardian ApacheDS, Zookeeper and other services or roles have high disk IO requirements, the directories of these services or roles should not all be on the same disk, nor all on the system disk. If possible, they should not be installed under the system root directory.
The disk configurations or paths used by the above services or roles are listed below. Plan your disk directories according to these paths, and check during installation that the values of the relevant configuration items point to the correct paths.
SERVICE | ROLE | CONFIGURATION ITEM/PATH |
---|---|---|
TOS | TOS Master (etcd) | /var/etcd/data/ |
Ganglia | Gmetad | /var/lib/ganglia/rrds/ |
Zookeeper | Zookeeper Server | /var/<service_id>/, such as /var/Zookeeper1/ |
HDFS | NameNode | dfs.namenode.name.dir |
| DataNode | dfs.datanode.data.dir |
| JournalNode | Configuration item dfs.journalnode.edits.dir in hdfs-site.xml, /hadoop/journal by default |
TxSQL | TxSQL Server | data.dir and log.dir |
Guardian | ApacheDS | guardian.apacheds.data.dir |
| TxSQL Server | data.dir |
Manager (when HA is on) | TxSQL Server | /var/lib/transwarp-manager/master/data/txsql/ |
Table 3-3 Disk Configurations or Paths Used by Services
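As an illustration only, a node with data disks mounted at /mnt/disk1 and /mnt/disk2 could spread the IO-sensitive directories above across different disks; the paths below are placeholders and must match the corresponding configuration items set during installation:
# Example layout, assuming /mnt/disk1 and /mnt/disk2 are separate physical disks
mkdir -p /mnt/disk1/hdfs/namenode      # dfs.namenode.name.dir
mkdir -p /mnt/disk1/hdfs/journal       # dfs.journalnode.edits.dir
mkdir -p /mnt/disk2/txsql/data         # TxSQL data.dir
mkdir -p /mnt/disk2/txsql/log          # TxSQL log.dir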
Memory Capacity Requirements
Each node requires at least 64GB of RAM. Depending on the Data Hub services installed on the node, it may require more than 64GB. The following table lists the additional memory required when different services are running on a node.
SERVICE | REQUIREMENT |
---|---|
Management Server | 8GB |
HDFS NameNode | 32GB |
HDFS Standby NameNode | 32GB |
HDFS DataNode | 4GB |
Inceptor Server | 8GB |
Inceptor executor | 32GB |
YARN ResourceManager | 4GB |
YARN NodeManager | 4GB |
Number of computing resources assigned by NodeManager to Container | User specified |
Zookeeper | 4GB |
HBase Master | 4GB |
Table 3-4 Memory Capacity Requirements
The steps to calculate the memory required by a specific node are as follows:
- Confirm all TDH services that will be running on the node.
- Confirm the memory capacity required for each service.
- Add all memory requirements.
- If the total is less than 64GB, the minimum memory requirement is 64GB; if the total is greater than 64GB, the minimum memory requirement equals the total.
For example, if a node hosts the following services:
- HDFS DataNode
- YARN ResourceManager
- Hyperbase RegionServer
- YARN NodeManager assigns 32GB to Inceptor executor
Then the node should be equipped with 4GB + 4GB + 32GB + 32GB = 72GB of RAM (the actual memory used in a production environment should be determined based on the specific application scenario).
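The floor-at-64GB rule from the steps above can also be expressed as a short shell sketch; the per-service values are the ones used in this example:
# Sum the example requirements, then apply the 64GB minimum
total=$((4 + 4 + 32 + 32))
minimum=$(( total > 64 ? total : 64 ))
echo "Minimum RAM: ${minimum}GB"   # prints: Minimum RAM: 72GB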
Network Settings
The installation of Data Hub requires at least a Gigabit Ethernet network. When a machine has multiple network adapters, the user can configure NIC bonding before installing Data Hub, as in the sketch below.
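As an illustration only (the connection and interface names bond0, eth0 and eth1 are placeholders), bonding in working mode 6 (balance-alb), as recommended later in this document, can be set up on a NetworkManager-managed RedHat/CentOS system as follows:
# Create a bond interface in balance-alb (mode 6) and attach two physical NICs
nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=balance-alb,miimon=100"
nmcli connection add type bond-slave con-name bond0-port1 ifname eth0 master bond0
nmcli connection add type bond-slave con-name bond0-port2 ifname eth1 master bond0
nmcli connection up bond0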
Cluster and Network Topology Requirements
- Decide the number of nodes in the cluster.
- Decide the number of cabinets in the cluster and the name of each cabinet.
- Decide the number of nodes in each cabinet.
- Decide the subnet(s) of each node.
- Decide the host name and IP address of each node.
- Decide which machine will be the Manager Node.
- Decide which machine(s) will be NameNode(s).
- Decide which machines are clients, which machines run the TDH service, or both.
- Once the hostname is assigned to NameNode, the hostname cannot be changed.
- Ensure users have the root password of each node to be added to the TDH cluster.
- The Manager Node must belong to the same subnet as other nodes in the cluster.
- Decide which components to use in the cluster.
- Decide network bandwidth and switch backplane bandwidth. Determine the switch model.
- Decide how to connect to the switch. It is necessary to know which Ethernet ports need to be used and whether they need to be bound.
- Decide the IP address and host name of each machine. Decide how IP addresses are assigned (DHCP or static assignment) and how host names are resolved (DNS or /etc/hosts). If /etc/hosts is adopted, the Manager Node will be responsible for updating /etc/hosts on each machine in the cluster (an example is shown below).
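If /etc/hosts is used for host name resolution, the entries typically look like the following; the IP addresses and host names below are placeholders:
# Example /etc/hosts entries maintained on every machine in the cluster
172.16.1.101   tdh-manager
172.16.1.102   tdh-node01
172.16.1.103   tdh-node02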
NTP Service Settings
Decide how to synchronize time. The Manager Node is responsible for time synchronization across all servers in the cluster, but users need to decide whether to use an external NTP service. Without an external NTP service, all servers in the cluster share the same time, but that time may not be the standard time, which can lead to errors when the cluster communicates with external systems.
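If an external NTP service is used, it is typically referenced from the NTP client configuration; a minimal sketch for ntpd is shown below (the server address is a placeholder, and some systems use chrony instead of ntpd):
# /etc/ntp.conf - point the node to the external time source
server ntp.example.com iburst
# Restart the service and check peer status
systemctl restart ntpd
ntpq -p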
Security Settings
Disable SELinux and iptables (Manager will automatically disable SELinux and iptables).
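Although the Manager disables these automatically, they can also be checked or disabled manually; a sketch for RedHat/CentOS 7 is shown below:
# Switch SELinux to permissive mode for the current boot and disable it permanently
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# Stop and disable the firewall service
systemctl stop firewalld && systemctl disable firewalld
# Verify the current SELinux mode
getenforce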
Recommended System Settings
The following recommended configurations help ensure performance optimization and manageability of the TDH cluster.
- Plan host name resolution for each node. Note that a hostname may only contain English letters, numbers, and "-"; otherwise, problems will occur during subsequent installation.
- Add a group of nodes to the cluster at the same time.
- In order to reduce network latency, all nodes in the cluster must belong to the same subnet.
- Each node should be equipped with a 10GE NIC, which is used for communication between nodes and for tasks in the cluster that need network connections.
- If a node does not use a 10GE NIC, users can use NIC bonding to combine multiple NICs and improve network throughput. The bonded NIC must use working mode 6 (balance-alb).
- Each node is recommended to have a system partition of at least 300GB.
- Each node should have at least 6TB of available disk space for HDFS.
- There should be at least 5 DataNodes in the cluster.
- If possible, avoid multiple logical partitions on physical disks. Except for system partition, each physical disk should have only one partition and the partition should contain the entire physical disk.
- Use physical machines instead of virtual machines; virtual machines may introduce significant HDFS I/O latency.
- The subnet(s) where the nodes reside should not contain other machines.
- Physical and virtual machines should not coexist in a cluster.
- To ensure that no machine in the cluster becomes a performance or I/O bottleneck, all machines must have similar hardware and software configurations, including RAM, CPU, and disk space.
- Each node should have at least 64GB of memory.
- On the Manager Node, a large amount of data can be written to /var/lib/ganglia by the cluster's system monitoring. It is recommended to assign 200GB of disk space to the partition where /var/lib/ganglia is located (since Ganglia is disk-intensive, adjust this value according to the number of nodes).
- Since services may generate massive logs, it is recommended to put /var/log in a different logical partition to prevent logs from filling up the root partition.
- To speed up reads from the local file system, users can mount the disks with the noatime option so that file access times are not written back (see the example after this list).
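A minimal example of the noatime option in /etc/fstab is shown below; the UUID and mount point are placeholders:
# Mount a data disk without recording file access times
UUID=<UUID> /mnt/disk1 ext4 defaults,noatime 0 0
# An already mounted file system can be switched without a reboot
mount -o remount,noatime /mnt/disk1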