Today I finally implemented NIC bonding (binding both NICs so that they work as a single device). My idea is to improve performance by pumping more data out of both NICs without using any other method.
Linux allows binding multiple network interfaces into a single channel/NIC using a special kernel module called bonding. As the kernel documentation puts it: "The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed."
Note:-What is bonding?
Bonding allows you to aggregate multiple ports into a single group, effectively combining the bandwidth into a single connection. Bonding also allows you to create multi-gigabit pipes to transport traffic through the highest-traffic areas of your network. For example, you can aggregate three 1-megabit ports into a single 3-megabit trunk port. That is equivalent to having one interface with 3-megabit speed.
Setting up bonding is easy with RHEL v5.0 and above.
Step #1:
Create a bond0 configuration file
Red Hat Linux stores network configuration in /etc/sysconfig/network-scripts/ directory. First, you need to create bond0 config file:
# vi /etc/sysconfig/network-scripts/ifcfg-bond0
Append the following lines to it:
DEVICE=bond0
IPADDR=192.168.1.59
NETWORK=192.168.1.0
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
Note:-Replace the above IP address with your actual IP address. Save the file and exit to the shell prompt.
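A quick sanity check (optional) is to print the file back and confirm it matches what you intended:
# cat /etc/sysconfig/network-scripts/ifcfg-bond0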
Step #2:
Modify eth0 and eth1 config files:
Open both configuration files using the vi text editor. Make sure the file reads as follows for the eth0 interface:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
Modify/append the directives as follows:
DEVICE=eth0
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Open the eth1 configuration file using the vi text editor:
# vi /etc/sysconfig/network-scripts/ifcfg-eth1
Make sure the file reads as follows for the eth1 interface:
DEVICE=eth1
USERCTL=no
ONBOOT=yes
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
Save the file and exit to the shell prompt.
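To confirm both interfaces are set up as slaves of bond0, a quick grep over the two files helps (just a sanity check):
# grep -E 'MASTER|SLAVE' /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1
Both files should show MASTER=bond0 and SLAVE=yes.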
Step #3:
Load bond driver/module
Make sure the bonding module is loaded when the channel-bonding interface (bond0) is brought up. You need to modify the kernel module configuration file:
# vi /etc/modprobe.conf
Append the following two lines:
alias bond0 bonding
options bond0 mode=balance-alb miimon=100
Note:-Save the file and exit to the shell prompt. You can learn more about all bonding options at the end of this document.
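If you are curious which parameters the bonding module accepts on your kernel, modinfo will list them (the exact list varies by kernel version):
# modinfo bonding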
Step #4:
Test configuration
First, load the bonding module:
# modprobe bonding
Restart the networking service in order to bring up the bond0 interface:
# service network restart
Verify everything is working:
# less /proc/net/bonding/bond0
Output:
Bonding Mode: adaptive load balancing
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:XX:XX:X1
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:XX:XX:X2
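For a quick scripted check, counting the MII Status lines should show three "up" entries when both slaves are healthy (one for the bond itself plus one per slave):
# grep -c 'MII Status: up' /proc/net/bonding/bond0
3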
List all interfaces:
# ifconfig
Output:
bond0 Link encap:Ethernet HWaddr 00:0C:29:XX:XX:XX
inet addr:192.168.1.59 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:2804 errors:0 dropped:0 overruns:0 frame:0
TX packets:1879 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:250825 (244.9 KiB) TX bytes:244683 (238.9 KiB)
eth0 Link encap:Ethernet HWaddr 00:0C:29:XX:XX:XX
inet addr:192.168.1.59 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:2809 errors:0 dropped:0 overruns:0 frame:0
TX packets:1390 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:251161 (245.2 KiB) TX bytes:180289 (176.0 KiB)
Interrupt:11 Base address:0x1400
eth1 Link encap:Ethernet HWaddr 00:0C:29:XX:XX:XX
inet addr:192.168.1.59 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20c:29ff:fec6:be59/64 Scope:Link
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:502 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:258 (258.0 b) TX bytes:66516 (64.9 KiB)
Interrupt:10 Base address:0x1480
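You can also exercise the fault tolerance by taking one slave down and confirming the bond stays up (run this from the console, not over the bonded link itself):
# ifconfig eth0 down
# grep 'MII Status' /proc/net/bonding/bond0
# ifconfig eth0 up
The slave's MII Status should flip to down while bond0 keeps carrying traffic on eth1.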
Note:-If the administration tools of your distribution do not support master/slave notation in the configuration of network interfaces, you will need to configure the bonding device manually with the following commands:
# /sbin/ifconfig bond0 192.168.1.59 up
# /sbin/ifenslave bond0 eth0
# /sbin/ifenslave bond0 eth1
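To undo such a manual setup, ifenslave can detach a slave again with its -d option (assuming your ifenslave build supports it), after which the bond can be brought down like any other interface:
# /sbin/ifenslave -d bond0 eth1
# /sbin/ifconfig bond0 down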
Que:-What are the other MODE options in the modprobe.conf file?
Ans:-You can set up your bond interface according to your needs. By changing one parameter (mode=X) you can have the following bonding types (an example configuration follows the list):
mode=0 (balance-rr)
Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
mode=1 (active-backup)
Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance. The primary option affects the behavior of this mode.
mode=2 (balance-xor)
XOR policy: Transmit based on [(source MAC address XOR'd with destination MAC address) modulo slave count]. This selects the same slave for each destination MAC address. This mode provides load balancing and fault tolerance.
mode=3 (broadcast)
Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.
mode=4 (802.3ad)
IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification.
mode=5 (balance-tlb)
Adaptive transmit load balancing: channel bonding that does not require any special switch support. The outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
mode=6 (balance-alb)
Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPv4 traffic, and does not require any special switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the slaves in the bond such that different peers use different hardware addresses for the server.
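For example, to switch this setup to a simple hot-standby configuration with eth0 as the preferred link, the options line from Step #3 would change to something like this (using the primary option mentioned under mode=1):
options bond0 mode=active-backup primary=eth0 miimon=100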