Network Management

NIC configuration files

Ubuntu

# /etc/netplan/01-custom-netcfg.yaml
network:
  version: 2
  ethernets:
    ens33:
      dhcp4: false
      addresses:
        - 172.16.0.223/18 
      routes:
        - to: default
          via: 172.16.0.1
      nameservers:
        addresses: [223.5.5.5, 223.6.6.6, 114.114.114.114] 

Notes:

  • On Ubuntu 22.04.5, before using a custom NIC configuration file you must add network: {config: disabled} to /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg and delete /etc/netplan/50-cloud-init.yaml; otherwise the custom configuration will not take effect.
echo 'network: {config: disabled}' > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg

rm -f /etc/netplan/50-cloud-init.yaml

cat > /etc/netplan/01-custom-netcfg.yaml <<EOF
network:
  version: 2
  ethernets:
    ens33: # fill in the actual interface name
      dhcp4: false
      addresses:
        - 172.16.0.122/18 
      routes:
        - to: default
          via: 172.16.0.1
      nameservers:
        addresses: [223.5.5.5, 223.6.6.6, 114.114.114.114] 
EOF

chmod 600 /etc/netplan/01-custom-netcfg.yaml

netplan apply

arp

ip n is part of the Linux ip utility and displays and manages network neighbour entries (the ARP cache for IPv4, the NDP cache for IPv6). Common fields in the output of ip n are:

  1. IP address: the neighbour's IP address.
  2. lladdr (MAC address): the neighbour's link-layer address, i.e. its MAC address.
  3. State: the state of the neighbour entry. Common states are:
    • REACHABLE: the neighbour is reachable and there has been recent communication.
    • STALE: the neighbour is reachable, but there has been no recent communication.
    • DELAY: the neighbour is reachable, but received packets have not yet been confirmed.
    • PROBE: the system is currently verifying the neighbour's reachability.
    • FAILED: the neighbour is unreachable.
  4. Device (dev): the network device through which the neighbour is reached.

Example output:

192.168.1.1 dev eth0 lladdr 00:1a:2b:3c:4d:5e REACHABLE

This line means that the neighbour with IP address 192.168.1.1 is reached through interface eth0, its MAC address is 00:1a:2b:3c:4d:5e, and its current state is REACHABLE.
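Besides displaying the table, ip neigh can add, delete, and flush entries. A few common operations (the addresses below are illustrative):

# Show the neighbour table for a single interface
ip neigh show dev eth0

# Add a permanent (static) entry
ip neigh add 192.168.1.50 lladdr 00:11:22:33:44:55 dev eth0 nud permanent

# Delete that entry again
ip neigh del 192.168.1.50 dev eth0

# Flush all dynamically learned entries on eth0
ip neigh flush dev eth0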

Routing

ip r displays the Linux routing table; its full form is ip route. The routing table is what the kernel consults to decide which interface or gateway a packet should be sent through. Below is sample ip r output together with the meaning of each field:

# Default route: all traffic that does not match a more specific route is sent to gateway 192.168.1.1 out of interface eth0.
default via 192.168.1.1 dev eth0 proto dhcp src 192.168.1.100 metric 100
  1. default: the default route, used when no more specific route entry matches.

  2. via 192.168.1.1: the gateway, i.e. the next-hop address the packet is sent to.

  3. dev eth0: the outgoing interface; in this example packets leave through eth0.

  4. proto dhcp: the origin of the route entry.

    • proto dhcp means the route was configured via DHCP;

    • proto kernel means the route was generated automatically by the kernel, usually from the interface configuration;

    • proto static means a manually configured static route.

  5. src 192.168.1.100: the preferred source address; packets sent over this route use 192.168.1.100 as their source address.

  6. metric 100: the route metric, used to rank routes; the lower the value, the higher the priority.

#  Traffic for the 192.168.1.0/24 network is sent directly out of eth0 without passing through a gateway.
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.100
  1. 192.168.1.0/24: the destination network and prefix length.
  2. dev eth0: the outgoing interface.
  3. proto kernel: the route entry was generated automatically by the kernel.
  4. scope link: the route applies to the directly attached link.
  5. src 192.168.1.100: packets sent over this route use 192.168.1.100 as their source address.
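Routes can also be changed at runtime with ip route; a few common operations (addresses are illustrative, and changes made this way do not survive a reboot):

# Add a static route to 10.20.0.0/16 via gateway 192.168.1.254
ip route add 10.20.0.0/16 via 192.168.1.254 dev eth0

# Replace the default route
ip route replace default via 192.168.1.1 dev eth0

# Ask the kernel which route it would pick for a destination
ip route get 10.20.1.1

# Delete the static route again
ip route del 10.20.0.0/16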

netns

Example 1

The following steps create a network namespace (netns), create a virtual Ethernet pair (veth pair), move one end of the pair into the namespace, and assign it an IP address:

# Create a network namespace
ip netns add mynetns

# Create a virtual Ethernet pair (veth pair)
ip link add veth0 type veth peer name veth1

# Move one end (here veth1) into the namespace
ip link set veth1 netns mynetns

# Configure veth0 in the default namespace and bring it up
ip addr add 10.0.0.3/24 dev veth0
ip link set veth0 up

# Inside the namespace, assign an IP address to veth1, then bring veth1 and lo up
ip netns exec mynetns ip addr add 10.0.0.126/24 dev veth1
ip netns exec mynetns ip link set veth1 up
ip netns exec mynetns ip link set lo up

After running these commands, the mynetns namespace contains an interface veth1 with IP address 10.0.0.126/24.
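To confirm that the two ends can reach each other, and to clean up afterwards (deleting the namespace also destroys the veth pair):

# Ping from the default namespace into mynetns
ping -c 3 10.0.0.126

# Ping from mynetns back to the host side of the veth pair
ip netns exec mynetns ping -c 3 10.0.0.3

# Clean up
ip netns del mynetns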

Network configuration:

CentOS Net Config

  • Official documentation: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/index

Configuration file reference

  • Reference: /usr/share/doc/initscripts/sysconfig.txt

  • Path: /etc/sysconfig/network-scripts/ifcfg-<interface name>

  • Field reference (entries marked * are required; a complete example follows the table)

Setting Description
TYPE Interface type; common values are Ethernet and Bridge
*NAME Name of the connection this profile applies to
*DEVICE Device name (which NIC the profile targets)
HWADDR MAC address of the device (same role as DEVICE; use one or the other)
UUID Unique identifier of the device
*BOOTPROTO Address configuration protocol used when the device is activated; common values are dhcp, static, none, bootp
*IPADDR IP address
NETMASK Subnet mask, e.g. 255.255.255.0
*PREFIX Number of network-ID bits, e.g. 24 (equivalent to NETMASK; use one or the other)
*GATEWAY Default gateway (next hop, the router on the local network; required for cross-network communication)
*DNS1 First DNS server address (e.g. 223.6.6.6, 223.5.5.5, 180.76.76.76)
*DNS2 Second DNS server address (for redundancy; configured DNS servers are written to /etc/resolv.conf)
DOMAIN Domain suffixes searched automatically when a hostname is not fully qualified
*ONBOOT Whether to activate the device at boot (set to yes)
USERCTL Whether ordinary users may control the device
PEERDNS When BOOTPROTO is "dhcp", yes lets the DNS servers handed out by the DHCP server overwrite /etc/resolv.conf; no leaves resolv.conf untouched
NM_CONTROLLED Whether the NIC is managed by NM (NetworkManager)
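A complete static configuration using the fields above might look like this (all values are illustrative):

# /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
NAME=eth0
DEVICE=eth0
BOOTPROTO=none
IPADDR=10.0.0.8
PREFIX=24
GATEWAY=10.0.0.2
DNS1=223.5.5.5
DNS2=223.6.6.6
ONBOOT=yes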

Routing

Configuration file:

  • /etc/sysconfig/network-scripts/route-IFACE

Syntax:

  • Style 1: TARGET via GW

    • Example:

      # /etc/sysconfig/network-scripts/route-eth0
      10.0.0.0/8 via 172.16.0.1
      192.0.2.0/24 via 198.51.100.1 dev enp0s1
  • Style 2: per-route variables

    • Example:

      # /etc/sysconfig/network-scripts/route-eth0
      ADDRESS0=192.0.2.0
      NETMASK0=255.255.255.0
      GATEWAY0=198.51.100.1
    • Note: to add more static routes, increment the number in the variable names. The variables of each route must be numbered sequentially, e.g. ADDRESS0, ADDRESS1, ADDRESS2, and so on.

Note: if you define a separate route configuration file, do not also set gateway information in the NIC configuration file; the NIC configuration file has higher priority than the route file, so the route file would not take effect.

Example:

# /etc/sysconfig/network-scripts/route-eth0
default via 10.0.0.2   # default route
10.23.0.0/16 via 0.0.0.0 # network route (on-link)

#apply the changes
# nmcli connection reload
# nmcli connection up eth0

#resulting routing table
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    102    0        0 eth0
10.23.0.0       0.0.0.0         255.255.0.0     U     102    0        0 eth0

Binding multiple IP addresses to a regular NIC

Method 1: add the IP addresses directly

#config
# /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
NAME=eth0
IPADDR=10.0.0.8
IPADDR1=10.0.0.100
IPADDR2=10.0.0.200
PREFIX=16
PREFIX1=16
PREFIX2=32
BOOTPROTO=static
ONBOOT=yes


#reload
# nmcli connection reload 
# nmcli connection up eth0 


#result
# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:cc:63:39 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.8/16 brd 10.0.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.200/32 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.100/16 brd 10.0.255.255 scope global secondary noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fecc:6339/64 scope link 
       valid_lft forever preferred_lft forever
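The same result can be reached without editing the file, using nmcli's +/- syntax for multi-value properties (a sketch; the connection name eth0 and the address are illustrative):

# Add an extra address to the existing connection profile
nmcli connection modify eth0 +ipv4.addresses 10.0.0.100/16

# Remove it again
nmcli connection modify eth0 -ipv4.addresses 10.0.0.100/16

# Re-activate the connection to apply the change
nmcli connection up eth0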

Method 2: NIC alias

  • Bind multiple IP addresses to the same NIC by giving the extra addresses an alias label
#original NIC configuration
# /etc/sysconfig/network-scripts/ifcfg-eth0 
DEVICE=eth0
NAME=eth0
IPADDR=10.0.0.8
PREFIX=16
BOOTPROTO=static
ONBOOT=yes


#copy the original configuration file and set the alias label
# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0:1
# vim /etc/sysconfig/network-scripts/ifcfg-eth0:1
DEVICE=eth0:1
NAME=eth0:1
IPADDR=10.0.0.88
PREFIX=16
BOOTPROTO=static
ONBOOT=yes


#reload
# nmcli connection reload 
# nmcli connection up eth0 


#result
# ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:cc:63:39 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.8/16 brd 10.0.255.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.88/16 brd 10.0.255.255 scope global secondary noprefixroute eth0:1 # the alias label
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fecc:6339/64 scope link 
       valid_lft forever preferred_lft forever

Binding multiple IP addresses to the lo interface

  • Apparently this can only be done via /etc/rc.local? (see the sketch below)

The nmcli command does not support it either

# nmcli device 
DEVICE  TYPE      STATE      CONNECTION  
eth0    ethernet  connected  test_static 
lo      loopback  unmanaged  --        

# nmcli device connect lo 
Error: Failed to add/activate new connection: Device class NMDeviceGeneric had no complete_connection method # the generic device class cannot complete the connection

#the list of connection types does not include a loopback device either
# nmcli con add con-name lo_lvs ifname lo autoconnect yes ipv4.addresses 10.0.0.123/32 type 
6lowpan           bond              ethernet          macvlan           team              vxlan             
802-11-olpc-mesh  bond-slave        generic           olpc-mesh         team-slave        wifi              
802-11-wireless   bridge            gsm               ovs-bridge        tun               wifi-p2p          
802-3-ethernet    bridge-slave      infiniband        ovs-interface     vlan              wimax             
adsl              cdma              ip-tunnel         ovs-port          vpn               wireguard         
bluetooth         dummy             macsec            pppoe             vrf               wpan        
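Since nmcli refuses to manage lo here, one workaround (a sketch, assuming the rc-local service is enabled; the address is illustrative) is to add the command to /etc/rc.local so it is applied at boot:

#!/bin/bash
# /etc/rc.local  -- must be executable: chmod +x /etc/rc.local
ip addr add 10.0.0.123/32 dev lo label lo:1
exit 0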

One static address plus one DHCP address on a single NIC

  • Approach: copy the NIC configuration file; the main profile obtains its address via DHCP and the alias profile uses a static address (sketched below)
  • Note that the alias must use a static address
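A sketch of the two profiles under those assumptions (interface name and static address are illustrative):

# /etc/sysconfig/network-scripts/ifcfg-eth0    -- main profile, DHCP
DEVICE=eth0
NAME=eth0
BOOTPROTO=dhcp
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0:1  -- alias profile, static address
DEVICE=eth0:1
NAME=eth0:1
BOOTPROTO=static
IPADDR=10.0.0.88
PREFIX=24
ONBOOT=yes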

Bridge

#1 create the bridge
nmcli con add type bridge con-name br0 ifname br0
nmcli connection modify br0 ipv4.addresses 192.168.0.100/24 ipv4.method manual
nmcli con up br0

#2 enslave the physical NICs
nmcli con add type bridge-slave con-name br0-port0 ifname eth0 master br0
nmcli con add type bridge-slave con-name br0-port1 ifname eth1 master br0
nmcli con up br0-port0
nmcli con up br0-port1
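To check the bridge and its ports after activation (standard verification commands):

nmcli connection show
bridge link show
ip addr show br0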

Configuration file examples

  • nmcli con add con-name mybr0 type bridge ifname br0
# cat ifcfg-mybr0 
STP=yes
BRIDGING_OPTS=priority=32768
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=mybr0
UUID=ec5945b2-4ea3-4893-b109-686236e5fc75
DEVICE=br0
ONBOOT=yes
  • nmcli con modify mybr0 ipv4.addresses 192.168.0.100/24 ipv4.method manual
# cat ifcfg-mybr0 
STP=yes
BRIDGING_OPTS=priority=32768
TYPE=Bridge
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=mybr0
UUID=ec5945b2-4ea3-4893-b109-686236e5fc75
DEVICE=br0
ONBOOT=yes
IPADDR=192.168.0.100
PREFIX=24
  • nmcli con add con-name br0-port0 type bridge-slave ifname eth0 master br0
# cat ifcfg-br0-port0 
TYPE=Ethernet
NAME=br0-port0
UUID=bec6d1f3-80dc-4a09-b7c3-043322486793
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

Applying the configuration

  • Restarting the network:
# CentOS 6/7 (on newer releases the legacy network-scripts package must be installed before systemctl restart network works)
systemctl restart network

# CentOS 8
nmcli c reload  # reload the connection profiles from disk; c is short for connection
nmcli c up <connection NAME>  # activate the connection

Ubuntu Net Config

  • Official documentation: https://ubuntu.com/server/docs/network-configuration
  • Help: man 5 netplan
  • Note: Ubuntu NIC configuration files are YAML, and YAML indentation is strict; check indentation with cat -A
  • Any configuration file whose name ends in .yaml is picked up
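Before applying, a file can be checked and test-applied; netplan try rolls the change back automatically unless it is confirmed (the file name below is illustrative):

# Show tabs and trailing whitespace, which break YAML parsing
cat -A /etc/netplan/01-netcfg.yaml

# Apply with automatic rollback on timeout, then apply permanently
netplan try
netplan apply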

DHCP (automatic address assignment)

# /etc/netplan/eth1-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth1:
      dhcp4: yes

Static IP on a single NIC

  • Method 1: the common form, with a clearer layout
# /etc/netplan/eth1-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth1:
      addresses:
        - 10.0.0.10/24
        - 192.168.8.10/24
      gateway4: 10.0.0.8
      nameservers:
        addresses: [180.76.76.76, 223.5.5.5, 223.6.6.6]   
        search: [xiangzheng.vip]                                                
  • Method 2:
# /etc/netplan/eth1-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth1:
      addresses: [10.0.0.100/24,192.168.8.10/24]
      gateway4: 10.0.0.8
      nameservers:
        addresses: [180.76.76.76, 223.5.5.5, 223.6.6.6]   
        search: [xiangzheng.vip]  

Static IPs and routes on multiple NICs

# cat netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        - 10.0.0.100/24
        - 10.0.0.110/24
        - 10.0.0.111/24
      gateway4: 10.0.0.2
      nameservers:
        addresses: [180.76.76.76, 223.5.5.5, 223.6.6.6]
        search: [xiangzheng.vip]
      routes:
        - to: 10.30.0.0/16
          via: 0.0.0.0
        - to: 10.60.0.0/18
          via: 0.0.0.0
        - to: 10.80.0.0/28
          via: 0.0.0.0
    eth1:
      addresses: 
        - 10.0.0.123/24
        
#apply the changes
netplan apply

#resulting routes and addresses
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.30.0.0       0.0.0.0         255.255.0.0     U     0      0        0 eth0
10.60.0.0       0.0.0.0         255.255.192.0   U     0      0        0 eth0
10.80.0.0       0.0.0.0         255.255.255.240 U     0      0        0 eth0
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:7f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.110/24 brd 10.0.0.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.111/24 brd 10.0.0.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fecd:b57f/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:89 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.123/24 brd 10.0.0.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fecd:b589/64 scope link 
       valid_lft forever preferred_lft forever

One configuration file per NIC

# cat eth0-netcfg.yaml 
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      addresses:
        - 10.0.0.100/24
        - 10.0.0.110/24
        - 10.0.0.111/24
      gateway4: 10.0.0.2
      nameservers:
        addresses: [180.76.76.76, 223.5.5.5, 223.6.6.6]
        search: [xiangzheng.vip]
      routes:
        - to: 10.30.0.0/16
          via: 10.0.0.66

# cat eth1-netcfg.yaml 
network:
  version: 2
  renderer: networkd
  ethernets:
    eth1:
      addresses: 
        - 10.0.0.123/24



#apply the changes
netplan apply

#resulting routes and addresses
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.30.0.0       10.0.0.66       255.255.0.0     UG    0      0        0 eth0
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:7f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.110/24 brd 10.0.0.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.111/24 brd 10.0.0.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fecd:b57f/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:89 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.123/24 brd 10.0.0.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fecd:b589/64 scope link 
       valid_lft forever preferred_lft forever

Single bridge

# cat netcfg.yaml 
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
    eth1:
      dhcp4: no
      dhcp6: no
  bridges:
    br0:
      dhcp4: no
      dhcp6: no
      addresses:
        - 10.0.0.100/24
      gateway4: 10.0.0.2
      nameservers:
        addresses: [180.76.76.76, 223.5.5.5, 223.6.6.6]
        search: [xiangzheng.vip]
      routes:
        - to: 10.30.0.0/16
          via: 10.0.0.66
      interfaces:
        - eth0
        - eth1

#apply the changes
netplan apply

#result
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 br0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 br0
10.30.0.0       10.0.0.66       255.255.0.0     UG    0      0        0 br0
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:7f brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:89 brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:7f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::b460:d6ff:fe0c:ae23/64 scope link 
       valid_lft forever preferred_lft forever
# ping www.baidu.com
PING www.a.shifen.com (39.156.66.18) 56(84) bytes of data.
64 bytes from 39.156.66.18 (39.156.66.18): icmp_seq=1 ttl=128 time=34.2 ms
64 bytes from 39.156.66.18 (39.156.66.18): icmp_seq=1 ttl=128 time=43.6 ms (DUP!)
64 bytes from 39.156.66.18 (39.156.66.18): icmp_seq=2 ttl=128 time=34.3 ms
64 bytes from 39.156.66.18 (39.156.66.18): icmp_seq=2 ttl=128 time=42.6 ms (DUP!)
64 bytes from 39.156.66.18 (39.156.66.18): icmp_seq=3 ttl=128 time=32.1 ms
64 bytes from 39.156.66.18 (39.156.66.18): icmp_seq=3 ttl=128 time=45.7 ms (DUP!)
# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.000c29cdb57f	no		      eth0
							              eth1

Multiple bridges

# cat /etc/netplan/netcfg.yaml 
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
    eth1:
      dhcp4: no
  bridges:
    br0:
      dhcp4: no
      addresses:
        - 10.0.0.100/24
      gateway4: 10.0.0.2
      nameservers:
        addresses: [180.76.76.76, 223.5.5.5, 223.6.6.6]
        search: [xiangzheng.vip]
      routes:
        - to: 10.30.0.0/16
          via: 10.0.0.66
      interfaces:
        - eth0
    br1:
      dhcp4: no
      addresses:
        - 10.0.0.111/24
      nameservers:
        addresses: [180.76.76.76, 223.5.5.5, 223.6.6.6]
        search: [xiangzheng.vip]
      routes:
        - to: 10.30.0.0/16
          via: 10.0.0.66
        - to: 10.80.0.0/16
          via: 10.0.0.88
      interfaces:
        - eth1
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:7f brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br1 state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:89 brd ff:ff:ff:ff:ff:ff
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:7f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::789e:4eff:feab:58be/64 scope link 
       valid_lft forever preferred_lft forever
5: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:89 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.111/24 brd 10.0.0.255 scope global br1
       valid_lft forever preferred_lft forever
    inet6 fe80::e86b:bdff:fe02:c1a6/64 scope link 
       valid_lft forever preferred_lft forever


#apply the changes
netplan apply


#result
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 br0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 br1
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 br0
10.30.0.0       10.0.0.66       255.255.0.0     UG    0      0        0 br1
10.30.0.0       10.0.0.66       255.255.0.0     UG    0      0        0 br0
10.80.0.0       10.0.0.88       255.255.0.0     UG    0      0        0 br1
# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.000c29cdb57f	no		eth0
br1		8000.000c29cdb589	no		eth1

Dual-NIC bonding

  • The example below uses the active-backup mode

Configuration file

# cat /etc/netplan/netcfg.yaml 
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
    eth1:
      dhcp4: no
      dhcp6: no
  bonds:
    bond0:
      addresses:
        - 10.0.0.100/24
      gateway4: 10.0.0.2  
      nameservers:
        addresses: [180.76.76.76, 223.5.5.5, 223.6.6.6]
        search: [xiangzheng.vip]
      routes:
        - to: 10.30.0.0/16
          via: 10.0.0.66
        - to: 10.80.0.0/16
          via: 10.0.0.88
      interfaces:
        - eth0
        - eth1
      parameters:
        mode: active-backup
        mii-monitor-interval: 100

Apply

netplan apply

Resulting configuration

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether f6:ad:84:c8:7b:bc brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether f6:ad:84:c8:7b:bc brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f6:ad:84:c8:7b:bc brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::f4ad:84ff:fec8:7bbc/64 scope link 
       valid_lft forever preferred_lft forever

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 bond0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 bond0
10.30.0.0       10.0.0.66       255.255.0.0     UG    0      0        0 bond0
10.80.0.0       10.0.0.88       255.255.0.0     UG    0      0        0 bond0

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1 # the currently active slave
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eth1 # member
MII Status: up # status
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:cd:b5:89
Slave queue ID: 0

Slave Interface: eth0 # member
MII Status: up # status
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:cd:b5:7f
Slave queue ID: 0

Testing

# cat /proc/net/bonding/bond0
...
Currently Active Slave: eth1 # eth1 is currently the active slave
...

# pinging from another host works fine
# ping 10.0.0.100
PING 10.0.0.100 (10.0.0.100) 56(84) bytes of data.
64 bytes from 10.0.0.100: icmp_seq=1 ttl=64 time=0.637 ms
...


#disconnect the eth1 NIC
(steps omitted ...)

# cat /proc/net/bonding/bond0
...
Currently Active Slave: eth0 # eth0 is now the active slave
...


# pinging from another host still works
# ping 10.0.0.100
PING 10.0.0.100 (10.0.0.100) 56(84) bytes of data.
64 bytes from 10.0.0.100: icmp_seq=53 ttl=64 time=0.361 ms

Dual-NIC bonding + bridge
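A minimal netplan sketch of the combination (interface names and addresses are illustrative assumptions): the two NICs form bond0, and br0 sits on top of the bond and carries the IP address.

# /etc/netplan/netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
    eth1:
      dhcp4: no
  bonds:
    bond0:
      interfaces: [eth0, eth1]
      parameters:
        mode: active-backup
        mii-monitor-interval: 100
  bridges:
    br0:
      interfaces: [bond0]
      addresses:
        - 10.0.0.100/24
      gateway4: 10.0.0.2
      nameservers:
        addresses: [223.5.5.5, 223.6.6.6]

#apply
netplan apply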

Restarting a NIC

ifdown eth0 # bring the interface down
ifup eth0 # bring the interface up

Applying the configuration

  • After applying, check whether the change took effect with resolvectl status; do not rely on /etc/resolv.conf
netplan apply

Unifying NIC names

  • Rename interfaces to the traditional eth0-style names

Edit the GRUB configuration

Non-interactive edit

sed -ri.bak '/^GRUB_CMDLINE_LINUX/s/(.*)(")$/\1 net.ifnames=0\2/' /etc/default/grub 


sed -i.bak '/^GRUB_CMDLINE_LINUX=/s#"$# net.ifnames=0"#' /etc/default/grub

Interactive edit

# vim /etc/default/grub
...
GRUB_CMDLINE_LINUX="net.ifnames=0"
...

Apply

CentOS

grub2-mkconfig -o /boot/grub2/grub.cfg

nmcli connection reload

nmcli connection up eth0

--------------------------------------------------------------------------------------

#if default NIC configuration files were generated beforehand, also change the interface name inside them
# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
NAME=eth0
...

Ubuntu

grub-mkconfig -o /boot/grub/grub.cfg

update-grub

The lo interface

About the lo interface

The loopback adaptor is a special network interface that is not attached to any physical device and is implemented entirely in software. Unlike the loopback addresses (127.0.0.0/8 or ::1/128), the loopback adaptor appears to the system as a piece of hardware. Anything sent to it is immediately received on the same interface. Examples are the lo interface on Linux and the Microsoft Loopback Interface on Windows.

Because the loopback adaptor is implemented in software rather than tied to a hardware device, it is not affected by NIC failures. In scenarios that require a highly reliable IP address, the address is therefore often bound to the loopback interface, e.g. for LVS.

Characteristics of the lo interface

Addresses added to lo do not generate route entries

#bind the address to the lo interface
# ip addr add 8.8.8.8/32 dev lo label lo:1
# ip addr show dev lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 8.8.8.8/32 scope global lo:1
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

#no route entry is generated
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0

An address added to lo answers for its whole subnet

#lo configuration on this host
# ip addr show lo 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 6.6.6.6/24 scope global lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

#connectivity tests
# ping 127.0.0.1
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms
# ping 127.1.2.3
64 bytes from 127.1.2.3: icmp_seq=1 ttl=64 time=0.097 ms
# ping 127.6.7.8
64 bytes from 127.6.7.8: icmp_seq=1 ttl=64 time=0.039 ms
# ping 6.6.6.6
64 bytes from 6.6.6.6: icmp_seq=1 ttl=64 time=0.027 ms
# ping 6.6.6.8
64 bytes from 6.6.6.8: icmp_seq=1 ttl=64 time=0.084 ms
# ping 6.6.8.8
ping: connect: Network is unreachable

Binding an IP address to lo

  • Because addresses added to lo do not generate route entries, a route must be added manually before the address can be reached remotely
  • Typical use case
    • LVS

Add the IP to lo

  • Note: this does not survive a reboot

  • Omitting the netmask implies /32, but newer versions of the ip command warn that this shorthand may be removed, so it is best to always write the full address/mask

# ip addr add 8.8.8.8/32 dev lo label lo:1
# ip addr show dev lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 8.8.8.8/32 scope global lo:1
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever

Manually add the route

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 8.8.8.8/32 scope global lo:1 #8.8.8.8/32
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:7f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fecd:b57f/64 scope link 
       valid_lft forever preferred_lft forever
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0


----------------------------------------------------------------------------------

# route add -host 8.8.8.8/32 gw 10.0.0.101 dev eth0

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:0d:63:cd brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.101/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe0d:63cd/64 scope link 
       valid_lft forever preferred_lft forever
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
8.8.8.8         10.0.0.101      255.255.255.255 UGH   0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
# ip route 
8.8.8.8 via 10.0.0.101 dev eth0 scope link 
10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.101 

Address scope

IP addresses on Linux are stored in the kernel, not tied to a particular NIC: with multiple NICs, a packet is answered no matter which NIC it arrives on, as long as the host owns the destination IP address and the address's scope is global.

  • Scope values for a NIC's IP address
    • global: usable everywhere; the default, as described above
    • link: valid only on the link, i.e. the address is answered only by the NIC it is bound to
    • host: valid only inside this host, e.g. the default 127.0.0.1/8 on lo

Example:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo # scope host: usable only on this machine
       valid_lft forever preferred_lft forever
    inet 8.8.8.8/32 scope global lo:1 # scope global: usable everywhere
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:cd:b5:7f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fecd:b57f/64 scope link # scope link: usable only on this link
       valid_lft forever preferred_lft forever

CentOS multi-NIC bonding

Overview

  • Bonding binds several NICs to one IP address to provide high availability or load balancing. Assigning the same IP address to two NICs directly is not possible; instead, bonding presents one virtual NIC to the outside, and the physical member NICs are given the same MAC address.

help

/usr/share/doc/kernel-doc-<version>/Documentation/networking/bonding.txt

https://www.kernel.org/doc/Documentation/networking/bonding.txt

Bonding modes

Mode 0 (balance-rr), round-robin

  • Round-robin policy: packets are sent sequentially over each slave interface from the first to the last. Provides load balancing and fault tolerance.

Mode 1 (active-backup), active/backup

  • Only one slave is active; another slave is activated only when the active one fails. To avoid confusing the switch, the bond's MAC address is visible on only one external port at a time.

Mode 3 (broadcast)

  • Every packet is transmitted on all slave interfaces. Provides fault tolerance.

Other modes

  • (omitted)

Note:

  • The active-backup, balance-tlb and balance-alb modes need no special switch configuration. The other modes require the switch to be configured for link aggregation; for example, Cisco switches need EtherChannel for modes 0, 2 and 3, and LACP plus EtherChannel for mode 4.

Related commands

Check bond0 status

/proc/net/bonding/bond0

Delete bond0

ifconfig bond0 down

rmmod bonding
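With iproute2 the bond can also be removed without unloading the bonding module; a brief sketch:

ip link set bond0 down
ip link delete bond0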

Bonding example:

Environment

System    IP        NICs        Network mode
CentOS 8  10.0.0.8  eth0, eth1  NAT

Bonding configuration file:

  • The bond configuration file name must start with ifcfg-bond
# /etc/sysconfig/network-scripts/ifcfg-bond0
TYPE=bond
DEVICE=bond0
BOOTPROTO=none
IPADDR=10.0.0.100 # the bond's externally visible IP address
PREFIX=8
DNS1=180.76.76.76
BONDING_OPTS="mode=1 miimon=100" # mode=N selects the bonding mode; miimon is the link-monitoring interval in ms. With miimon=100 the link state is checked every 100 ms, and traffic fails over to the other link if one goes down

eth0 configuration:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
ONBOOT=yes

eth1 configuration:

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
ONBOOT=yes

Result

  • Reboot (or restart the network) to apply the configuration
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether 00:0c:29:cc:63:2f brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether 00:0c:29:cc:63:2f brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0c:29:cc:63:2f brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/8 brd 10.255.255.255 scope global noprefixroute bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fecc:632f/64 scope link 
       valid_lft forever preferred_lft forever



# cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 180.76.76.76



# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0 # eth0 is currently the serving NIC
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:cc:63:2f
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:cc:63:39
Slave queue ID: 0

Disconnect the active eth0 NIC

  • Even with the active NIC disconnected, the bond keeps serving traffic
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1 # eth1 has become the active slave
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eth0
MII Status: down # eth0 is down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 1
Permanent HW addr: 00:0c:29:0b:14:8b
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:0b:14:95
Slave queue ID: 0

Ubuntu multi-NIC bonding

Ubuntu bonding modes

Ubuntu supports seven bonding modes

mod=0

  • (balance-rr) Round-robin policy
  • Behaviour:
    • Packets are transmitted in sequence: the first packet goes out eth0, the next one eth1, and so on, cycling until the last packet has been sent
  • Provides load balancing and fault tolerance.

mod=1

  • (active-backup) Active-backup policy
  • Behaviour:
    • Only one device is active at a time; when it fails, the backup immediately takes over as the active device.
    • The MAC address visible externally is the bond's single MAC address, to avoid confusing the switch.
  • This mode only provides fault tolerance. It gives good availability, but resource utilisation is low: only one interface is working, so with N interfaces the utilisation is 1/N.

mod=2

  • (balance-xor) XOR policy
  • Behaviour:
    • Packets are transmitted according to the configured hash policy. The default policy is (source MAC XOR destination MAC) % number of slaves; other policies can be selected with the xmit_hash_policy option
  • Provides load balancing and fault tolerance.

mod=3

  • broadcast policy
  • Behaviour:
    • Every packet is transmitted on every slave interface
  • Provides fault tolerance.

mod=4

  • (802.3ad) IEEE 802.3ad dynamic link aggregation (see the netplan sketch after this list)
  • Behaviour:
    • Creates an aggregation group whose members share the same speed and duplex settings.
    • Multiple slaves operate under the same active aggregator according to the 802.3ad specification.
  • Requirements:
    • ethtool must be able to read the speed and duplex of each slave.
    • The switch must support IEEE 802.3ad dynamic link aggregation.
    • Most switches need specific configuration before they support 802.3ad mode.

mod=5

  • (balance-tlb) Adaptive transmit load balancing
  • Behaviour:
    • Channel bonding that needs no special switch support. Outgoing traffic is distributed over the slaves according to their current load (computed from their speed). If the slave that is receiving traffic fails, another slave takes over the failed slave's MAC address.
  • Requirement:
    • ethtool must be able to read the speed of each slave

mod=6

  • (balance-alb) Adaptive load balancing
  • Behaviour:
    • Includes balance-tlb and additionally provides receive load balancing (rlb) for IPv4 traffic, again without any switch support.
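As referenced under mod=4, a netplan sketch of an 802.3ad/LACP bond (interface names and addresses are illustrative assumptions; the switch-side LACP configuration described above is still required):

network:
  version: 2
  renderer: networkd
  ethernets:
    eth0: {dhcp4: no}
    eth1: {dhcp4: no}
  bonds:
    bond0:
      interfaces: [eth0, eth1]
      addresses: [10.0.0.100/24]
      parameters:
        mode: 802.3ad
        lacp-rate: fast
        transmit-hash-policy: layer3+4
        mii-monitor-interval: 100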

Ubuntu bonding in practice

  • The example below uses the active-backup mode

Configuration file

# cat /etc/netplan/netcfg.yaml 
network:
  version: 2
  renderer: networkd
  ethernets:
    eth0:
      dhcp4: no
      dhcp6: no
    eth1:
      dhcp4: no
      dhcp6: no
  bonds:
    bond0:
      addresses:
        - 10.0.0.100/24
      gateway4: 10.0.0.2  
      nameservers:
        addresses: [180.76.76.76, 223.5.5.5, 223.6.6.6]
        search: [xiangzheng.vip]
      routes:
        - to: 10.30.0.0/16
          via: 10.0.0.66
        - to: 10.80.0.0/16
          via: 10.0.0.88
      interfaces:
        - eth0
        - eth1
      parameters:
        mode: active-backup
        mii-monitor-interval: 100

Apply

netplan apply

Resulting configuration

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether f6:ad:84:c8:7b:bc brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
    link/ether f6:ad:84:c8:7b:bc brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f6:ad:84:c8:7b:bc brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::f4ad:84ff:fec8:7bbc/64 scope link 
       valid_lft forever preferred_lft forever

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 bond0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 bond0
10.30.0.0       10.0.0.66       255.255.0.0     UG    0      0        0 bond0
10.80.0.0       10.0.0.88       255.255.0.0     UG    0      0        0 bond0

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1 # the currently active slave
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: eth1 # member
MII Status: up # status
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:cd:b5:89
Slave queue ID: 0

Slave Interface: eth0 # member
MII Status: up # status
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:cd:b5:7f
Slave queue ID: 0

Testing

# cat /proc/net/bonding/bond0
...
Currently Active Slave: eth1 # eth1 is currently the active slave
...

# pinging from another host works fine
# ping 10.0.0.100
PING 10.0.0.100 (10.0.0.100) 56(84) bytes of data.
64 bytes from 10.0.0.100: icmp_seq=1 ttl=64 time=0.637 ms
...


#disconnect the eth1 NIC
(steps omitted ...)

# cat /proc/net/bonding/bond0
...
Currently Active Slave: eth0 # eth0 is now the active slave
...


# pinging from another host still works
# ping 10.0.0.100
PING 10.0.0.100 (10.0.0.100) 56(84) bytes of data.
64 bytes from 10.0.0.100: icmp_seq=53 ttl=64 time=0.361 ms

NIC redundancy with network teaming

  • Network Teaming

  • A team aggregates multiple NICs to provide fault tolerance and higher throughput

  • Teaming differs from the older bonding technique and offers better performance and extensibility

  • A team is implemented by a kernel driver plus the teamd daemon

  • Available runners:

    • broadcast
    • roundrobin
    • activebackup
    • loadbalance
    • lacp (implements the 802.3ad Link Aggregation Control Protocol)
  • Characteristics of a team

    • Starting the team interface does not automatically start its port interfaces

    • Starting a port interface always starts the team interface

    • Disabling the team interface automatically disables its port interfaces

    • A team without port interfaces can start a static IP connection

    • With DHCP, a team without port interfaces waits for ports to join

#create the team interface
nmcli con add type team con-name CNAME ifname INAME [config JSON]
CNAME  connection name
INAME  interface name
JSON   specifies the runner, in the form '{"runner": {"name": "METHOD"}}'
METHOD one of broadcast, roundrobin, activebackup, loadbalance, lacp

#create a port interface
nmcli con add type team-slave con-name CNAME ifname INAME master TEAM
CNAME  connection name; if omitted it defaults to team-slave-IFACE
INAME  network interface name
TEAM   team interface name

#disconnect and bring up
nmcli dev dis INAME
nmcli con up CNAME
INAME  device name    CNAME  team connection name or port connection name

Teaming example:

nmcli con add type team con-name myteam0 ifname team0 config '{"runner": 
{"name": "loadbalance"}}' ipv4.addresses 192.168.1.100/24 ipv4.method manual
nmcli con add con-name team0-eth1 type team-slave ifname eth1 master team0
nmcli con add con-name team0-eth2 type team-slave ifname eth2 master team0
nmcli con up myteam0
nmcli con up team0-eth1
nmcli con up team0-eth2

teamdctl team0 state
ping -I team0 192.168.0.254
nmcli dev dis eth1
teamdctl team0 state
nmcli con up team0-eth1
nmcli dev dis eth2
teamdctl team0 state
nmcli con up team0-eth2
teamdctl team0 state

Team interface configuration file

# /etc/sysconfig/network-scripts/ifcfg-team0
DEVICE=team0
DEVICETYPE=Team
TEAM_CONFIG="{\"runner\": {\"name\": \"broadcast\"}}"
BOOTPROTO=none
IPADDR0=172.16.0.100
PREFIX0=24
NAME=team0
ONBOOT=yes

Team port configuration file

# /etc/sysconfig/network-scripts/ifcfg-team0-eth1
DEVICE=eth1
DEVICETYPE=TeamPort
TEAM_MASTER=team0
NAME=team0-eth1
ONBOOT=yes

Delete the team

nmcli connection down team0
teamdctl team0 state
nmcli connection show
nmcli connection delete team0-eth0
nmcli connection delete team0-eth1
nmcli connection show
