[Repost] (Translation) Macvlan vs Ipvlan


https://www.jianshu.com/p/9b8c370baca1

 

Macvlan vs Ipvlan

I’ve covered macvlans in the Bridge vs Macvlan post. If you are new to macvlan concept, go ahead and read it first.

Macvlan

To recap: Macvlan allows you to configure sub-interfaces (also termed slave devices) of a parent, physical Ethernet interface (also termed upper device), each with its own unique MAC address, and consequently its own IP address.
Applications, VMs and containers can then bind to a specific sub-interface to connect directly to the physical network, using their own MAC and IP address.
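In iproute2 terms, this looks roughly as follows (a minimal sketch; the interface name `macvlan0`, parent `eth0` and the 10.10.40.0/24 addresses are illustrative assumptions, and the commands require root):

```shell
# Create a macvlan sub-interface on top of the physical interface eth0
ip link add macvlan0 link eth0 type macvlan mode bridge

# Assign the sub-interface its own IP address and bring it up;
# the kernel has already given it its own, unique MAC address
ip addr add 10.10.40.11/24 dev macvlan0
ip link set macvlan0 up

# Observe that macvlan0's MAC address differs from eth0's
ip link show macvlan0
```

A VM or container can then be attached to `macvlan0` and appears on the physical network as a separate host.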

 
Linux Macvlan

Macvlan is a near-ideal solution to natively connect VMs and containers to a physical network, but it has its shortcomings:

  • The switch the host is connected to may have a policy that limits the number of different MAC addresses on a physical port. Although you should really work with your network administrator to change the policy, there are times when this might not be possible (or you just need to set up a quick PoC).

  • Many NICs have a limit on the number of MAC addresses they support in hardware. Exceeding the limit may affect performance.

  • IEEE 802.11 doesn't like multiple MAC addresses on a single client. It is likely macvlan sub-interfaces will be blocked by your wireless interface driver, AP or both. There are somewhat complex ways around that limitation, but why not stick to a simple solution?


Ipvlan

Ipvlan is very similar to macvlan, with one important difference: ipvlan does not assign unique MAC addresses to the created sub-interfaces. All sub-interfaces share the parent interface's MAC address, but use distinct IP addresses.
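The creation commands mirror the macvlan ones (a sketch; `ipvlan0`, `eth0` and the address are illustrative assumptions, root required):

```shell
# Create an ipvlan sub-interface on eth0 in L2 mode
ip link add ipvlan0 link eth0 type ipvlan mode l2
ip addr add 10.10.40.21/24 dev ipvlan0
ip link set ipvlan0 up

# Unlike macvlan, ipvlan0 reports the same MAC address as eth0
ip link show ipvlan0
```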

 
Linux Ipvlan

Because all VMs or containers on a single parent interface use the same MAC address, ipvlan also has some shortcomings:

  • Shared MAC address can affect DHCP operations. If your VMs or containers use DHCP to acquire network settings, make sure they use a unique ClientID in the DHCP request, and ensure your DHCP server assigns IP addresses based on the ClientID, not the client's MAC address.
  • Autoconfigured EUI-64 IPv6 addresses are based on the MAC address. All VMs or containers sharing the same parent interface will auto-generate the same IPv6 address. Ensure that your VMs or containers use static IPv6 addresses or IPv6 privacy addresses, and disable SLAAC.
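Concretely, both workarounds can be sketched like this (the identifier string and the interface name are illustrative assumptions; shown for ISC dhclient, other DHCP clients have equivalent options):

```shell
# Inside each container/VM: send a unique DHCP client identifier
# instead of relying on the (shared) MAC address
cat >> /etc/dhcp/dhclient.conf <<'EOF'
send dhcp-client-identifier "container-a";
EOF

# Disable SLAAC on the ipvlan sub-interface so the shared-MAC EUI-64
# address is never auto-generated
sysctl -w net.ipv6.conf.ipvlan0.autoconf=0
sysctl -w net.ipv6.conf.ipvlan0.accept_ra=0
```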

Ipvlan modes

Ipvlan has two modes of operation. Only one of the two modes can be selected on a single parent interface. All sub-interfaces operate in the selected mode.

Ipvlan L2

Ipvlan L2, or Layer 2 mode, is analogous to the macvlan bridge mode.

 
Linux Ipvlan - L2 mode

The parent interface acts as a switch between the sub-interfaces. All VMs or containers connected to the same parent ipvlan interface and in the same subnet can communicate with each other directly through the parent interface. Traffic destined to other subnets is sent out through the parent interface to the default gateway (a physical router). Ipvlan in L2 mode distributes broadcasts/multicasts to all sub-interfaces.

Ipvlan L3

Ipvlan L2 mode acts as a bridge or switch between the sub-interfaces and the parent interface. As the name suggests, Ipvlan L3, or Layer 3 mode, acts as a Layer 3 device (router) between the sub-interfaces and the parent interface.

 
Linux Ipvlan - L3 mode

Ipvlan L3 mode routes the packets between all sub-interfaces, thus providing full Layer 3 connectivity. Each sub-interface has to be configured with a different subnet, i.e. you cannot configure 10.10.40.0/24 on two sub-interfaces.
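A minimal L3-mode sketch (interface names and subnets are illustrative assumptions; note each sub-interface gets a different subnet):

```shell
# Two ipvlan sub-interfaces on the same parent, both in L3 mode
ip link add ipvl3a link eth0 type ipvlan mode l3
ip link add ipvl3b link eth0 type ipvlan mode l3

# Each sub-interface must sit in its own subnet;
# configuring 10.10.40.0/24 on both would not work
ip addr add 10.10.40.1/24 dev ipvl3a
ip addr add 10.10.50.1/24 dev ipvl3b

ip link set ipvl3a up
ip link set ipvl3b up
```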

Broadcasts are limited to a Layer 2 domain, so they cannot pass from one sub-interface to another. Ipvlan L3 mode does not support multicast.

Ipvlan L3 mode does not support routing protocols, so it cannot notify the physical network router of the subnets it connects to. You need to configure static routes on the physical router, pointing to the host's physical interface, for all subnets on the sub-interfaces.
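For example, assuming the physical router is a Linux box and the host's physical interface has the (illustrative) address 192.168.1.10, the static routes on the router would look like:

```shell
# On the physical router: send traffic for the ipvlan subnets
# to the host's physical interface, since ipvlan cannot advertise them
ip route add 10.10.40.0/24 via 192.168.1.10
ip route add 10.10.50.0/24 via 192.168.1.10
```

On a hardware router the equivalent static routes would be configured through its own CLI.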

Ipvlan L3 mode behaves like a router: it forwards IP packets between different subnets, but it does not decrement the TTL of passing packets. Thus, you will not see the ipvlan "router" as a hop when doing a traceroute.

Ipvlan L3 can be used in conjunction with BGP running inside a VM or container, as a service advertisement protocol to announce service availability to the network. This advanced scenario is beyond the scope of this post.


Macvlan vs Ipvlan

Macvlan and ipvlan cannot be used on the same parent interface at the same time.

Use Ipvlan when:

  • Parent interface is wireless.
  • Your parent interface's performance drops because you have exceeded the number of different MAC addresses it supports. For production, you should consider swapping your NIC for a better one and using macvlans.
  • Physical switch limits the number of MAC addresses allowed on a port (Port Security). For production, you should solve this policy issue with your network administrator and use macvlans.
  • You run an advanced network scenario, such as advertising the service you run in a VM or container with a BGP daemon running in the same VM or container.

Use Macvlan:

  • In every other scenario.

Translated from: https://hicu.be/macvlan-vs-ipvlan
