Netavark and Aardvark-dns v1.14 are out. Thanks to our contributors, this Netavark release is on the bigger side and brings quite a few new features. Let's have a look; all of the features assume that you are also using Podman v5.4.
DHCP Hostname
Netavark now sends the container hostname as part of the DHCP request, so your DHCP server knows the hostname and can show it in its interface. This makes it easy to map an IP address back to a container, assuming you set a custom hostname. If you do not wish to set a custom hostname for each container, a new option, container_name_as_hostname, was added to the [containers] section of the containers.conf file. With it enabled, a container without an explicit hostname defaults to the container name instead of the first 12 characters of the container ID.
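As a minimal sketch, enabling this could look like the following, for example in a drop-in file such as /etc/containers/containers.conf.d/99-hostname.conf (the file name is just an example):
[containers]
container_name_as_hostname = true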
Unmanaged bridge option
The bridge network driver received a few new features. It now supports a new option called mode, which can be set to managed (the default and current behavior) or unmanaged. In unmanaged mode, the bridge driver expects the bridge interface to already exist on the host; it will only add the Virtual Ethernet Device (veth) pair and IP addresses based on your configuration. It will not configure any sysctl options or create any firewall rules. This mode allows users to avoid Network Address Translation (NAT) for the address and expose the containers directly to their LAN via the bridge. This is similar to how the macvlan driver works; however, with macvlan the host namespace is bypassed, and no connection to the host is possible. With the bridge, host connectivity works, and your host-side firewall rules will apply.
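Because unmanaged mode expects the bridge to already exist, you have to create it on the host yourself first. A minimal sketch with plain iproute2, assuming enp1s0 is your LAN interface (on a real system you would more likely configure this persistently via NetworkManager or systemd-networkd, and the host IP would then move to br0):
ip link add br0 type bridge
ip link set enp1s0 master br0
ip link set br0 up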
Here is an example of how to use it; I use the bridge with the name br0:
[root@fedora ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
link/ether 52:57:00:72:2c:5c brd ff:ff:ff:ff:ff:ff
altname enx525700722c5c
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:57:00:72:2c:5c brd ff:ff:ff:ff:ff:ff
inet 192.168.122.18/24 brd 192.168.122.255 scope global dynamic noprefixroute br0
valid_lft 1842sec preferred_lft 1842sec
inet6 fe80::e819:63e0:d546:ce4d/64 scope link noprefixroute
valid_lft forever preferred_lft forever
[root@fedora ~]# podman network create --interface-name br0 --opt mode=unmanaged --disable-dns unmanaged-br0
unmanaged-br0
[root@fedora ~]# podman run --network unmanaged-br0 --name c1 -d quay.io/libpod/testimage:20241011 top
d1c3623a0257c306c11f54dd343eb2224846d61af644bb552af445ca9af85f91
[root@fedora ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
link/ether 52:57:00:72:2c:5c brd ff:ff:ff:ff:ff:ff
altname enx525700722c5c
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:57:00:72:2c:5c brd ff:ff:ff:ff:ff:ff
inet 192.168.122.18/24 brd 192.168.122.255 scope global dynamic noprefixroute br0
valid_lft 3552sec preferred_lft 3552sec
inet6 fe80::e819:63e0:d546:ce4d/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4: veth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
link/ether 72:f6:6c:e0:28:bc brd ff:ff:ff:ff:ff:ff link-netns netns-ffab1a5f-7d3f-8211-caa8-1acceea8d53c
inet6 fe80::8cb9:e7ff:fe90:3354/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
[root@fedora ~]# podman exec c1 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host proto kernel_lo
valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether fe:d6:a4:f0:74:11 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.89.0.2/24 brd 10.89.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::fcd6:a4ff:fef0:7411/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
Note that there are still two issues when creating a network in unmanaged mode. First, if you try to specify a subnet for the network that is already used on the host, it will fail. One workaround is to manually fix the subnet in the network config file after the network is created (https://github.com/containers/common/issues/2322). Second, aardvark-dns cannot be used with it right now if the gateway IP of the network is not assigned to the bridge, so you have to use --disable-dns when creating the network (https://github.com/containers/netavark/issues/1177). I plan on fixing these problems in a future release.
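As a sketch of that first workaround (assuming a rootful setup, where network configs are stored as JSON under /etc/containers/networks/; the exact field names may differ, so check the file on your system): create the network with a free placeholder subnet and then edit the subnet and gateway values in the generated file to match your LAN, e.g.
podman network create --interface-name br0 --opt mode=unmanaged --disable-dns unmanaged-br0
vi /etc/containers/networks/unmanaged-br0.json   # adjust "subnet" and "gateway" under "subnets"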
DHCP with the bridge driver
Until now, DHCP could only be used with the macvlan driver. Now it is also possible to use it with the bridge driver when combined with the unmanaged mode.
First, make sure you have the netavark-dhcp-proxy socket enabled/started, just as is needed with macvlan.
[root@fedora ~]# systemctl start netavark-dhcp-proxy.socket
[root@fedora ~]# podman network create --interface-name br0 --opt mode=unmanaged --ipam-driver dhcp --disable-dns dhcp-br0
dhcp-br0
[root@fedora ~]# podman run --network dhcp-br0 --name c1 quay.io/libpod/testimage:20241011 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host proto kernel_lo
valid_lft forever preferred_lft forever
2: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 4a:5d:ee:60:48:fd brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.122.159/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::485d:eeff:fe60:48fd/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
[root@fedora ~]# podman start --attach c1
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host proto kernel_lo
valid_lft forever preferred_lft forever
2: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether a6:54:f3:5e:78:29 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.122.159/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::a454:f3ff:fe5e:7829/64 scope link tentative proto kernel_ll
valid_lft forever preferred_lft forever
As seen, the container was started twice, and in both cases the same IP address was assigned even though the MAC address was different. This is because we now use the container ID as the DHCP client identifier, so the DHCP server keeps handing out the same IP address to the same container.
VLAN support with the bridge driver
In more complicated network setups you might have a bridge interface that makes use of VLANs for better network isolation; in my case I use br-vlan.
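As a minimal sketch of how such a VLAN-filtering bridge could be created with iproute2 (attaching and tagging the uplink ports is site-specific and left out here):
ip link add br-vlan type bridge vlan_filtering 1
ip link set br-vlan up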
[root@fedora ~]# podman network create --interface-name br-vlan --opt mode=unmanaged --opt vlan=10 --disable-dns vlan10
vlan10
[root@fedora ~]# podman network create --interface-name br-vlan --opt mode=unmanaged --opt vlan=20 --disable-dns vlan20
vlan20
[root@fedora ~]# podman run --network vlan10 -d quay.io/libpod/testimage:20241011 top
1c342a2419a7dbdfccccd0a7d08856653bbf09214e8f535a2f1b18932bb6025e
[root@fedora ~]# podman run --network vlan20 -d quay.io/libpod/testimage:20241011 top
a18795cde2810ed35374be51511d49c84555e1149d385ada03e9865fd624144a
[root@fedora ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
link/ether 52:57:00:72:2c:5c brd ff:ff:ff:ff:ff:ff
altname enx525700722c5c
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:57:00:72:2c:5c brd ff:ff:ff:ff:ff:ff
inet 192.168.122.18/24 brd 192.168.122.255 scope global dynamic noprefixroute br0
valid_lft 2133sec preferred_lft 2133sec
inet6 fe80::e819:63e0:d546:ce4d/64 scope link noprefixroute
valid_lft forever preferred_lft forever
9: br-vlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 1e:3e:f7:25:9b:8a brd ff:ff:ff:ff:ff:ff
inet6 fe80::1c3e:f7ff:fe25:9b8a/64 scope link tentative proto kernel_ll
valid_lft forever preferred_lft forever
11: veth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-vlan state UP group default qlen 1000
link/ether 72:f6:6c:e0:28:bc brd ff:ff:ff:ff:ff:ff link-netns netns-5ad070e5-f1c2-e238-0fa9-db65111ef764
inet6 fe80::2c65:4bff:fec8:f91d/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
12: veth1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-vlan state UP group default qlen 1000
link/ether 1e:3e:f7:25:9b:8a brd ff:ff:ff:ff:ff:ff link-netns netns-e8b58acc-1cf9-ecd6-c190-1cee21b4c7cf
inet6 fe80::b01d:e3ff:febd:e868/64 scope link proto kernel_ll
valid_lft forever preferred_lft forever
[root@fedora ~]# bridge vlan show
port vlan-id
enp1s0 1 PVID Egress Untagged
br-vlan 1 PVID Egress Untagged
veth0 1 Egress Untagged
10 PVID Egress Untagged
veth1 1 Egress Untagged
20 PVID Egress Untagged
As you can see, veth0 has been assigned VLAN 10 on the bridge, and veth1 for the second container VLAN 20. They also stay attached to VLAN 1 by default; that is because the kernel always assigns the default PVID to an interface attached to the bridge. That can be avoided by setting the default PVID to 0, e.g. ip link set br-vlan type bridge vlan_default_pvid 0.
Firewalld
The firewalld driver was improved and major outstanding bugs were addressed, but it is still considered experimental. A new man page, netavark-firewalld(7), has been added to document some of the firewalld interactions.
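If you want to try it out, the firewall driver can be selected via the firewall_driver option in the [network] section of containers.conf; a minimal sketch:
[network]
firewall_driver = "firewalld"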
Wrap up
The new mode=unmanaged bridge option allows for better integration into existing network setups that already have a bridge interface. It does not create firewall rules, which means an admin can configure rules however they like without worrying about Podman bypassing them. With the options to use DHCP and VLANs, it can be integrated even further into the existing network setup.
If you encounter any problems or have other ideas on what we can do, don't hesitate to reach out on GitHub: https://github.com/containers/netavark/issues (for bugs and feature requests) and https://github.com/containers/netavark/discussions (for general questions).