I've moved to having containers be first-class citizens on my home network, so any local machine (laptop, phone, tablet) can communicate directly with them all, but they're not (by default) exposed to the wider Internet. Here's why, and how.
After I moved containers from Docker to Podman and systemd, it became much more convenient to run web apps on my home server, but the default approach to networking (each container gets an address on a private network between the host server and the containers) meant tedious work (maintaining and reconfiguring an HTTP reverse proxy) to make them reachable by other devices. A more attractive arrangement would be for each container to receive an IP from the range used by my home LAN, making it automatically addressable from any device on it.
To make the containers first-class citizens on my home LAN, first I needed to configure a Linux network bridge and attach the host machine's interface to it (something I've done many times before); then define a new Podman network of type "bridge". The podman-network-create man page serves as a reference, but the blog post "Exposing Podman containers fully on the network" is an easier read.
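As a sketch of the host-side bridge configuration, assuming systemd-networkd manages the server's networking and its physical NIC is named `eno1` (the bridge name `br0` and the interface name are illustrative assumptions, not from the original setup):

```ini
# /etc/systemd/network/br0.netdev -- define the bridge device
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/10-eno1.network -- enslave the physical NIC to the bridge
[Match]
Name=eno1

[Network]
Bridge=br0

# /etc/systemd/network/20-br0.network -- the host's own address now lives on the bridge
[Match]
Name=br0

[Network]
DHCP=yes
```

After `systemctl restart systemd-networkd`, the host's LAN address sits on `br0` rather than on the physical interface, so containers attached to the bridge share the same L2 segment as the rest of the LAN.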
I've opted to choose IP addresses for each container by hand. The Podman network is narrowly scoped to a range of IPs that sit within the subnet my ISP-provided router uses, but outside the range of IPs that it allocates via DHCP.
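Expressed as a command, that might look like the following. The network name `bridge_local` and the container address come from the `podman run` invocation later in the post; the `br0` interface name, the `192.168.1.0/24` subnet, the gateway, and the reserved `--ip-range` are illustrative assumptions, and the exact flags may vary across Podman versions (see podman-network-create(1)):

```shell
# Create a Podman network of type "bridge" that attaches containers to
# an existing Linux bridge (br0) instead of a private NAT'd network.
# --ip-range reserves a small sub-range of the LAN subnet that the
# router's DHCP server does not hand out.
podman network create \
  --driver bridge \
  --interface-name br0 \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  --ip-range 192.168.1.32/28 \
  bridge_local
```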
When I start up a container by hand for the first time, I choose a free IP from the sub-range by hand and add a line to /etc/avahi/hosts on the parent host.
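For example, an entry for the OctoPrint container (the name and address here are simply the values used in the `podman run` command in this post) looks like any other Avahi static host entry:

```
# /etc/avahi/hosts -- static mDNS hostnames published by this machine
192.168.1.33 octoprint.local
```

avahi-daemon picks the file up and answers mDNS queries for `octoprint.local` on the LAN, so no per-device configuration is needed.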
I then start the container specifying that address, e.g.

```shell
podman run --rm -d --name octoprint \
  --network bridge_local --ip 192.168.1.33 \
  docker.io/octoprint/octoprint
```
I can now access that container from any device in my house (laptop, phone, tablet).
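A quick sanity check from any mDNS-capable device on the LAN might look like this (assuming the server's /etc/avahi/hosts maps `octoprint.local` to the container's address, and that the container serves HTTP on port 80; both are assumptions about this particular setup):

```shell
# Resolve and reach the container by its published mDNS name
ping -c 1 octoprint.local

# Or hit the web UI directly by its static IP
curl -sI http://192.168.1.33/
```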
Although it's not a huge burden, it would be nice to not need to statically define the addresses in /etc/avahi/hosts (perhaps via "IPAM", letting Podman allocate addresses automatically). I've also been exploring some related improvements (which should be the subject of a future blog post), and combining them with this setup would be worthwhile.