Big Blue Button, ufw and containers

A few weeks ago I rented a server to run Big Blue Button. Besides Jitsi Meet, Big Blue Button is one of the most feature-rich free software video conferencing solutions.

Big Blue Button is also a bolted-together mess of many components that barely runs in any environment other than the one the developers intended. Actually, it barely runs at all. 🙂

To be fair, there are a lot of components involved, and there aren't many developers with the necessary insight, so progress is slow. Until a few weeks ago BBB had to be run on Ubuntu 16.04 Xenial, but the install script has now been adapted to Ubuntu 18.04 Bionic. Well, it's 2021 already and a lot of people expect to be able to install BBB on Ubuntu 20.04 Focal. But as I said, progress is slow.

As there are many components, the team strongly recommends firewalling the server. Everyone does that, don't they, mh? Well, they decided to use ufw for that. That wouldn't have been a problem if I hadn't had the idea to set up some additional containers on the machine to make it a little more useful.
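
For reference, the ruleset the BBB install instructions set up boils down to something like this (quoted from memory, so take it as a sketch rather than the exact rules):

ufw allow OpenSSH
ufw allow "Nginx Full"
ufw allow 16384:32768/udp
ufw enable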

When containers are mentioned, almost everyone thinks of docker containers, but there are at least two more technologies available on Linux: the aptly named lxc (Linux Containers) and systemd's approach, nspawn. As the whole world uses docker and one of my favourite (ex) colleagues prefers lxc, I decided to have some fun with nspawn. But I guess many of the problems I had would also appear with lxc or docker.
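
For context, the additional containers are plain nspawn machines. A minimal sketch of how one might be created looks roughly like this (mycontainer and the Debian suite are just placeholders, not my actual setup):

# create a root filesystem for the container
debootstrap stable /var/lib/machines/mycontainer
# boot it with its own virtual ethernet link (the host side shows up as a ve-* interface)
systemd-nspawn --boot --machine=mycontainer --network-veth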

Actually, BBB itself makes use of some docker containers, so that might have added to my problems, but I'm not sure about that, as no solution or workaround required touching their config. I guess their presence just made my problems seem more complex than they actually were.

The symptoms I observed were simple: my containers didn't get an IP address. They had no network connectivity.
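
To make the symptom concrete: from the host you can peek into a container and check its addresses, for example like this (mycontainer is again a placeholder):

machinectl list
machinectl shell mycontainer /usr/bin/ip -br addr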

The main culprit, of course, was ufw. BBB's default firewall rules only allow ssh, http, https and a range of ports needed for WebRTC. ufw blocked all traffic from the containers, including DHCP. So I needed to allow DHCP. I wrote an application profile for that (see the sketch below), but that's mostly unnecessary, as we only need to open UDP ports 67 and 68.

ufw allow 67:68/udp

And my containers got their IP addresses!
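
The application profile I mentioned isn't really needed, but for completeness: it would be a small file under /etc/ufw/applications.d, roughly like this (the profile name is my own choice):

# /etc/ufw/applications.d/dhcp
[DHCP]
title=DHCP
description=Dynamic Host Configuration Protocol (server and client ports)
ports=67,68/udp

After that, ufw allow DHCP does the same as the port rule above.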

But they still couldn't do anything. Okay, I could ping other IP addresses.

ping 8.8.8.8

Call me spoiled, but I'd expect ICMP to be blocked if the default policy is to reject. But hey, I was thankful to see that at least something worked. Name resolution, however, didn't work, and allowing port 53 analogously didn't help. With tcpdump I could watch my packets not reaching the bridge interface. The solution was to allow forwarding packets to this port:

ufw route allow 53

And now name resolution worked. I left out the steps in between (I changed the default forward policy and reset it afterwards; more on that below). That means I could ping by providing a hostname. Progress! But I couldn't curl. Not only because I had to install the curl package, but also because the only open ports for my containers were … let me count … oh, port 53.

ufw route allow 80,443/tcp

And now I can curl.
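
In case someone wonders about the forward policy detour: the knob for that is DEFAULT_FORWARD_POLICY in /etc/default/ufw, which I temporarily set to ACCEPT while debugging and put back to DROP afterwards:

# in /etc/default/ufw (followed by "ufw reload"); reset to "DROP" when done
DEFAULT_FORWARD_POLICY="ACCEPT"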

Of course, there's room for improvement. I could limit the rules to my container subnet or even to single containers. That last idea is doomed to fail because of DHCP, though. So I could still limit the forwarding rules to my subnet. But as they only apply to forwarded traffic, and I only want to forward to my containers anyway, there doesn't seem to be any benefit in adding more complexity.
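
Just for completeness, the scoped variant would look roughly like this (the subnet is a placeholder for whatever your container bridge uses):

ufw route allow from 192.168.100.0/24 to any port 53
ufw route allow from 192.168.100.0/24 to any port 80,443 proto tcp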

I guess I got all the search terms into this entry that I tried while searching for a solution to my problem without getting any concrete results. Maybe someone in the same situation will find this helpful.
