Run LXD container apps on separate X server

In an earlier post I went through how to create an LXD container that has graphics access through X11 forwarding. This is a simple method, but if your goal for using containers is to sandbox applications and decrease the attack surface from malicious parties, X11 forwarding isn't really a good idea. X11 is infamous for its lack of security.

X11 security

Essentially, any application on an X server has access to all other applications on the same server, which makes it really easy to log keystrokes and do a lot of other fishy stuff. Combined with allowing access over the network, this could be devastating. It's a bit unjust to blame all this on the X11 protocol designers, since the protocol wasn't developed for the environment it operates in these days, but nevertheless it's a fact that needs to be addressed. X11 has some security extensions that mitigate the problems a bit, but it seems hard to get the control you need for truly sandboxed applications. So, a way to address this is to use separate X servers to make sure that apps do not interact in unwanted ways. One can use VNC, Xephyr, etc. for this. I have chosen Xpra, since it can make "remote" applications blend in like native ones while still using a separate X server.

Container setup

Create a fresh container and do the basic container setup. We will run xterm for demonstration purposes, but of course any X application will work.

$ lxc launch images:ubuntu/trusty/amd64 xpra
$ lxc exec xpra /bin/bash
root@xpra:~# apt-get install xterm openssh-server
root@xpra:~# adduser xpra 
root@xpra:~# exit

Get Xpra

This will set up Xpra on Ubuntu Trusty. Unfortunately the Xpra package on Trusty is really old and buggy, so manual installation is needed. Get the latest xpra package and the python-rencode dependency from https://xpra.org/dists/trusty/main/ and install them. If you are on a newer Ubuntu, the distro-supplied packages may work fine.

Xpra needs to be installed both on the host and in the container, so on both, do:

# cd /tmp
# wget https://xpra.org/dists/trusty/main/binary-amd64/python-rencode_1.0.3-1_amd64.deb
# dpkg -i python-rencode_1.0.3-1_amd64.deb
# apt-get install python-gtkglext1 python-opengl python-lzo python-appindicator libswscale2 libwebp5 libx264-142 libxkbfile1 x11-xserver-utils xvfb python-numpy python-imaging
# wget https://xpra.org/dists/trusty/main/binary-amd64/xpra_0.15.10-1_amd64.deb
# dpkg -i xpra_0.15.10-1_amd64.deb
# cd -

Run xterm

Find the ip address of the container:

$ lxc list | grep xpra
| xpra          | RUNNING | 10.0.3.75 (eth0)  |      | PERSISTENT | 0         |

Invoke xpra on the host, launching a remote app via ssh.

$ xpra start ssh:xpra@10.0.3.75:100 --start-child=xterm --exit-with-children --exit-with-client=yes

There are a lot of options when it comes to invoking xpra and starting up the app. For instance, it's possible to use a TCP socket instead of ssh, and you can choose to keep the application alive and attach to it for recurring startups. Once the packages are installed, launching an app like this is about as simple as it gets.
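
For example, to keep the application alive between sessions, start it without the exit options and attach on demand. A sketch, assuming the same xpra version on host and container:

$ xpra start ssh:xpra@10.0.3.75:100 --start-child=xterm
$ xpra attach ssh:xpra@10.0.3.75:100

Detaching (Ctrl-C) leaves xterm running in the container; attaching again restores the window.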

For creating an application script and desktop binding see Creating an LXD container for graphics applications.


Using audio in LXD containers

PulseAudio can be used to provide audio support to Linux containers. The setup is similar to any other PulseAudio setup over the network. Let's create a container to play with.

lxc launch images:ubuntu/trusty/amd64 pulseaudio
lxc exec pulseaudio /bin/bash
apt-get install openssh-server mpg321 pulseaudio
adduser user
exit

On the host, add to /etc/pulse/system.pa:

load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1;10.0.3.0/24
load-module module-zeroconf-publish

This configures remote access to the local PulseAudio server with an IP-based access list.
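
For the modules to load, restart PulseAudio on the host (or reboot) and verify that it listens for the native protocol, by default on TCP port 4713. A sketch, assuming a per-user daemon:

$ pulseaudio -k && pulseaudio --start
$ pactl list short modules | grep tcp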

Get ip address of container:

lxc info pulseaudio | grep eth0
  eth0: IPV4    10.0.3.188

Copy some mp3 file:

scp /path/to/cool.mp3 user@10.0.3.188:

Play your mp3 from the container:

ssh user@10.0.3.188 PULSE_SERVER=10.0.3.1 mpg321 cool.mp3

The PULSE_SERVER environment variable tells the player in the container to use the PulseAudio server at 10.0.3.1, the IP address of the host.
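
To avoid passing the variable on every invocation, the server address can instead be set persistently inside the container, e.g. in /etc/pulse/client.conf (10.0.3.1 being the host address in this setup):

default-server = 10.0.3.1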

The only thing left is to combine this setup with the setup for graphics support to have a fully functional container. See Creating an LXD container for graphics applications for more info on that topic.


Creating an LXD container for graphics applications

Setting up LXD containers for headless applications is pretty straightforward. Using the graphics display, however, doesn't work out of the box without additional setup. There are different ways to provide graphics access to a container, and which one to choose depends on the needs of the application. Is audio needed? Is hardware acceleration (OpenGL) needed? It all boils down to granting permission to specific host resources that isn't granted by default.

In general, there are two means of providing resources to the container: either give access to the host resources through LXD device configurations, or use network protocols. For basic graphics that don't depend on hardware acceleration we can use the X server's capability of accepting remote connections. That's what we are going to use in this post to set up a container running Firefox that can be launched like any regular application. It's assumed that you have a basic LXD setup. See Using Linux containers and LXD.

Container setup

First we need a basic container setup.

lxc launch images:ubuntu/trusty/amd64 x11gfx
lxc exec x11gfx /bin/bash
apt-get install openssh-server firefox
adduser firefox
exit

X11 setup

X11 forwarding can be used through ssh by just giving the -X flag when invoking ssh. This encrypts all X11 traffic, which is unnecessary overhead here since no traffic will enter the network. Instead, we will enable remote X access on the host and make sure that the container can reach it. Most Linux distributions these days don't allow remote access to the X server by default, and how to change that differs between distributions. The pure X11 way of setting this up is to modify /etc/X11/xinit/xserverrc. Simply remove the "-nolisten tcp" from this line in that file:

exec /usr/bin/X -nolisten tcp "$@"

Usually, starting the X server is controlled by the display manager of the system, so the above will not work in that case. Linux Mint uses the Multi Display Manager (MDM), which is targeted below; a similar setup applies to other display managers, e.g. GDM. Add this to /etc/mdm/mdm.conf:

[security]
DisallowTCP=false

This will remove the "-nolisten tcp" command line flag when starting the X server. We also need to give the container access rights to the X server. Grab the IP address of the container:

lxc info x11gfx | grep eth0
  eth0:   IPV4    10.0.3.9

Grant access rights with xhost:

xhost +10.0.3.9

Verify that the setup works. The host IP address as seen from the container is 10.0.3.1 in this setup:

ssh firefox@10.0.3.9 DISPLAY=10.0.3.1:0 firefox

To make the xhost setting stick after a reboot, add the container to the /etc/X<N>.hosts file, where <N> is the display number, typically 0. Add to /etc/X0.hosts:

10.0.3.9

Application script

Create a startup script as follows:

#!/bin/sh

# Start the container if it isn't already running (output is discarded;
# &> is bash-only, so use POSIX redirection under /bin/sh).
lxc start x11gfx > /dev/null 2>&1
ssh firefox@10.0.3.9 DISPLAY=10.0.3.1:0 firefox --no-remote

The --no-remote switch is needed to prevent Firefox from opening a new tab in an existing instance on the same X server, the host in this case. This is quite surprising behavior that would defeat the purpose of the sandbox container, but the switch solves the problem. What's left is that we don't want to supply our password every time we start the browser. This can be solved in several ways; one solution is to add your ssh key to the container.

ssh-copy-id firefox@10.0.3.9

Desktop binding

The following desktop file will put your new sandboxed browser in the start menu:

[Desktop Entry]
Exec=/path/to/start_script
Icon=/home/user/icons/coolicon.png
Type=Application
Terminal=false
Comment=Run firefox in an LXD container
Name=Firebox
GenericName=Sandboxed Firefox
StartupNotify=false
Categories=Network;WebBrowser;

Put the above content in the file /usr/share/applications/firebox.desktop. This should work for most distributions/desktop environments; it works on Ubuntu at least.

Next time we will look at audio.

For a follow-up post that addresses some of the security issues with this setup, have a look at Run LXD container apps on separate X server.


The Logjam attack

The Logjam attack was reported during the spring, and the researchers have now published their paper about it for everybody to dig deeper into the issue. The authors guess that this might be a method used by the NSA to tap secure connections over the internet. They don't really provide any evidence for this; it's mostly indications and reasoning that this would be in line with what has been reported from leaked documents. That might be interesting in itself, but the technical aspects are just as interesting.

Diffie Hellman

The Logjam attack is related to the Diffie-Hellman key exchange algorithm. Diffie-Hellman is a method for exchanging keys over a public channel and is one of the methods used for this purpose in TLS. There are different flavors of the algorithm, but the common benefit of all of them is referred to as perfect forward secrecy. This means that the key for a session is not reused, so if an attacker manages to break the key for one session it doesn't help in breaking other sessions. RSA key exchange, for instance, does not have this property. The basic exchange works as follows:

  1. If A and B want to communicate, they agree on common parameters g and p, where p is a prime.
  2. A has a private key a, computes α = g^a mod p and sends the result to B.
  3. B has a private key b, computes β = g^b mod p and sends the result to A.
  4. Both parties (and a possible attacker) now have α and β, but only A and B can calculate the common secret S = g^(ab) mod p, by applying their own secret to the result from the other host: A computes S = β^a mod p and B computes S = α^b mod p, arriving at the same secret.
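
As a toy illustration, the exchange can be followed in bash shell arithmetic with deliberately tiny, insecure numbers (all values made up for the example):

$ p=23 g=5 a=6 b=15
$ alpha=$(( g**a % p ))   # A sends alpha = 8 to B
$ beta=$(( g**b % p ))    # B sends beta = 19 to A
$ echo $(( beta**a % p )) $(( alpha**b % p ))
2 2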

The problem with the algorithm is that a good prime p is very slow to generate, which means it has to be precalculated for all practical purposes.

The two legs of the Logjam attack

The Logjam attack is not a weakness of Diffie-Hellman itself but is based on implementation faults and a flaw in the TLS protocol. There are two parts making this attack possible.

  1. Implementations are reusing a small set of primes (p). Proactive breaking of such primes makes it possible to break a lot of TLS communication based on these primes.
  2. The TLS protocol does not include signing of the chosen key exchange parameters, which means that a man in the middle can downgrade to shorter (and weaker) keys.

The researchers' conclusion is that Diffie-Hellman with 1024-bit primes can be broken by a specially crafted (and very expensive) computer within a year. They conclude that such hardware is only feasible for nation-state institutions because of the high cost. The benefit of actually pursuing this for 1024-bit primes rests on the fact that a few primes are so widely used that breaking one gives access to a lot of communication channels. For shorter primes the cost is much lower, and there is still the benefit of a few widely used primes.

Most implementations do not actively choose primes shorter than 1024 bits, but both clients and servers still support them for backward compatibility. The flaw in the TLS protocol allows an attacker to downgrade the chosen key exchange prime to a weaker one without the client or server being able to notice. So even if both the server and the client choose a 1024-bit prime for the exchange, they can end up with a weak 512-bit version. The attacker intercepts the public communication between the client and server and rewrites the messages from the server to force the client to choose a weak prime. This is only possible because there is no signing of the chosen cipher; if signing were in place, the client would be able to verify that the chosen key was actually chosen by the server and not the attacker.
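
To see which ephemeral key a server actually negotiates, something like the following can be used (a sketch, assuming an OpenSSL build whose s_client prints the negotiated temporary key; example.com is a placeholder):

$ openssl s_client -connect example.com:443 -cipher EDH < /dev/null 2> /dev/null | grep 'Server Temp Key'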

Mitigation

Since the attack was published, browsers have been fixing their implementations to disallow keys weaker than 1024 bits and to check for and disallow downgrading. For the servers, it's up to the sysadmins to change the common primes for unique ones, or at least less common ones. To check your own server you can visit https://cipherli.st/. The recommendation is to migrate to 2048-bit primes if possible.
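
Generating a unique 2048-bit group for your own server can be done with OpenSSL, for example:

$ openssl dhparam -out dhparams.pem 2048

and then pointing the web server configuration at the resulting file.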


Encrypting system mail for external email address

Unix systems traditionally use local mail to deliver system messages to the users, and this is true for Linux as well. The problem is that most of us have external mail these days, and if you don't check system mail regularly you may miss important information about the state of the system. If you are administering a bunch of servers, you might need to check several accounts on multiple hosts unless you set up some forwarding to a central mailbox.

I decided it would be great to have these mails delivered to my regular external mailbox that I check every day. Postfix can relay mail via an external SMTP server, so that part didn't look that hard to achieve. I had some requirements on such a solution though:

  1. Storing my mail password on the production host doesn't seem to be that much of a security risk, since it would be readable by root only. I would still prefer to have some more protection in case the server is compromised.
  2. Sending cleartext messages about the internal state of your server across the internet seems like a bad idea, so encryption is needed.

Set up mailer container

To address my first concern I decided to set up a separate Linux container for the mailing service and relay all mail from the main system through the mailer container. This adds a little more protection for my precious mail password, since it would not be directly available to an attacker who managed to get root access to my main server container. In addition, having this central service would allow me to relay mail from multiple hosts easily. To learn about the basics of LXD containers, see my earlier post about LXD.

So, let's create an LXD container:

$ lxc launch images:ubuntu/trusty/i386 mailer
$ lxc exec mailer /bin/bash # get a root shell in the container

There are a lot of guides on how to tweak Postfix into relaying to an external SMTP server, so I will not describe that here. Just duck for some how-tos.

Make sure that Postfix in the mailer container is listening on the local subnet. Modify the /etc/postfix/main.cf file:

inet_interfaces = all
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 10.0.3.0/24

The last subnet corresponds to the subnet shared among the LXD containers on the host and may be adjusted according to the needs of the setup.

The hosts that will use the mailer container as a relay service need the following /etc/postfix/main.cf file:

mynetworks = 127.0.0.0/8 10.0.3.0/24
relay_domains =
# Forward all mail to the relayhost
relayhost = $IP_OF_MAILER
myorigin = /etc/mailname
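
After editing main.cf on either host, reload Postfix for the changes to take effect:

$ service postfix reload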

Encrypt all outgoing emails

Since all mail will go to the same user (me), it can all be encrypted with the same public key. There is a content filter for Postfix called GPG-Mailgate that can encrypt outgoing mail automatically, which fits my use case perfectly. Now, there are a bunch of forks of this filter on GitHub, so make sure you pick the right one. I first started with a Ruby port that I discovered was incomplete and not working at all. The fork I chose was the one that seemed to have the latest additions, but look through the different versions to decide for yourself what you need.

$ git clone https://github.com/uakfdotb/gpg-mailgate

The instructions in the GitHub repository are pretty detailed, so it's easy to follow them and install. For reference, this is what I did:

$ cd gpg-mailgate
$ cp gpg-mailgate.py /usr/local/bin/
$ cp -r GnuPG/ /usr/lib/python2.7/

Create a user that will run the mailgate script:

$ adduser --system gpgmailer

Since Ubuntu preserves the HOME environment variable during sudo, we need to use the GNUPGHOME environment variable to point to the gpgmailer GPG keyring. Another option is to modify the sudoers file to change that behavior.
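
For reference, the sudoers alternative would be a line like the following, added with visudo, which makes sudo always set HOME to the target user's home directory:

Defaults always_set_home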

Import the public key(s) for the recipient(s) of the encrypted mails.

$ sudo -u gpgmailer GNUPGHOME=/home/gpgmailer/.gnupg gpg --import user@example.com-public-key.txt

List the keys in the gpgmailer keyring to make sure everything went okay:

$ sudo -u gpgmailer GNUPGHOME=/home/gpgmailer/.gnupg gpg --list-keys

Then add the following to /etc/postfix/master.cf:

#GPG-Mailgate
gpg-mailgate unix -     n       n       -        -      pipe
  flags= user=gpgmailer argv=/usr/local/bin/gpg-mailgate.py $recipient

127.0.0.1:10028 inet    n       -       n       -        10     smtpd
  -o content_filter=
  -o receive_override_options=no_unknown_recipient_checks,no_header_body_checks
  -o smtpd_helo_restrictions=
  -o smtpd_client_restrictions=
  -o smtpd_sender_restrictions=
  -o smtpd_recipient_restrictions=permit_mynetworks,reject
  -o mynetworks=127.0.0.0/8
  -o smtpd_authorized_xforward_hosts=127.0.0.0/8

And the following line to /etc/postfix/main.cf:

content_filter = gpg-mailgate

Restart postfix:

$ service postfix restart

Now everything should be set up to send encrypted mail to user@example.com, so let's check that it works:

$ sendmail user@example.com
From: user@example.com
Subject: Test mail
This shall be encrypted
.

Note that depending on the SMTP server, mail might be rejected based on the value of the From field. Sending mail to yourself is usually allowed, so the above should work. If allowed, you probably want something more informative as the sender though.

In order to forward mail for local users to the external mail address you need to create aliases in /etc/aliases:

root: user@example.com

The above will forward all mail for the root user to the external user@example.com address. When a new alias is added, the Postfix alias database needs to be updated:

$ postalias /etc/aliases

The last bits need to be done for every user on every host that you want to use this delivery mechanism for.

That’s it. Works like a charm!


Restrict resources for LXD containers

Restricting resource usage is done with profiles in LXD. There is a default profile that could be modified, but it's wise to create new profiles for specific use cases.

Create a new profile:

$ lxc profile create single_core

Restrict memory usage to 500MB:

$ lxc profile set single_core limits.memory 500MB

Note: Early versions of LXD used suffixes without B, i.e. M instead of MB.

Restrict to single core:

$ lxc profile set single_core limits.cpu 1

Note: In early versions of LXD this configuration key was named limits.cpus.

Apply the profile to the container:

$ lxc profile apply trusty-1 single_core
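
The resulting profile can be inspected at any time:

$ lxc profile show single_core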

Autostarting LXD containers

Autostarting containers in LXD is similar to LXC but the config keys have different names. This feature is new in the latest version of LXD, which as of this writing is 0.16.

There are three config options:

LXD key                   Corresponding LXC key   Values
boot.autostart            lxc.start.auto          1 enabled, 0 disabled
boot.autostart.delay      lxc.start.delay         delay in seconds to wait after starting the container
boot.autostart.priority   lxc.start.order         container priority, higher values mean earlier start

The configuration is done as usual with LXD:

$ lxc config set container_name boot.autostart 1
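
The other keys are set the same way, e.g. to start a container early and then wait five seconds before starting the next one:

$ lxc config set container_name boot.autostart.priority 10
$ lxc config set container_name boot.autostart.delay 5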

Using Linux containers and LXD

Linux containers (LXC) can be considered a form of lightweight VMs without the hardware emulation part. Containers are run by the same kernel as the host but are separated into different sandboxed environments to prevent uncontrolled interaction between them and the host. Since there is no hardware emulation, LXC is usually faster than traditional virtualization techniques but has a more limited use case.

LXD

LXD is an initiative by Canonical that aims to provide a better user experience for LXC Linux containers. It also adds some nifty features, like live migration of containers, remote administration, being secure by default, and more.

This guide is meant as an introduction to LXD, with the goal of setting up a server in a container that can be moved between hosts. The setup can be done locally on a laptop and then deployed to the production environment when everything is in place. This guide describes the usage for LXD version 0.15.

Install

Since LXD is currently in heavy development, it isn't available in Ubuntu < 15.04. If you are running an older system (though not older than 14.04), you need to add the backports PPA described further below.

Update: LXD 2.0 has since been released, and stable packages are now available in trusty-backports:

$ sudo apt -t trusty-backports install lxd

This is the older way of installing LXD, which may still work:

$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
$ sudo apt-get update
$ sudo apt-get install lxd lxd-client

Activate your new lxd group membership without logging out and back in:

$ newgrp lxd

2.0 Update: In 2.0 you need to configure network for containers by running:

$ sudo dpkg-reconfigure -p medium lxd

Follow the wizard.

Create container

LXD is image based so we need to have some images to work with:

$ lxc remote add images images.linuxcontainers.org
$ lxc image list images:

This will add a new remote called images and list all the available images in that remote.

Then create a local copy of the desired image:

$ lxc image copy images:ubuntu/trusty/i386 local: --alias=trusty

This will copy a 32-bit Ubuntu 14.04 image locally (the local: target) and call it trusty.

Now we can list our local images:

$ lxc image list

Launch the container with the following command:

$ lxc launch trusty trusty-1
$ lxc list
+-----------+---------+------------+------+-----------+-----------+
|   NAME    |  STATE  |    IPV4    | IPV6 | EPHEMERAL | SNAPSHOTS |
+-----------+---------+------------+------+-----------+-----------+
| trusty-1  | RUNNING | 10.0.3.201 |      | NO        | 0         |
+-----------+---------+------------+------+-----------+-----------+

It is possible to skip downloading the image and launch a container directly from a remote image. The image will be cached on the local machine and will be removed when no new containers have been spawned from it for a while. Once a container is created, the image is no longer needed to run the container.
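
For example, launching directly from the remote image without the explicit copy step:

$ lxc launch images:ubuntu/trusty/i386 trusty-2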

Containers can be started and stopped:

$ lxc stop trusty-1
$ lxc start trusty-1

Working with the container

To get a command prompt in the container:

$ lxc exec trusty-1 /bin/bash

Add a user with sudo privileges:

# adduser me
# adduser me sudo

Inside the container, install some software:

# apt-get install openssh-server

Press Ctrl-D to get out of the container shell and ssh in from the host:

$ ssh me@10.0.3.201

LXD provides an interface for uploading and downloading files via the lxc file subcommand. It's a bit restricted and does not work on directories, so I copy files directly into the container file system instead; or you can use scp, rsync, or whatever you prefer. The container file system is located at /var/lib/lxd/containers/trusty-1/rootfs.
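
For single files, the lxc file subcommand looks like this (a sketch; the paths are made up):

$ lxc file push some_file trusty-1/home/me/some_file
$ lxc file pull trusty-1/home/me/some_file .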

Networking

Setting up network services works the same as for an ordinary LXC container. I use port forwarding with iptables to redirect the wanted services to the container IP.

$ sudo iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to-destination 10.0.3.201:80
$ sudo iptables -A FORWARD -p tcp -d 10.0.3.201 --dport 80 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

This will forward http traffic on eth0 to the container at 10.0.3.201 (trusty-1). Since PREROUTING does not work on the loopback device we need to add an OUTPUT rule to do some local testing:

$ sudo iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j DNAT --to-destination 10.0.3.201:80

In order to make the rules persistent you need the iptables-persistent Ubuntu package. Then save the rules:

$ sudo bash -c 'iptables-save > /etc/iptables/rules.v4'

The host networking configuration obviously needs to be done on every host that the container will run on.

Samba

Since the container by default lives on a separate network, accessing local Samba servers on the LAN will not work. This can be solved in a number of ways:

  1. Bridge the host network to share the IP address between the host and the container instead of using the default configuration. This will make Samba work as it does on the host.
  2. Mount the Samba share on the host and then give the container access to the host mount point.

I chose option 2. This was actually a bit tricky to set up with the appropriate permissions for the container, but here we go:

Mount the Samba share on the host. This needs something like the following in the /etc/fstab file:

//$IP_OF_SAMBA_SERVER/share /mnt/share cifs uid=root,gid=lxd_guest,username=user,password=secret,file_mode=0770,dir_mode=0770     0      0

The exact options will vary with the use case. The key here is to set the gid of the mount to a specific group used for sharing with LXD containers and to give the latter the needed permissions.

$ sudo addgroup lxd_guest
$ sudo mkdir /mnt/share
$ sudo mount /mnt/share

Let's assume that this new group lxd_guest has gid 1002. Now we must map this host gid to a gid in the LXD container to give that group access to the share. First we need to allow the host root user to map the gid of the lxd_guest group into a user namespace. This is needed since the LXD daemon runs as root. Add a new line in /etc/subgid:

root:1002:1

This means that root can use a range of size 1 starting at 1002. Now we need to map the lxd_guest gid to a container gid that is not already mapped into the container. Since the range 100000-165535 is mapped by default, we need to choose something outside of that.

$ lxc config set trusty-1 raw.lxc "lxc.id_map = g 200000 1002 1"

The above will map gid 1002 on the host to gid 200000 in the container (again a range of size 1). The next step is to mount the host directory for the Samba share into the container:

$ lxc config device add trusty-1 share disk source=/mnt/share path=/mnt/samba_share

Restart the container. Now, in the container, we need to create a group with gid 200000:

$ sudo addgroup --gid 200000 lxd_host
$ sudo adduser me lxd_host

Log out and log in again to the container, and listing the files in the share should be possible.

Note that the procedure for mapping local host directories is the same, except for the Samba mounting part.

Copying a container to a remote host

Some setup is needed in order to enable remote access to LXD. First, LXD needs to be installed on the target host. Then remote access needs to be enabled on both the source and target machines:

$ lxc config set core.trust_password somepassword
$ lxc config set core.https_address "ip_or_resolvable_hostname"

The setup is needed on both machines since it seems like the target actually pulls the container from the source. Take this with a pinch of salt since I have not confirmed it; in any case, the target needs remote access to the source machine for the copy to work.

Next step is to setup the remote target on the source machine:

$ lxc remote add target-host https://$IP_OR_HOSTNAME --password='somepassword'

The source also needs to be set up as a remote on the source machine, in order for the target machine to make contact with the source during the transfer. This remote must use an address (IP or hostname) that is reachable from the target machine.

$ lxc remote add localhost https://$IP_OR_HOSTNAME:8443

Then, we are ready to make the copy:

$ lxc copy localhost:trusty-1 target-host:trusty-1

Note that this is based on version 0.15 of LXD; in later releases, the setup and use of localhost should not be needed anymore, as far as I understand.

Let’s check that the container arrived safely:

$ lxc list target-host:

Now, the iptables rules, Samba mounts, and other settings done on the source host need to be done on the target host as well.
