Linux containers (LXC) on an Amazon EC2 server (cloud inside cloud)
Amazon AWS announced support for PV-GRUB kernels a week ago, so it is now possible to run your own kernel with new features like btrfs, cgroups, namespaces, and high-resolution timers. Just be aware that AWS still runs a very old Xen version, so you will need to patch the stock kernel to make it bootable.
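For reference, PV-GRUB boots whatever a GRUB-legacy menu.lst inside your image points at. A minimal sketch (the kernel and initrd file names and the root device here are assumptions; match them to your image):

default 0
timeout 1
title My custom kernel
root (hd0)
kernel /boot/vmlinuz root=/dev/sda1 ro console=hvc0
initrd /boot/initrd.img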
Here is a step-by-step guide on how to set up a Linux container on top of EC2. Since EC2 itself is a virtualized environment, it is almost impossible to run other VM technologies on top of it; LXC works because it is OS-level virtualization and needs no hardware support. You can read these general guides [1] [2] on how to set up a Linux container.
Step 1: Host VM
In order to run LXC, the host kernel needs to support cgroups and namespaces. Ubuntu 10.04 Lucid or newer includes them. I also made two public Arch Linux AMIs that support all these features; you can find them here.
Mount the cgroup filesystem at /cgroup:
mkdir /cgroup
mount -t cgroup none /cgroup
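To make the mount survive a reboot, you can also add a line to /etc/fstab (same options as the command above):

none /cgroup cgroup defaults 0 0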
For networking to work you will need two packages: iptables and bridge-utils. Ubuntu has an lxc package, but on Arch Linux you will need to build it from the AUR.
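Once lxc is installed, recent releases ship an lxc-checkconfig script that verifies the running kernel has all the required cgroup and namespace options enabled:

lxc-checkconfig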
Bring up the virtual bridge interface; you only need one for all your containers.
brctl addbr br0
ifconfig br0 192.168.3.1 up
Of course, you can pick another network address. You should skip the step mentioned in other guides that adds your physical interface to the bridge, such as "brctl addif br0 eth0", because Amazon will not route your private packets.
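You can verify the bridge is up and, as intended, has no physical interface attached:

brctl show
ifconfig br0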
Step 2: Filesystem
An LXC installation should already include templates for some popular Linux distributions; see the guides I mentioned above. For Arch Linux you can use my chroot script and patch.
I am not sure how to manually set up the network for every distribution; you could also run a dhcpd on the host to serve the container.
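As a rough sketch, the generic equivalent inside any container is the classic ifconfig/route pair, using the addresses from this guide:

ifconfig eth0 192.168.3.20 netmask 255.255.255.0 up
route add default gw 192.168.3.1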
On Arch Linux you can disable the eth0 setup but enable the default route like this in rc.conf:
INTERFACES=()
gateway="default gw 192.168.3.1"
ROUTES=(gateway)
Here I assume your new root filesystem lives in /mnt/mini. Your LXC config file (saved as /mnt/config in the commands below) should look like this:
lxc.utsname = mini
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 = 192.168.3.20/24
lxc.mount.entry = none /mnt/mini/dev/pts devpts newinstance 0 0
lxc.mount.entry = none /mnt/mini/proc proc defaults 0 0
lxc.mount.entry = none /mnt/mini/sys sysfs defaults 0 0
lxc.mount.entry = none /mnt/mini/dev/shm tmpfs defaults 0 0
lxc.rootfs = /mnt/mini
lxc.tty = 3
lxc.pts = 1024
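Note the mount entries above assume those directories already exist inside the rootfs; if your template did not create them, make them first:

mkdir -p /mnt/mini/dev/pts /mnt/mini/dev/shm /mnt/mini/proc /mnt/mini/sys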
Step 3: Container network
For the network inside the container to work, you still need two more things: a DNS resolver for the container, and NAT on the host.
cp /etc/resolv.conf /mnt/mini/etc
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sysctl -w net.ipv4.ip_forward=1
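If you want the NAT rule a bit tighter, restrict masquerading to the container subnet, and make forwarding persistent across reboots (a variant, not required):

iptables -t nat -A POSTROUTING -s 192.168.3.0/24 -o eth0 -j MASQUERADE
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf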
Now you can create and start your container.
lxc-create -f /mnt/config -n mini
lxc-start -n mini
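If the start looks stuck, lxc-info will tell you whether the container actually reached the RUNNING state:

lxc-info -n mini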
If there are no errors during container boot, you can attach to your container's console.
lxc-console -n mini
Log in as root with no password.
ping www.google.com
If you are lucky, the pings should go through. It may take a moment for the container to pick up the new route.
Step 4: Run services inside the container
The main reason most people set up a container inside EC2 is probably to jail network daemons. But your container only has an unreachable private address, so do it home-router style: port forwarding with iptables.
For example, start your httpd daemon inside the container as usual, then run this on the host:
iptables -t nat -A PREROUTING -i eth0 -p tcp \
  --dport 80 -j DNAT --to-destination 192.168.3.20
Now you should be able to reach your container from the instance's public IP.
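To test end to end, from your own machine (replace the hostname with your instance's public DNS name, and make sure port 80 is open in the instance's security group):

curl http://ec2-your-instance.compute.amazonaws.com/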