Linux Fu: Docking Made Easy


Most computer operating systems suffer from some version of “DLL hell” – a decidedly Windows term, but the concept applies across the board. Consider doing embedded development which usually requires a few specialized tools. You write the code for your embedded system, ship it, and forget about it for a few years. Then the end user wants a change. Too bad the compiler you used requires a library that has changed and therefore no longer works. Oh, and the device programmer needs an older version of the USB library. Python build tools use Python 2 but your system has evolved. If the tools you need are no longer on the computer, you may have trouble finding the installation media and getting it to work. Even worse if you don’t even have the right kind of computer for it anymore.

One way to solve this problem is to encapsulate all your development projects in a virtual machine. Then you can save the VM and it includes an operating system, all the right libraries, and is basically a snapshot of how the project was that you can rebuild at any time and on almost any computer.

In theory, that’s great, but it’s a lot of work and a lot of storage. You need to install an operating system and all the tools. Sure, you can get a device image, but if you’re working on a lot of projects, you’ll have a bunch of copies of the same thing cluttering things up. You’ll also have to keep all those copies up to date if you need to update things which, okay, is sort of what you’re probably trying to avoid, but sometimes you have to.

Docker is a bit lighter than a virtual machine. You’re still running your system’s normal kernel, but you can essentially have a virtual environment running in an instant on top of that kernel. Also, Docker only stores differences between things. So if you have ten copies of an operating system, you’ll store it only once plus small differences for each instance.
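To see that sharing for yourself, you can ask Docker to list the layers that make up an image; every container built on the same image reuses those layers instead of copying them. The image tag below is just an example:

docker images                 # list local images; shared layers are only stored once
docker history ubuntu:22.04   # show the individual layers inside one image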

The downside is that Docker is a bit tricky to set up. You need to map storage and configure networking, among other things. I recently came across a project called Dock that tries to make the common cases easier, so you can quickly launch a Docker container to do some work without any real configuration. I made some minor edits and forked the project but, for now, the origin has synced with my fork, so you can stick with the original link.

Documentation

The documentation on the GitHub page is a bit sparse, but the author has a good page of instructions and videos. On the other hand, it is very easy to get started. Create a directory and go there (or go to an existing directory). Run dock and you will get a running Docker container named after the directory. The directory itself is mounted inside the container, and you get an ssh connection into it.
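For example, a first session might look something like this. The details are only illustrative, but the point is that one command drops you into a container with your working directory mounted inside it:

mkdir -p ~/projects/blinky && cd ~/projects/blinky
dock     # creates (or reuses) a container named after the directory and connects via ssh
ls       # the host directory’s files show up inside the container
exit     # leave the container; your files remain on the host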

By default, the container contains some nice tools, but I wanted different ones. No problem; you can install whatever you want. You can also commit an image set up however you like and name it as the default in the configuration files. You can name a specific image on the command line, too, if you want. That means it is possible to keep several canned configurations for new machines. You might, for example, have one directory use an image set up for Linux development and another use one set up for ARM development. Finally, you can also name the container if you don’t want it tied to the current directory.

Images

Dock requires special Docker images, which the system knows how to install automatically. There are configurations for Ubuntu, Python, Perl, Ruby, Rust, and some networking and database development environments. Of course, you can customize any of them and commit the result as a new image, as long as you don’t mess up the things the tool depends on (the SSH server, for example).
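The commit step itself is just ordinary Docker. As a sketch, assuming a container named Hackaday and a made-up image name:

docker commit Hackaday my-arm-dev   # snapshot the customized container as a new image
docker images my-arm-dev            # confirm the new image exists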

If you want to change the default image, you can do so in ~/.dockrc. This file also contains a prefix that the system strips from container names. That way, a directory named Hackaday won’t end up with a container named something like Hackaday.alw.home; it will simply be Hackaday. For example, since I keep all my work in /home/alw/projects, I should use that as the prefix so the word projects doesn’t show up in every container name but, as you can see in the attached screenshot, I haven’t, so the container winds up as Hackaday.projects.
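Since Dock is a Bash script, I’d expect ~/.dockrc to be plain shell variable assignments. Here is a sketch of what mine might look like; the values are only examples, and anything beyond the variables the project documents is an assumption:

# ~/.dockrc
DOCK_PATH="$HOME/.dock"                    # where Dock keeps its files (example value)
IGNORED_PREFIX_PATH="/home/alw/projects"   # prefix stripped from container names
DEFAULT_IMAGE_NAME="ubuntu"                # image used when none is named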

Options and aliases

You can see the available options on the help page. You can select a user, mount additional storage volumes, set a few container options, and more. I haven’t tried it, but it seems there is also a $DEFAULT_MOUNT_OPTIONS variable to add other directories to all containers.
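My guess is that the variable just passes extra volume flags through to Docker, so something like the following line in ~/.dockrc might do it, but the exact syntax here is an assumption I haven’t tested:

DEFAULT_MOUNT_OPTIONS="-v /opt/toolchains:/opt/toolchains"   # mount a shared directory into every container (assumed syntax)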

My fork adds a few extra options that aren’t absolutely necessary. For one, -h will give you a short help screen, while -U will give you a longer help screen. Unknown options also trigger a help message. I also added a -I option that writes a source line you can add to your shell profile to pick up the optional aliases.
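The intent is that you run it once and append the output to your shell startup file, something like this (adjust for your shell):

dock -I >> ~/.bashrc   # add the line that sources the optional aliases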

The optional aliases are handy for you, but Dock itself doesn’t use them, so you don’t have to install them. They do things like list Docker containers and images or commit a container without you having to remember the full Docker syntax. Of course, you can still use the usual Docker commands if you prefer.
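I won’t reproduce the alias definitions here, but the plain Docker commands they save you from typing are roughly these (the container and image names are examples):

docker ps -a                         # list containers (what dc shows you)
docker stop Hackaday                 # stop a container (dcs)
docker rm Hackaday                   # remove a container (dcr)
docker commit Hackaday my-snapshot   # save a container as a new image (dcom)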

Try it!

To get started, you need to install Docker. Usually, by default, only root can use Docker, but some setups have a particular group (docker, in the commands below) that you can join if you want to use it from your own user ID. It’s easy to set up if you want. For instance:

sudo usermod -aG docker $(whoami)      # add your user to the docker group
newgrp docker                          # pick up the new group without logging back in
sudo systemctl unmask docker.service   # make sure the service and socket aren't masked
sudo systemctl unmask docker.socket
sudo systemctl start docker.service    # start the Docker daemon

From there, follow the setup on the project page and be sure to edit your ~/.dockrc file. Make sure the DOCK_PATH, IGNORED_PREFIX_PATH, and DEFAULT_IMAGE_NAME variables are set correctly, among other things.

Once configured, create a test directory, type dock, and enjoy your new kind of virtual machine. If you have the aliases configured, use dc to show the containers you have. You can use dcs or dcr to stop or remove a “virtual machine”. If you want to save the current container as an image, try dcom to commit the container.

Sometimes you want to enter the fake machine as root. You can use dock-r as a shortcut for dock -u root assuming you have the aliases installed.

It’s hard to imagine how much easier it could be. Since everything is written as a Bash script, it’s easy to add options (as I did). It also seems like it would be quite easy to adapt existing Docker images to work with Dock. Remember that you can commit a container to use as a template for future containers.
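For example, adapting a stock image probably amounts to adding the SSH server mentioned earlier and committing the result. A rough, untested sketch using only standard Docker commands (package names assume a Debian/Ubuntu base):

docker run -it --name adapt-me ubuntu:22.04 bash -c "apt-get update && apt-get install -y openssh-server"
docker commit adapt-me my-dock-ready   # save the modified container as an image Dock could use
docker rm adapt-me                     # clean up the throwaway container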

If you want more information about Docker, Ben James has a good write-up. You can even use Docker to simplify breakback.
