In my last post, I demonstrated creating a snap package for an application available in the archive. I left that application unconfined, which isn't acceptable in the long run if we want our system to be secure. In a few steps, we can add the necessary components to confine our pingus snap.
For reference, this is the original snapcraft.yaml file for creating a pingus snap, except that we’ve updated the confinement property to strict:
snapcraft.yaml
name: pingus
version: '0.1'
summary: Free Lemmings(TM) clone
description: |
  Pingus is a free clone of the popular Lemmings game.

  Your goal is to guide a horde of penguins through a world full of obstacles
  and penguin traps to safety. Although penguins (unlike lemmings) are rather
  smart, they sometimes lack the necessary overview and now rely on you to
  save them.
grade: devel
confinement: strict

parts:
  archives:
    plugin: nil
    stage-packages:
      - pingus
  env:
    plugin: dump
    organize:
      pingus.wrapper: usr/bin/pingus

apps:
  pingus:
    command: pingus
If you’re feeling bold, you can build and install the snap from here, but be warned that this led me into an ncurses nightmare that I had to forcibly kill. That’s largely because pingus depends on X11, which is not available out-of-the-box once we’ve confined our snap. If we want to use X11, we’re going to need to connect to it using the snap-land concept of interfaces. Interfaces allow us to access shared resources and connections provided by the system or other snaps. There’s some terminology to grapple with here, but the bottom line is that a “slot” provides an interface which a “plug” connects to. You can see a big list of available interfaces with descriptions on the wiki. Our pingus app will “plug” into the X11 interface’s “slot”:
snapcraft.yaml
# ...
apps:
  pingus:
    command: pingus
    plugs:
      - x11
You can rebuild and install the new snap, passing the --dangerous flag since this is a locally built, unsigned snap. After that, you can verify the interface connection with the snap interfaces command:
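For example (the snap filename is whatever your build produced; mine was an amd64 build):

$ snapcraft
$ sudo snap install --dangerous pingus_0.1_amd64.snap
$ snap interfaces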
Once again: build, install, and run… et voilà! Is it just me, or was that surprisingly painless? Of course, not all applications live such isolated lives. Note that the x11 interface is meant to be a transitional interface, meaning that we would rather our app fully transition to Mir or some alternative. To go a step further with this snap, we could create a snapcraft.yaml that builds from source to get the absolute latest version of our app. At this point, we can change our grade property to stable and feel good about something that we could push to the store for review.
If you haven’t heard, snaps are a new, modern packaging format made by the guys at Ubuntu. Snaps give every app a confined environment to live in, making desktops more secure and dependencies less of a hassle. One common way to create a snap is to simply use existing packages from the Ubuntu archives.
Let’s try to create a snap for the game pingus. pingus is a great little Lemmings clone that we can easily convert to a snap. We’ll start by installing the necessary dependencies for snap building (see the snapcraft website for more):
$ sudo apt install snapcraft
Now we can initialize a project directory with snapcraft:
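For instance (the directory name is arbitrary):

$ mkdir pingus
$ cd pingus
$ snapcraft init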
snapcraft init creates the following sample file to give us an idea of what we’ll need to provide.
snapcraft.yaml
name: my-snap-name # you probably want to 'snapcraft register <name>'
version: '0.1' # just for humans, typically '1.2+git' or '1.3.2'
summary: Single-line elevator pitch for your amazing snap # 79 char long summary
description: |
  This is my-snap's description. You have a paragraph or two to tell the
  most important story about your snap. Keep it under 100 words though,
  we live in tweetspace and your description wants to look good in the snap
  store.
grade: devel # must be 'stable' to release into candidate/stable channels
confinement: devmode # use 'strict' once you have the right plugs and slots

parts:
  my-part:
    # See 'snapcraft plugins'
    plugin: nil
Most of these values for our pingus snap should be obvious. The interesting markup here is in parts, which is where we'll describe how to build our snap. We'll start by taking advantage of the nil plugin to simply unpack the pingus deb from the archive; the debs to pull in go in a list called stage-packages. We'll also define another section, apps, to tell snapcraft what binaries we want to be able to execute. In our case, this will just be the pingus command. Here's what my first draft looks like:
snapcraft.yaml
name: pingus
version: '0.1'
summary: Free Lemmings(TM) clone
description: |
  Pingus is a free clone of the popular Lemmings game.

  Your goal is to guide a horde of penguins through a world full of obstacles
  and penguin traps to safety. Although penguins (unlike lemmings) are rather
  smart, they sometimes lack the necessary overview and now rely on you to
  save them.
grade: devel
confinement: devmode

parts:
  archives:
    plugin: nil
    stage-packages:
      - pingus

apps:
  pingus:
    command: usr/games/pingus
Nice, right? Building and installing our snap is easy:
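A rough sketch of that loop (your snap filename may differ by version or architecture):

$ snapcraft
$ sudo snap install --devmode pingus_0.1_amd64.snap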
We used devmode here because our app will be running unconfined (a topic for another blog post). Now, for the moment of truth! The snap tools automatically put our new app in PATH, so we can just run pingus:
$ pingus
/snap/pingus/x2/usr/games/pingus: 2: exec: /usr/lib/games/pingus/pingus: not found
¡Ay, caramba! We've run into a fairly common issue when snapping legacy software: hardcoded paths. Fortunately, the corresponding pingus executable is a very simple wrapper script. It tries to execute a command living under /usr/lib/games/pingus, which is not in our snap's PATH. The easiest way to fix this is to fix that wrapper. Since we don't want to spend time modifying the upstream to use relative paths, we can create our own version of the pingus wrapper locally and copy it into our snap. The only change in this new wrapper is prepending the snap's install path $SNAP to the absolute paths:
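Assuming the upstream wrapper is just the two-line exec implied by the error above, our local pingus.wrapper ends up looking roughly like this:

#!/bin/sh
exec "$SNAP/usr/lib/games/pingus/pingus" "$@"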
Now we can update our yaml file with a new part called env which will use the dump plugin to copy our wrapper file into the snap. We’ll also update our command to call the wrapper:
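The additions look roughly like this, mirroring the env part and apps entry from the confined snapcraft.yaml shown earlier on this page:

# ...
parts:
  archives:
    plugin: nil
    stage-packages:
      - pingus
  env:
    plugin: dump
    organize:
      pingus.wrapper: usr/bin/pingus

apps:
  pingus:
    command: pingus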
When you run snapcraft this time, the env part will be built. After performing another install, you can run pingus, and you should be greeted with one of the best Lemmings clones available! Because we’re running unconfined in devmode, this all just works without any issues. I intend to write another blog post in the near future with the details on confining pingus, so look out for that soon. I may also go into detail on building more complex cases, such as building snaps from source and building custom plugins, or reviewing a case study such as the libertine snap.
For much, much more on snaps, be sure to visit snapcraft.io. If you’re looking for a published version of pingus as a snap, you can try sudo snap install --devmode --beta pingus-game, and you can run the game with pingus-game.pingus.
Whether I'm adding dependencies, updating package names, or creating new package spins, I always have issues testing my Debian packages. Something will work locally, only to fail on Jenkins in a clean environment. Fortunately, there's a nifty tool called pbuilder that exists to help out in these situations. pbuilder uses a chroot to set up a clean environment for building packages, and can even be used to build packages for architectures different from your own.
Note: All code samples were originally written from a machine running Ubuntu 16.10 64-bit. Your mileage may vary.
Clean builds for current distro
Given a typical debian-packaged project with a debian directory (control, rules, .install), you can use debuild to build a package from your local environment:
$ cd my-project
$ debuild
...
$ ls ../*.deb
my-project.deb
This works pretty well for sanity checks, but sometimes knowing you're sane just isn't quite enough. My development environment is filled with libraries and files installed in all kinds of weird ways and in all kinds of strange places, so there's a good chance packages built successfully on my machine may not work on everyone's machine. To solve this, I can install pbuilder and set up my first chroot:
$ # install pbuilder and its dependencies
$ sudo apt-get install pbuilder debootstrap devscripts
$ # create a chroot for your current distro with build-essential pre-installed
$ sudo pbuilder create --debootstrapopts --variant=buildd
Since I use debuild pretty frequently, I also rely on pdebuild which performs debuild inside of the clean chroot environment, temporarily installing the needed dependencies listed in the control file.
$ cd my-project
$ pdebuild
$ ls /var/cache/pbuilder/result/*.deb
my-project.deb
Alternatively, I could create the .dsc file and then use pbuilder to create the package from there:
$ # generate a dsc file however you like
$ cd my-project
$ bzr-builddeb -- -us -uc
$ cd ..
$ # use pbuilder to create package
$ sudo pbuilder build my-project.dsc
$ ls /var/cache/pbuilder/result/*.deb
my-project.deb
Clean cross builds
Let's say you need to build for an older Ubuntu release on a different architecture; for this example, vivid on armhf. We can use pbuilder-dist to verify and build our packages for other distros and architectures:
$ # create the chroot, once again with build-essential pre-installed
$ pbuilder-dist vivid armhf create --debootstrapopts --variant=buildd
$ # the above command could take a while, but once it's finished
$ # we can attempt to build our package using a .dsc file
$ pbuilder-dist vivid armhf build my-project.dsc
$ ls ~/pbuilder/vivid-armhf_result/*.deb
my-project.deb
Custom, persistent chroot changes
In some cases, you may need to enable other archives or install custom software in your chroot. In the case of our vivid-armhf chroot, let's add the stable-overlay PPA, which updates the aging vivid release with more modern versions of some packages.
$ # login to our vivid-armhf chroot, and save state when we're finished
$ # if --save-after-login is omitted, a throwaway chroot will be used
$ pbuilder-dist vivid armhf login --save-after-login
(chroot)$ # install the package containing add-apt-repository for convenience
(chroot)$ apt install software-properties-common
(chroot)$ add-apt-repository ppa:ci-train-ppa-service/stable-phone-overlay
(chroot)$ exit
$ # update packages in the chroot
$ pbuilder-dist vivid armhf update
pbuilder and chroots are powerful tools in the world of packaging and beyond. There are scripting utilities, as well as pre- and post-build hooks which can customize your builds. There are ways to speed up clean builds using local caches or other “cheats”. You could use the throwaway terminal abilities to create and destroy tiny worlds as you please. All of this is very similar to the utility which comes from using docker and lxc, though the underlying “container” is quite a bit different. Using pbuilder seems to have a much lower threshold for setup, so I prefer it over docker for clean build environments, but I believe docker/lxc to be the better tool for managing the creation of consistent virtual environments.
I currently write a lot of python and C++. Although I religiously unit test my C++ code, I’m a bit ashamed to say that I haven’t had much experience with python unit testing until recently. You know how it is - python is one of those interpreted languages, you mostly use it to do quick hacks, it doesn’t need tests. Until you’ve written your entire D-Bus service using python, and every time you make a code change a literal python appears on the screen to crash your computer. So I’ve started writing a bunch of tests and found (as expected) a tangled mess of dependencies and system calls.
In many C-like languages, you can fix most of your dependency problems with The Big Three: mocks, fakes, and stubs. A fake is an actual implementation of an interface used for non-production environments, a stub is an implementation of an interface returning a pre-conceived result, and a mock is a wrapper around an interface allowing a programmer to accurately map what actions were performed on the object. In C-like languages, you use dependency injection to give your classes fakes, mocks, or stubs instead of real objects during testing.
The good news is that we can also use dependency injection in python! However, I found that relying solely on dependency injection would pile on more dependencies than I wanted and wasn't going to cover all of my system calls. But python is a dynamic language. In python, you can literally change the definition of a class inside of another class. This operation is called patch, and you can use it extensively in testing to do some pretty cool stuff.
Code Under Test
Let's define some code to test. For all of these examples, I'll be using python3.5.2 with the unittest and unittest.mock libs on Ubuntu 16.10. You can find the final versions of these code samples on github.
from random import randint


class WorkerStrikeException(Exception):
    pass


class Worker(object):
    """
    A Worker will work a full 40 hour week and then go on strike.
    Each time a Worker works, they work a random amount of time
    between 1 and 40.
    """
    def __init__(self):
        self.hours_worked = 0

    def work(self):
        timesheet = randint(1, 40)
        self.hours_worked += timesheet
        if self.hours_worked > 40:
            raise WorkerStrikeException("This worker is picketing")
        return timesheet


class Boss(object):
    """
    A Boss makes profit using workers. Bosses squeeze 1000 monies out of
    a Worker for each hour worked. Workers on strike are instantly replaced.
    """
    def __init__(self, worker):
        self.worker = worker
        self.profit = 0

    def make_profit(self):
        try:
            self.profit += self.worker.work() * 1000
        except WorkerStrikeException as e:
            print("%s" % e)
            self.worker = Worker()
            self.profit += self.worker.work() * 1000
        finally:
            return self.profit
These are two simple classes (and a custom Exception) that we'll use to demonstrate unit testing in python. The first class, Worker, will work a maximum of 40 hours per week before picketing its corporation. Each time work is called, the Worker will work a random number of hours. The Boss class takes in a Worker object, which it uses as it performs make_profit. The profit is determined by the number of hours worked multiplied by 1000. When the worker starts picketing, the Boss will hire a new Worker to take their place. So it goes.
Mocking the Worker Class
Our goal is to fully test the Boss class. We’ve left ourselves a dependency to inject in the __init__ method, so we could start there. We’ll mock the Worker and pass it into the Boss initializer. We’ll then set up the Worker.work method to always return a known number so we can test the functionality of make_profit.
import unittest.mock
from unittest import TestCase

from corp import work  # your impl file


class BossTest(TestCase):
    def test_profit_adds_up(self):
        worker = unittest.mock.create_autospec(work.Worker)
        worker.work.return_value = 8
        boss = work.Boss(worker)

        self.assertEqual(boss.make_profit(), 8000)
        self.assertEqual(boss.make_profit(), 16000)
        worker.work.return_value = 10
        self.assertEqual(boss.make_profit(), 26000)

        worker.work.assert_has_calls([
            unittest.mock.call(),
            unittest.mock.call(),
            unittest.mock.call()
        ])


if __name__ == '__main__':
    unittest.main()
To run this test, use the command python3 -m testtools.run test, where test is the name of your test file without the .py extension.
One curiosity here is unittest.mock.create_autospec. Python will also let you directly create a Mock, which will absorb all attribute calls regardless of whether they are defined, and MagicMock, which is like Mock except it also mocks magic methods. create_autospec will create a mock with all of the defined attributes of the given class (in our case work.Worker), and raise an Exception when the attribute is not defined on the specced class. This is really handy, and eliminates the possibility of tests “accidentally passing” because they are calling default attributes defined by the generic Mock or MagicMock initializers.
We set the return value of the work function with return_value, and we can change it on a whim if we so desire. We then use assertEqual to verify the numbers are crunching as expected. One further thing I've shown here is assert_has_calls, a mock assertion verifying that work was called three times on our mock.
You may also note that we subclassed TestCase, and that the if __name__ == '__main__': block at the bottom of the file lets us run this class as part of our unit testing framework.
Patching the Worker Class
Although our first test demonstrates how to make_profit with a happy worker, we also need to verify how the Boss handles workers on strike. Unfortunately, the Boss class creates its own Worker internally after learning it can't trust the Worker we gave it in the initializer. We want to create consistent tests, so we can't rely on the random numbers generated by randint in Worker.work. This means we can't just depend on dependency injection to make these tests pass!
At this point we have two options: we can patch the Worker class or we can patch the randint function. Why not both! As luck would have it, there are a few ways to use patch, and we can explore a couple of these ways in our two example tests.
We’ll patch the randint function using a method decorator. Our intent is to make randint return a static number every time, and then verify that profits keep booming even as we push workers past their limit.
When calling patch, you must describe the namespace relative to the module you’re importing. In our case, we’re using randint in the corp.work module, so we use corp.work.randint. We define the return_value of randint to simply be 20. A fine number of hours per day to work an employee, according to the Boss. patch will inject a parameter into the test representing an automatically created mock that will be used in the patch, and we use that to assert that our calls were all made the way we expected.
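Here's a minimal sketch of that decorator-based test; the method name and exact assertions are mine (the numbers simply follow the Worker/Boss rules above), and it lives alongside test_profit_adds_up in the BossTest class:

@unittest.mock.patch('corp.work.randint')
def test_profit_with_patched_randint(self, mock_randint):
    # every randint call inside corp.work now returns a fixed 20 hours
    mock_randint.return_value = 20
    boss = work.Boss(work.Worker())

    self.assertEqual(boss.make_profit(), 20000)
    self.assertEqual(boss.make_profit(), 40000)
    # the third call pushes the first Worker past 40 hours; the Boss catches
    # the strike, hires a replacement, and the profits keep booming
    self.assertEqual(boss.make_profit(), 60000)

    # randint was called once per work() call, always with the same bounds
    mock_randint.assert_has_calls([unittest.mock.call(1, 40)] * 4)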
Since we know the inner workings of the Worker class, we know that this test exercised our code by surpassing a 40-hour work week for our poor Worker and causing the WorkerStrikeException to be raised. In doing so, we’re depending on the Worker/Boss implementation to stay in-sync, which is a dangerous assumption. Let’s explore patching the Worker class instead.
To spice things up, we’ll use the ContextManager syntax when we patch the Worker class. We’ll create one mock Worker outside of the context to use for dependency injection, and we’ll use this mock to raise the WorkerStrikeException as a side effect of work being called too many times. Then we’ll patch the Worker class for newly created instances to return a known timesheet.
def test_profit_adds_up_despite_strikes(self):
    worker = unittest.mock.create_autospec(work.Worker)
    worker.work.return_value = 12
    boss = work.Boss(worker)

    with unittest.mock.patch('corp.work.Worker') as MockWorker:
        scrub = MockWorker.return_value
        scrub.work.return_value = 4

        self.assertEqual(boss.make_profit(), 12000)
        self.assertEqual(boss.make_profit(), 24000)
        worker.work.side_effect = work.WorkerStrikeException('Faking a strike!')
        self.assertEqual(boss.make_profit(), 28000)
        self.assertEqual(boss.make_profit(), 32000)

        worker.work.assert_has_calls([
            unittest.mock.call(),
            unittest.mock.call(),
            unittest.mock.call()
        ])
        scrub.work.assert_has_calls([
            unittest.mock.call(),
            unittest.mock.call()
        ])
After the first Worker throws a WorkerStrikeException, the second Worker (scrub) comes in to replace them. In patching the Worker, we are able to more accurately describe the behavior of Boss regardless of the implementation details behind Worker.
A Non-Political Conclusion
I'm not saying this is the best way to go about unit testing in python, but it is an option that should help you get started unit testing legacy code. There are certainly those who see this level of micromanaging mocks and objects as tedious, but there is benefit to defining the way a class acts under exact circumstances. This was a contrived example, and your code may be a little bit harder to wrap with tests.
The release of Ubuntu 16.10 Yakkety Yak in the coming months will bring about the public release of Unity 8 as a pre-installed desktop session (though not as the default session). It's been a long time coming, and there are a lot of new features that will break older applications. Canonical has unveiled snappy as the preferred packaging system for Unity 8, but what about all those old deb packages?
Disclaimer: I work for Canonical on one of the teams making all of this fancy stuff work.
A (Very) Brief Explanation
The toolchain we’ll be relying on is called libertine, and it’s essentially a wrapper around unprivileged LXC and chroot-based containers. We prefer to use LXC containers on newer OSes, but we must continue supporting chroot containers on many devices due to kernel limitations.
What You’ll Need
For desktop Unity 8, you’ll need the packages for libertine, libertine-tools, and lxc to get started. This will install a CLI and GUI for maintaining Libertine containers and applications.
If you’re running Wily or newer, you can just run the following in your terminal:
$ sudo apt install libertine
Otherwise, you’ll need to add the stable overlay PPA first:
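The steps likely look something like this, using the same ci-train stable overlay PPA mentioned earlier on this page:

$ sudo add-apt-repository ppa:ci-train-ppa-service/stable-phone-overlay
$ sudo apt update
$ sudo apt install libertine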
At this point, if you’re on desktop you can open up the GUI which will guide you through creating a new container and installing applications. Search the Dash (or Apps scope) for libertine and, given that we haven’t pushed a buggy version recently, you’ll be presented with a Qt application for maintaining containers. I highly recommend using the GUI, because then you are guaranteed not to be following out-of-date console commands.
…But maybe you prefer the terminal. Or maybe you’re secretly SSH’d into the target machine or Ubuntu Touch device and need to use the terminal. If so…
The CLI
The CLI we’ll be using is libertine-container-manager. It has a manpage, a --help option, and autocomplete to help you out in a jam.
The first thing you’ll want to do is create a container. There are a lot of options, but to create an optimal container for your current machine you only need to specify the id and name parameters:
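A sketch of that call, using the desktopapps id referenced below (flag spellings from memory; the manpage is authoritative):

$ libertine-container-manager create --id desktopapps --name "Desktop Apps"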
A couple of things to note here: Your id must be unique and conform to the simple click name regex - this is what will identify your container on a system level. The name should be human-readable so you can easily identify what might be inside your container. If you don’t specify a name, your id will be used. The CLI will likely ask you for a password to use in the container in case you ever need it. You can leave this blank if you’re not concerned with that kind of thing.
At this point, a bunch of things should be happening in your terminal. This will pull a container image for your current distro and install all the requirements to get started maintaining and running X apps. This could take anywhere from a few minutes to the next hour depending on your network and disk speeds. Once you’re done, you can use the list subcommand to list all installed containers (note you probably just have one at this point). If you ever want to delete your container, you can run libertine-container-manager destroy -i desktopapps.
Once that’s finished, we can start installing apps. To find apps available, you can use the search-cache subcommand:
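Something along these lines; I may be misremembering the exact flag name for the search string, so check --help if it complains:

$ libertine-container-manager search-cache --id desktopapps --search-string office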
This will return a few strings from the apt-cache of the container with id “desktopapps” that match “office”. Now, if you want to install “libreoffice”:
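Roughly:

$ libertine-container-manager install-package --id desktopapps --package libreoffice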
This will install the full libreoffice suite. Nice! Similarly, you can use the remove-package subcommand to remove applications. Don’t remember what apps you’ve installed? Use the list-apps command:
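For example:

$ libertine-container-manager list-apps --id desktopapps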
Maybe you’re an avid Steam for Linux gamer and want to try to get some games working. Since Steam still only comes in a 32-bit binary, you’ll need to enable the multiarch repos, and then you can just install Steam like any other app:
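A sketch of that flow; I believe there is a configure subcommand with a multiarch option, but double-check the manpage before copying this verbatim:

$ libertine-container-manager configure --id desktopapps --multiarch enable
$ libertine-container-manager install-package --id desktopapps --package steam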
Steam will ask you to agree to their user agreement from the command line, which you should be able to do easily. If you need to use the readline frontend for dpkg, you can append --readline to the install-package command to enable it.
There are many other commands to explore to maintain your container, but for now I’ll let you check the manpage or open the GUI to explore further.
Running Apps
Now that you’ve installed some apps, you probably want to run them. You can install the Libertine Scope, which will allow you to peruse your installed apps in a Unity 8 session. You can either install it from the App Store on a device (search for “Desktop Apps Scope”) or through apt on desktop with:
$ sudo apt install libertine-scope
In a Unity 8 session, you can now find the scope and click on apps to run them. Note that there are many apps which still don’t work, such as those requiring a terminal or sudo; consider these a work in progress.
The Future
I’ve been toiling away the past few weeks getting a scope ready which can be used explicitly to install/remove X apps in Unity 8, like the current Ubuntu Software Center (or app store on Touch devices). This scope should be available anywhere the libertine scope is available, meaning that it will alleviate a lot of the pain associated with installing/removing apps for a large chunk of users. Using the Libertine GUI or Libertine CLI will still allow for much more customization, but those tools are largely designed with power users in mind.
Are you able to get libertine working on your system? Can you launch X applications to your heart’s content? Let me know in the comments!