Using virsh to add a volume (disk) to an existing vm

I have a second hard disk mounted under /thepool, and I want to make “virtual disks” in there and be able to mount them on any of my virsh-defined virtual machines.

The root filesystem of the VM resides on a different storage pool (uvtool) on a fast SSD, but for bulk storage I don’t want to fill up the SSD with crap.

# Create the storage pool, under "/thepool" where the big disk is mounted
virsh pool-define-as disk-pool dir - - - - "/thepool/libvirt-pool/"
virsh pool-build disk-pool
virsh pool-autostart disk-pool
virsh pool-start disk-pool
# Check it's there
virsh pool-list
# Create a volume inside the pool, qcow2 format
virsh vol-create-as disk-pool juju-zfs-pool.qcow2 64G --format qcow2
# Attach it to the VM
virsh attach-disk juju /thepool/libvirt-pool/juju-zfs-pool.qcow2 vdc --persistent --subdriver=qcow2
# Now inside the vm, /dev/vdc exists and can be formatted/partitioned and mounted as normal

Converting only one stream in an mkv file

These mkv files have h.265 hevc video which my media player can’t read, so I’d like to convert only the video stream to h.264, while leaving all other streams (2 audio tracks in aac, 2 subtitle tracks) intact.

ffmpeg -i some-x265-video.mkv -map 0 -c:v libx264 -c:a copy -c:s copy /tmp/x264-version.mkv

Cross-timezone date calculations using the “date” command

Working remotely for a timezone-distributed company poses an interesting challenge: that of having to figure out dates and times for people in different timezones. This involves not only the relatively trivial “what time is it now in A_FARAWAY_PLACE”, but “what time, in FARAWAY_PLACE_X, will it be in FARAWAY_PLACE_Z” and other fun things.

There are a handful of websites that have handy tools to do these conversions for you; but a problem I’ve found is that the web is going to the crapper, and these sites often have confusing UIs concocted by some javascript-crazed, CSS-infected webmonkey; and often they are completely swamped and rendered unusable by a rising tide of ads and other aggressive content (oh and some won’t let you do anything until you agree to them storing information in cookies in your browser – which they then bafflingly don’t use to store the *PREFERENCE* you have selected, so like a forgetful vampire, they ask you every single time if you want to accept their silly cookies).

I’ve known how to use the “date” command to show the date on a different place/timezone, which is already a huge timesaver:
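For example, to see what time it is right now in London:

TZ="Europe/London" date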

but today I was trying to answer “what time, in America/Chicago, will it be when it’s 1 PM, on Tuesday, in Europe/London?”. This is interesting because it’s a conversion between two timezones, neither of which is the one I’m in, of a date/time in the future. So I was checking date’s man page for “how to convert a specific point in time”, when I realized date can do this for you! Right in the man page there’s this example:

Show the local time for 9AM next Friday on the west coast of the US
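and the command that goes with it, straight from the man page:

date --date='TZ="America/Los_Angeles" 09:00 next Fri'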

so then I combined that with the earlier one to come up with:
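Something like this, which answers the question above:

TZ="America/Chicago" date --date='TZ="Europe/London" 1:00 PM next Tuesday'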

This combines

  • TZ argument to calculate dates for a specific timezone, not the current one
  • --date parameter to “display time described by STRING, not ‘now’”
  • Descriptive time specifications (1:00 PM next Tuesday; this is a pseudo-human-readable format which is not entirely intuitive; info date has the specifics)
  • TZ support *inside* the descriptive specification

And a list of known timezones can be obtained with timedatectl list-timezones.

SAML development tools

If you work on a system that needs to authenticate against an external identity provider (IdP), SAML is almost certainly a fact of life. Working on an actual Identity Provider, sometimes the concern is flipped and you need to ensure Service Providers (SPs) can authenticate against your IdP.

I inherited the somewhat clunky django-saml2-idp along with other developers in my team, and we’ve been maintaining it to add new features. If we were doing this today, we’d probably integrate the very complete OneLogin SAML library instead.

Developing with and for our somewhat homegrown SAML library is made easier by a set of developer tools. For example, OneLogin provides a toolbox to slice and dice SAML assertions; you can verify your assertions, extract attributes, see some examples, play with zipping and encoding, all in one place.

Once you have your IdP mostly working, it’s great to have a test SP to connect to it. For this, I’ve used the RSA SAML test Service Provider. You give it some details about your IdP, and it will give you a URL that forwards you to the IdP for authentication, then back to the SP, which verifies authentication worked as expected and even shows you the attribute and auth payload received from the IdP.

Once you get things mostly working but need to fine-tune or tweak something (I can never keep issuer, ACS_URL and audience straight), the Firefox SAML Tracer extension is absolutely essential. It shows you all requests and responses, which ones contain SAML payloads, and lets you see the actual, decoded and formatted XML payload, which makes it a breeze to troubleshoot.

There is an equivalent SAML tracer extension for Chrome but 1) Chrome is crap and 2) the Chrome SAML extension is crap. Use Firefox instead.

Remote display of a KVM virtual machine

In this case I’m hosting the VM on a fast server and trying to access the display on another system (a laptop).

One way to do it is by simply SSHing with X forwarding and running KVM like so:
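Something along these lines (memory size and disk path are whatever your VM uses):

ssh -X the-server kvm -m 2048 -hda /var/lib/libvirt/images/the-vm.img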

This by default uses a terminal window, but it’s quite slow.

Another option is to start the KVM machine in nographic mode and enable a VNC server:
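Again a sketch; :1 means VNC display 1, which listens on TCP port 5901:

kvm -m 2048 -hda /var/lib/libvirt/images/the-vm.img -nographic -vnc :1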

then on the desktop system use a vnc client to connect to the magic port:
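Any VNC client works; the hostname here is just an example:

# display :1 on the server corresponds to TCP port 5901
vncviewer the-server:1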

KVM bridged to the LAN with DHCP

The goal here is to instantiate VMs with a br0 interface grabbing an IP from the LAN DHCP, so in turn the VM can instantiate LXD containers whose IP is also exposed to the LAN. That way everything is visible on the same network segment and this makes some experimentation easier.

Host configuration

Some info taken from this URL.

The metal host is running Ubuntu 18.04, which uses netplan. Here’s the netplan.yaml file:
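It looks something like this (the filename under /etc/netplan is whatever your install came with):

# /etc/netplan/01-br0.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp7s0:
      dhcp4: false
  bridges:
    br0:
      interfaces: [enp7s0]
      dhcp4: true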

With this, on boot the system grabs an address from the network’s DHCP service (from my home router) and puts it on the br0 interface (which bridges enp7s0, a Gigabit Ethernet port).

The system also has avahi-daemon installed so I can ssh the-server.local easily.

VM configuration

Next, the VM which I created using uvt-kvm:
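Something like this (release, sizes and the --run-script-once bit are just how I would typically do it):

# sync a cloud image once, then create the VM attached to br0
uvt-simplestreams-libvirt sync release=xenial arch=amd64
uvt-kvm create the-vm release=xenial --memory 2048 --cpu 2 --disk 20 --bridge br0 --run-script-once setup_network.sh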

The setup_network.sh script takes care of setting up the network 🙂 This can more cleanly be done with cloud-init but I’m lazy and wanted something fast.

The script deletes the cloudconfig-created .cfg file, tells cloud-init to NOT reconfigure the network, and drops the config file I actually need in place.
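A sketch of what such a script looks like (the cloud-init filenames vary between releases, so treat them as placeholders):

#!/bin/sh
set -e
# Remove the network config cloud-init wrote on first boot
rm -f /etc/network/interfaces.d/50-cloud-init.cfg
# Tell cloud-init to leave the network alone from now on
echo 'network: {config: disabled}' > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
# ifupdown bridging needs bridge-utils
apt-get -y install bridge-utils
# Drop in the bridge config shown below
cp /tmp/1-bridge.cfg /etc/network/interfaces.d/1-bridge.cfg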


The network config in /etc/network/interfaces.d/1-bridge.cfg should look like:
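Something like this (ens3 is just what the interface happens to be called inside my VM):

auto ens3
iface ens3 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports ens3
    bridge_stp off
    bridge_fd 0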

LXD configuration

Finally, install lxd. When asked to configure the lxd bridge, respond “no”, and on the next question you’ll be asked whether to supply an existing bridge. Respond “yes” and specify “br0”.

Now, when an lxd container is instantiated, it’ll by default appear on the same network (the home network!) as the VM and the main host, getting its DHCP from the home router.

When things break

Suddenly the bridge interface stopped working. I checked this to help diagnose it. But that wasn’t it. Turns out, I’d installed Docker on the main host and Docker messes with the firewall configuration by setting iptables -P FORWARD DROP. I just set it back to ACCEPT to get it working.
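In other words:

# check the current policy, then put it back
sudo iptables -S FORWARD | head -1
sudo iptables -P FORWARD ACCEPT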

Bisecting Python unit test errors to find test interdependencies


Many of our test runs use parallelization to run faster. Sometimes we see test
failures which we can’t reproduce locally, because locally we usually run
sequentially; and even then, the test ordering seems to be somewhat
unpredictable so it’s hard to reproduce the exact test ordering seen in our
test runner.

Most of the time these failures are due to unidentified test interdependencies:
either test A causes test B to pass (where running test B in isolation would
fail), or test A causes B to fail (where running B in isolation would pass). And we have seen more complex scenarios where C passes, A-B-C passes, but A-C fails (because A sets C up for failure, while B would set C up for success). We added some diagnostic output to our test runner so it would show exactly the list of tests each process runs. This way we can copy the list and run it locally, which usually reproduces the failure.

But we needed a tool to then determine exactly which of the tests preceding the failing one was setting up the failure conditions. So I wrote this simple bisecter script, which expects a list of test names (which must contain the failing test “A”) and, of course, the name of the failing test “A” itself. It looks for “A” in the list and uses bisection to determine which of the tests preceding “A” is causing the failure.
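A minimal sketch of the idea (it assumes one test name per line in a file, in the same order the runner used, and that the suite can be invoked as shown in run_tests; adjust that for your own runner):

#!/usr/bin/env python3
"""Bisect the tests that run before a failing test to find the culprit."""
import subprocess
import sys


def run_tests(tests):
    """Run the given tests in order; return True if they all pass."""
    result = subprocess.run([sys.executable, "-m", "pytest"] + list(tests))
    return result.returncode == 0


def bisect(preceding, failing):
    """Narrow down which test in `preceding` sets `failing` up to fail."""
    while len(preceding) > 1:
        half = len(preceding) // 2
        first, second = preceding[:half], preceding[half:]
        # If the failure reproduces with just the first half, keep looking
        # there; otherwise assume the culprit is in the second half.
        if not run_tests(first + [failing]):
            preceding = first
        else:
            preceding = second
    return preceding[0]


if __name__ == "__main__":
    test_list_file, failing_test = sys.argv[1], sys.argv[2]
    with open(test_list_file) as f:
        tests = [line.strip() for line in f if line.strip()]
    preceding = tests[:tests.index(failing_test)]
    print(bisect(preceding, failing_test), "appears to set up the failure")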

As an example, I used it to find a test failure in Ubuntu SSO:


Mocking iterators

A colleague wanted to mock a Journal object which both has callable methods and works as an iterator itself. So it works like this:
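Assuming something like the systemd journal reader, usage is along these lines:

from systemd import journal

j = journal.Reader()
j.add_match(_SYSTEMD_UNIT="myservice.service")  # callable methods...
for entry in j:                                 # ...and the object is its own iterator
    print(entry.get("MESSAGE"))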

We mocked it like this, to be able to pass an actual list of expected values the function will iterate over:
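Roughly like this (the entries themselves are placeholders):

from unittest import mock

expected_entries = [{"MESSAGE": "one"}, {"MESSAGE": "two"}]

mock_journal = mock.MagicMock()
# Iterating over the mock returns the mock itself...
mock_journal.__iter__.return_value = mock_journal
# ...and each next() call yields one expected value; when the list runs out,
# mock raises StopIteration for us, ending the loop.
mock_journal.__next__.side_effect = expected_entries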

So mock_journal is both a mock proper, where methods can be called (and then asserted on), and an iterable, which when called repeatedly will yield elements of the __next__ side_effect.

Forcing Python Requests to connect to a specific IP address

Recently I ran into a script which tried to verify HTTPS connection and response to a specific IP address. The “traditional” way to do this is (assuming I want http://example.com/some/path on IP 1.2.3.4):
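That is, something like:

import requests

# Point the URL at the IP, but ask for the right virtual host via the Host header
response = requests.get("http://1.2.3.4/some/path", headers={"Host": "example.com"})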

This is useful if I want to specifically test how 1.2.3.4 is responding; for instance, if example.com is DNS round-robined to several IP addresses and I want to hit one of them specifically.

This also works for https requests if using Python <2.7.9 because older versions don’t do SNI and thus don’t pass the requested hostname as part of the SSL handshake. However, Python >=2.7.9 and >=3.4.x conveniently added SNI support, breaking this hackish way of connecting to the IP, because the IP address embedded in the URL is passed as part of the SSL handshake, causing errors (mainly, the server returns a 400 Bad Request because the SNI host 1.2.3.4 doesn’t match the one in the HTTP headers example.com).

The “easiest” way to achieve this is to force the IP address at the lowest possible level, namely when we do socket.create_connection. The rest of the “stack” is given the actual hostname. So the sequence is:

  1. Open a socket to 1.2.3.4
  2. SSL wrap this socket using the hostname.
  3. Do the rest of the HTTPS traffic, headers and all over this socket.

Unfortunately Requests hides the socket.create_connection call in the deep recesses of urllib3, so the specified chain of classes is needed to propagate the given dest_ip value all the way down the stack.

After wrestling with this for a bit, I wrote a TransportAdapter and accompanying stack of subclasses to be able to pass a specific IP for connection.

Use it like this:
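Roughly like this (the module and class names are from memory, so check the README for the real thing):

import requests
from forcediphttpsadapter.adapters import ForcedIPHTTPSAdapter

session = requests.Session()
# Requests to example.com on this session will connect to 1.2.3.4, while SNI
# and certificate validation still use the real hostname.
session.mount("https://example.com", ForcedIPHTTPSAdapter(dest_ip="1.2.3.4"))
response = session.get("https://example.com/some/path")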

There are a good number of subtleties in how it works, because it messes with the connection stack at all levels, so I suggest you read the README to see how to use it in detail and whether it applies to your needs. I even included a complete example script that uses this adapter.

Resources that helped:

http://stackoverflow.com/questions/22609385/python-requests-library-define-specific-dns

https://github.com/RhubarbSin/example-requests-transport-adapter/blob/master/adapter.py

Take me to your leader – Using Juju leadership for cron tasks in a multiunit service

I’m working on adding some periodic maintenance tasks to a service deployed using Juju. It’s a standard 3-tier web application with a number of Django application server units for load balancing and distribution.

Clearly the maintenance tasks’ most natural place to run is in one of these units, since they have all of the application’s software installed and doing the maintenance is as simple as running a “management command” with the proper environment set up.

A nice property we have by using Juju is that these application server units are just clones of each other; this allows scaling up/down very easily because the units are treated the same. However, the periodic maintenance stuff introduces an interesting problem, because we want only one of the units to run the maintenance tasks (no need for them to run several times). The maintenance scripts can conceivably be run in all units, even simultaneously (they do proper locking to avoid stepping on each other). And this would perhaps be OK if we only had 2 service units, but what if, as is the case, we have many more? There is still a single database, and hitting it 5-10 times with what is essentially a redundant process sounded like an unacceptable tradeoff for the simplicity of the “just run them on each unit” approach.

We could also implement some sort of duplicate collapsing, perhaps by using something like rabbitmq and celery/celery beat to schedule periodic tasks. I refused to consider this since it seemed like swatting flies with a cannon, given that the first solution coming to mind is a one-line cron job. Why reinvent the wheel?

The feature that ended up solving the problem, thanks to the fine folks in Freenode’s #juju channel, is leadership, a feature which debuted in recent versions of Juju. Essentially, each service has one unit designated as the “leader” and it can be targeted with specific commands, queried by other units (‘ask this to my service’s leader’) and more importantly, unambiguously identified: a unit can determine whether it is the leader, and Juju events are fired when leadership changes, so units can act accordingly. Note that leadership is fluid and can change, so the charm needs to account for these changes. For example, if the existing leader is destroyed or has a charm hook error, it will be “deposed” and a new leader is elected from among the surviving units. Luckily all the details of this are handled by Juju itself, and charms/units need only hook on the leadership events and act accordingly.

So it’s then as easy as having the cron jobs run only on the leader unit, and not on the followers.

The simplistic way of using leadership to ensure only the leader unit performs an action was something like this in the crontab:
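Something like this (schedule, unit name and command are made up for the example):

# m h dom mon dow user command
30 4 * * * root [ "$(juju-run my-service/0 is-leader)" = "True" ] && /srv/my-service/scripts/maintenance.sh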

This uses juju-run with the unit’s name (which is hardcoded in the crontab – this is a detail of how juju run is used which I don’t love, but it works) to run the is-leader command in the unit. This will print out “True” if the executing unit is the leader, and False otherwise. So this will condition execution on the current unit being the leader.

Discussing this with my knowledgeable colleagues, a problem was pointed out: juju-run is blocking and could potentially stall if other Juju tasks are being run. This is possibly not a big deal but also not ideal, because we know leadership information changes infrequently and we also have specific events that are fired when it does change.

So instead, they suggested updating the crontab file when leadership changes, and hardcoding leadership status in the file. This way units can decide whether to actually run the command based on locally-available information which removes the lock on Juju.

The solution looks like this, when implemented using Ansible integration in the charm. I just added two tasks: One registers a variable holding is-leader output when either the config or leadership changes:
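A sketch of that task (the tags are the actual hook names; the rest is illustrative):

- name: Check if this unit is the leader
  tags:
    - config-changed
    - leader-elected
    - leader-settings-changed
  command: is-leader
  register: is_leader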

The second one fires on the same events and just uses the registered variable to write the crontabs appropriately. Note that Ansible’s “cron” plugin takes care of ensuring “crupdate” behavior for these crontab entries. Just be mindful if you change the “name” because Ansible uses that as the key to decide whether to update or create anew:
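Roughly like this, shown as two cron entries (one managing the IS_LEADER variable, one the job itself; names, schedule and command are made up):

- name: Record leadership status in the crontab
  tags:
    - config-changed
    - leader-elected
    - leader-settings-changed
  cron:
    name: IS_LEADER
    env: yes
    value: "{{ is_leader.stdout }}"
    user: root
    cron_file: roadmr-maintenance

- name: Run maintenance on the leader only
  tags:
    - config-changed
    - leader-elected
    - leader-settings-changed
  cron:
    name: "roadmr maintenance"
    cron_file: roadmr-maintenance
    user: root
    minute: "30"
    hour: "4"
    job: '[ "$IS_LEADER" = "True" ] && /srv/my-service/scripts/maintenance.sh'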

A created crontab file (in /etc/cron.d/roadmr-maintenance) looks like this:
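Along these lines (command and schedule are placeholders):

IS_LEADER=True
# m h dom mon dow user command
30 4 * * * root [ "$IS_LEADER" = "True" ] && /srv/my-service/scripts/maintenance.sh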

A few notes about this. The IS_LEADER variable looks redundant. We could have put it directly in the comparison, or simply written the crontab file only on the leader unit and removed it from the other ones. We specifically wanted the crontab to exist in all units and just be conditional on leadership. IS_LEADER makes it super obvious, right there in the crontab, whether the command will run. While redundant, we felt it added clarity.

Save for the actual value of IS_LEADER, the crontab is present and identical in all units. This helps people who log directly into the unit to understand what may be going on in case of trouble. Traditionally people log into the first unit; but what if that happens to not be the leader? If we write the crontab only on the leader and remove it from the other units, it will not be obvious that there’s a task running somewhere.

Charm Ansible integration magically runs tasks by tags identifying the hook events they should fire on. So by just adding the three tags, these tasks will run, in the specified order, on the config-changed, leader-elected and leader-settings-changed events.

The two leader hooks are needed because leader-elected is only fired on the actual leader unit; all the others get leader-settings-changed instead.

Last but not least, don’t forget to also declare the new hooks in your hooks.py file, in the hooks declaration, which now looks like this (see the last two lines added):
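With the charm-helpers Ansible support it ends up looking something like this (the playbook path and the earlier hooks depend on your charm):

import charmhelpers.contrib.ansible

hooks = charmhelpers.contrib.ansible.AnsibleHooks(
    playbook_path='playbook.yaml',
    default_hooks=[
        'install',
        'start',
        'stop',
        'config-changed',
        'upgrade-charm',
        'leader-elected',
        'leader-settings-changed',
    ])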

Finally, I’d be remiss not to mention an existing bug in leadership event firing. Because of that, until leadership event functionality is fixed and 100% reliable, I wouldn’t use this technique for tasks which absolutely, positively need to be run without fail or the world will end. Here, I’m just using them for maintenance and it’s not a big deal if runs are missed for a few days. That said, if you need a 100% guarantee that your tasks will run, you’ll definitely want to implement something more robust and failproof than a simple crontab.

Proxying Python file-like objects for fun and profit

As part of a project I’m working on, I wanted to be able to do some “side processing” while writing to a file-like object. The processing is basically checksumming on-the-fly. I’m essentially doing something like:
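Something in this spirit (the names are illustrative; source is read from the network, and destination also ends up writing to a network resource):

chunk = source.read(65536)
while chunk:
    destination.write(chunk)
    chunk = source.read(65536)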

what I’d like is to be able to also get the data read from source and use hashlib’s update mechanism to get a checksum of the object. The easiest way to do it would be using temporary storage (an actual file or a StringIO), but I’d prefer to avoid that since the files can be quite large. The second way to do it is to read the source twice. But since that may come from a network, it makes no sense to read it twice just to get the checksum. A third way would be to have destination be a file-like derivative that updates an internal hash with each read block from source, and then provides a way to retrieve the hash.

Instead of creating my own file-like where I’d mostly be “passing through” all the calls to the underlying destination object (which incidentally also writes to a network resource), I decided to use padme which already should do most of what I need. I just needed to unproxy a couple of methods, add a new method to retrieve the checksum at the end, and presto.

A first implementation looks like this:
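From memory, it was something like this (and, as explained below, the super() calls are precisely what's wrong with it):

import hashlib

import padme


class sha256file(padme.proxy):
    """Proxy a file-like object, checksumming everything written to it."""

    @padme.unproxied
    def __init__(self, *args, **kwargs):
        self.hash = hashlib.sha256()
        super(sha256file, self).__init__(*args, **kwargs)

    @padme.unproxied
    def write(self, data):
        self.hash.update(data)
        super(sha256file, self).write(data)

    @padme.unproxied
    def getsha256(self):
        return self.hash.hexdigest()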

This however doesn’t work for reasons I was unable to fathom on my own:

This is clearly because super(sha256file, self) refers to the *class* and I need the *instance* which is the one with the write method. So Zygmunt helped me get a working version ready:

here’s the explanation of what was wrong:

– first of all the exception tells you that the super-object (which is a relative of base_proxy) has no write method. This is correct. A proxy is not a subclass of the proxied object’s class (some classes cannot be subclassed). The solution is to call the real write method. This can be accomplished with type(self).__proxiee__.write()

– second of all we need to be able to hold state, namely the hash attribute (I’ve renamed it to _hash but it’s irrelevant to the problem at hand). Proxy objects can store state, it’s just not terribly easy to do. The proxied object (here a file) may or may not be able to store state (here it cannot). The solution is to make it possible to access some of the state via standard means. The new (small) stateful_proxy class implements __setattr__ and __delattr__ in the same way __getattribute__ was always implemented. That is, those methods look at the __unproxied__ set to know if access should be routed to the original or to the proxy.
– the last problem is that __unproxied__ is only collected by the proxy_meta meta-class. It’s extremely hard to change that meta-class (because padme.proxy is not the real class that you ever use, it’s all a big fake to make proxy() both a function-like and class-like object.)

The really cool thing about all this is not so much that my code is now working, but that those ideas and features will make it into an upcoming version of Padme 🙂 So down the line the code should become a bit simpler.

Updating lxc image/container caches

One of lxc’s nice time-saving features is that, after initial container creation, it will cache the files it downloaded to do so, and when you create a new container using the same template/version/architecture, it will leverage the existing files and create the container with minimal downloads and really quickly.

A downside of this is that the cache can become stale; this is apparent when you want to install a package in a container and apt-get gives 404 errors indicating that the version of the package the container knows about is no longer available in the archive (most likely superseded by a newer one).

This is easily fixed by always doing apt-get update in the container prior to any package installs/upgrades. However, it’s cumbersome, and if you’re creating dozens of new containers every day, the bandwidth and time spent re-downloading can quickly add up.

To update the “base image” or cache, which resides in /var/cache/lxc for each version, you can do two things.

Most templates also support --flush-cache, so if you’re calling lxc-create directly, just add an extra --flush-cache as template args (after --) and the cache will be flushed before making the container. Something like:
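(Container name, template and release are just examples.)

sudo lxc-create -n my-container -t ubuntu -- --flush-cache -r trusty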

this will obliterate the existing cache and re-download everything before creating the container.

If you want to update an existing cache do something like:
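(This assumes the ubuntu template's cache layout; adjust release and architecture to match yours.)

sudo chroot /var/cache/lxc/trusty/rootfs-amd64 sh -c "apt-get update && apt-get -y dist-upgrade"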

this will update the cache and all subsequently-created containers will know about the latest package versions.


WiFi interfaces on Ubuntu Server

Sometimes you may want to configure a wireless interface on a system with Ubuntu Server. The most common use case (for me, at least) is to run some server tests which require two network interfaces, on a laptop (it’s what I have available to play with) with an ethernet interface and a wireless interface. As long as Ubuntu sees the wireless interface, it’s quite easy to set things up so the wireless comes up at boot time.

You will probably need to set up the server to forward and masquerade the internal network (usually, the ethernet segment is the internal one, while the wireless counts as the “outside” interface). There are plenty of tutorials to do this over the internet, so I won’t extend this post by detailing that.

Of course, the wireless will grab a dynamic IP address, so use caution with that as the address may change (or, assign a static one from your router’s unused range). Anyway. Put this in /etc/network/interfaces:
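Replace the SSID and passphrase with your own; the wpa-* options need the wpasupplicant package installed:

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid my-network-name
    wpa-psk my-secret-passphrase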

Then you can do ifup wlan0 to bring the interface up. It should also come up automagically at boot time.

More fun with avconv

This was used to resync a file whose audio was consistently 1.75 seconds behind the video track. The resulting file also contains the first 2 subtitle tracks from the original file.
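It was something along these lines (whether the offset goes on the first or the second input, and its sign, depends on which way the desync goes):

# Delay the video (first input) by 1.75s, take audio from the undelayed second
# input, keep the first two subtitle tracks, and copy all codecs as-is.
avconv -itsoffset 1.75 -i input.mkv -i input.mkv -map 0:v -map 1:a -map 0:s:0 -map 0:s:1 -c copy resynced.mkv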

Sources were here.

Speeding up local debian builds with sbuild (eatmydata, apt-cacher-ng and config laziness)

As part of the team that maintains several testing tools for Ubuntu, including checkbox, I sometimes find myself needing to build .deb packages from our source tree.

“building stuff is hard…”

A simple way of achieving this is of course to run dpkg-buildpackage or even bzr-buildpackage. Assuming all build-deps are correctly installed in the host system, this will result in a nicely built set of .debs.

This approach has a few caveats, in that it’s different from the build process actually employed to create the packages that ultimately get uploaded to Ubuntu (or even the ones available in Launchpad PPAs).

The two main differences are that Launchpad builds the packages in a “clean” environment, installing build-deps from scratch, whereas dpkg-buildpackage will rely on what’s installed in the system. So if you miss specifying a build-dep, your local build may work because you have it installed, but the PPA build will fail because it will not be present.

The second big difference is that with the local approach, you’re “limited” to building packages for the “host” system. Sure, you can specify a different target release in your debian/changelog file, but some aspect of your build may be tied to your system’s tools, versions and layout, and if for some reason they don’t match the actual target at installation time, things will fail in interesting ways.

Clearly, one way to test what the Launchpad build process will spit out is to build a source package and dput that to be built directly on a PPA. The problem here is that the feedback loop becomes excruciatingly slow; PPAs are a shared resource and build times can go from minutes to many hours.

Based on all this, it makes sense to try to use a local build environment that more closely replicates what PPAs do to build your packages.

Fortunately, the PPA builders use free software, so it’s relatively easy to do local builds in a similar environment, completing quickly due to use of local resources, and only upload to Launchpad once you’re pretty sure your build will succeed.

The software in question is sbuild, and I already wrote a post detailing how to install sbuild and set up a build environment for any Ubuntu release you need.

This setup worked fine for the occasional package build when you know the packaging is mostly correct. For a package that builds quickly, such as checkbox, setting up the build environment with all needed packages and build-deps takes about 10 minutes (depending mostly on download speed for all the packages). Of course, on a more complex package, compilation time may start to be a factor.

Anyway, the 10-minute time can be too slow if you’re trying to fix a tricky problem and need a fast feedback loop. Plus the process produces a lot of transient files and downloads a set of packages many times, so there’s plenty of room for improvement here.

Speeding up local package installation and build

Eatmydata: it’s so fast! (but not too safe)

A large part of the time spent doing the “local” part of the process is writing files to disk. One way to speed this up is to use a ramdisk to store the build. I’m too lazy and have too little RAM to use this approach, so the alternative was setting up eatmydata inside the chroot. Since these are mostly temporary files or throwaway packages, it’s OK to lose the safety of constant syncs in exchange for a huge boost in speed.

The setup for eatmydata inside the chroot is described here. This looks a bit hard to automate, but luckily we don’t have to, as recent versions of mk-sbuild simply support an --eatmydata parameter; if given, this will install eatmydata inside the chroot and change the chroot config file to enable eatmydata.

Adding a PPA

You can add a custom PPA to an image. Once the chroot image is built, enter the “golden master”:
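(Assuming a trusty amd64 chroot created with mk-sbuild.)

sudo schroot -c source:trusty-amd64 -u root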

You can add a deb line (get it from launchpad) to your sources:
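For example (substitute your own PPA and release):

echo "deb http://ppa.launchpad.net/checkbox-dev/ppa/ubuntu trusty main" >> /etc/apt/sources.list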

Then you need to get the GPG key for the PPA and add it manually with the very basic tools provided in the chroot (sorry, no apt-add-repository):
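The key ID is listed on the PPA's Launchpad page under "Technical details":

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <PPA_SIGNING_KEY_ID>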

Then exit the golden image. After this, your builds from this chroot will be able to fetch packages from the PPA.

Again, that’s a bit of work to do for each chroot. Instead, what I did was create a file in /etc/schroot/setup.d to do this automatically. You can of course replace the PPAs you need in the echo lines at the end. Name the file something like 81add-ppas:
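Mine looks roughly like this (the PPA line is an example; remember to make the file executable):

#!/bin/sh
# /etc/schroot/setup.d/81add-ppas
set -e
if [ "$1" = "setup-start" ] || [ "$1" = "setup-recover" ]; then
    # Detect the chroot's release so the same script works for any Ubuntu chroot
    RELEASE=$(. "$CHROOT_PATH/etc/lsb-release" && echo "$DISTRIB_CODENAME")
    # Lazy: allow unauthenticated packages instead of importing the PPA keys
    echo 'APT::Get::AllowUnauthenticated "true";' > "$CHROOT_PATH/etc/apt/apt.conf.d/80unauthenticated"
    echo "deb http://ppa.launchpad.net/checkbox-dev/ppa/ubuntu $RELEASE main" >> "$CHROOT_PATH/etc/apt/sources.list"
fi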

Notice that again, I was very lazy and instead of downloading the gpg keys as shown above (as for some reason trying to run gpg from the setup script didn’t work), I just configured apt to allow unauthenticated packages. Since this sbuild is mainly for testing purposes it’s not a big deal to skip this verification step. Also, there’s some logic to automatically detect the chroot release, so the same config file works equally well for any Ubuntu release.

Apt-cacher-ng

As the name suggests, this nifty utility will cache packages so the next time you need them they’ll be fetched from local storage rather than from the network. A bit of config is needed to have sbuild download packages from here.

Hello, I got these packages cached for you…

First, install apt-cacher-ng on the host system. You can verify it’s listening on port 3142 by any means you like.

Then, to set it up automatically in chroots, add this to the host system’s /etc/schroot/setup.d/80apt-cacher-ng (rather, create that file; it doesn’t exist by default):
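Something like this (3142 is apt-cacher-ng's default port; 127.0.0.1 works because the chroot shares the host's network; make the file executable):

#!/bin/sh
# /etc/schroot/setup.d/80apt-cacher-ng
set -e
if [ "$1" = "setup-start" ] || [ "$1" = "setup-recover" ]; then
    echo 'Acquire::http { Proxy "http://127.0.0.1:3142"; };' > "$CHROOT_PATH/etc/apt/apt.conf.d/80apt-cacher-ng"
fi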

With these two setup.d scripts and the --eatmydata magic, it’s easy to create sbuild environments which will be much faster when building packages.

As a comparison, building msmtp (chosen because this tests mainly the speedup components, not needing any packages from a PPA) takes about 40 seconds with these suggested tweaks:

Whereas on a non-tweaked chroot it takes about 1:38 minutes:

It looks like the tweaked chroot is about 3 times faster, but that’s misleading because I deliberately chose a small, quick-to-compile package. Still, you can at least reduce network and disk access very easily now. Note, also, that my test system has a fast SSD. The speedup on a traditional rotary magnetic hard disk is likely to be much higher.