
new minidlna server broke my network

We added a new device which can expose a connected USB drive via DLNA; internally it uses minidlna, which relies on SSDP for service discovery. Strangely, this rendered my *existing* minidlna server (hosted on a Raspberry Pi) invisible. Researching the problem, it turns out that peer discovery (which never happened before, since there were no other DLNA devices) uses a multicast address in 239.0.0.0/8, which my Pi’s firewall was blocking (it only allows traffic via the local network and a VPN gateway). My theory is that the new minidlna device took over as “primary”, then couldn’t find any other peers, and so the old server stopped being visible. The solution was to allow the specific multicast address used by SSDP.

#!/bin/bash
iptables -F
#Tunnel interface
iptables -A INPUT -i tun+ -j ACCEPT
iptables -A OUTPUT -o tun+ -j ACCEPT
#Localhost and local networks
iptables -A INPUT -s 127.0.0.0/8 -j ACCEPT
iptables -A OUTPUT -d 127.0.0.0/8 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/16 -j ACCEPT
iptables -A OUTPUT -d 192.168.0.0/16 -j ACCEPT
#Multicast for minidlna/SSDP (UDP port 1900)
iptables -I OUTPUT -d 239.255.255.250 -j ACCEPT
iptables -I INPUT -d 239.255.255.250 -j ACCEPT
#Allow VPN establishment, this is the port in the config's #remote
iptables -A OUTPUT -p udp --dport 1198 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p udp --sport 1198 -m state --state ESTABLISHED,RELATED -j ACCEPT
#Drop everything else
iptables -A INPUT -j DROP
iptables -A OUTPUT -j DROP
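
To verify SSDP traffic is actually flowing after the change, something like this should show the discovery chatter (the interface name is just an example):

sudo tcpdump -ni eth0 host 239.255.255.250 and udp port 1900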


convert color pdf to monochrome

gs \
 -sOutputFile=output.pdf \
 -sDEVICE=pdfwrite \
 -sColorConversionStrategy=Gray \
 -dProcessColorModel=/DeviceGray \
 -dCompatibilityLevel=1.4 \
 -dNOPAUSE \
 -dBATCH \
 input.pdf
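
To spot-check the result, Ghostscript can also render a page of the converted file to a grayscale PNG (the page selection and filenames are just examples):

gs -o page1.png -sDEVICE=pnggray -r150 -dFirstPage=1 -dLastPage=1 output.pdf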

face/touch id on webkit

Judging from https://webkit.org/blog/11312/meet-face-id-and-touch-id-for-the-web/, it looks like it might be possible to use an Apple device as a WebAuthn authenticator. Worth some experimentation!


Jump Bike invasion in Montreal and why Uber sucks

As Uber’s Jump electric bike offering arrives in Montreal, a sight is quickly becoming common: orange Jump bikes parked everywhere, in parks, on sidewalks, tied to private property or public structures, sometimes blocking the way for pedestrians.

Bad parking spot, blocks the sidewalk, out-of-area…

Uber’s permit stipulates that bikes have to be parked in designated spots only; a “designated spot” is a bike rack, not any random pole or fence. Of course, users park wherever is convenient, and of course, Uber washes its hands of any responsibility by claiming “it’s the user’s responsibility to park bikes in the appropriate spot”. Yet, for all their billions of dollars and technology, they are unwilling to enforce compliance with the permit the city has granted in any way.

I say “unwilling” because it’s entirely feasible to use technology to ensure compliance. I gave this some thought and came up with at least three solutions in five minutes, all using technologies that are well within Uber’s reach and areas of expertise. They could locate and identify the allowed parking spots and areas, and disallow releasing a bike unless it’s properly parked. This could be done by analyzing Google Street View imagery (using AI, with which Uber is quite adept), by sending humans to note the locations of valid parking structures, or by using AI to identify common and likely parking spots, based on the density and frequency of bikes parked nearby.

Interestingly, a much smaller company (Lime, the electric scooters) has also just arrived in Montreal, with similar restrictions as to where scooters can be parked. They came up with yet another solution: upon finishing a ride, one must take a picture of the scooter, showing it is correctly parked in a designated area, and upload it.

Another one yay

That very simple solution is easy and cheap to implement, and it keeps users honest while ensuring everyone complies with the conditions set by the city. Honesty, however, is something that escapes Uber; it’s been shown time and time again that they will do the bare minimum necessary, and sometimes not even that, if they perceive that the level of enforcement will let them get away with just shifting blame to users.

In my opinion, the city should mandate that Uber implement measures to curb badly-parked bicycles, seize badly-parked bikes and impose hefty fines on each one that is found, and ultimately (because these measures will NOT make Uber relent), just rescind their permit. A predatory, disrespectful company like Uber should NOT be allowed to operate in our city.


Do you deploy to production on Fridays?

Yes or No (or Bullshit you betcha I do 🤠 wee-haw)


Web pages are 60-70% ads nowadays

Here’s a screenshot of a typical web page, highlighting the actual content versus how much of the page is actually ads and unrelated stuff. (Click on the scaled image to see a larger version, but I warn you, it’s quite long.)

Holy canoli

KVM bridged to the LAN with DHCP

The goal here is to instantiate VMs with a br0 interface that grabs an IP from the LAN’s DHCP server, so that in turn each VM can instantiate LXD containers whose IPs are also exposed to the LAN. That way everything is visible on the same network segment, which makes some experimentation easier.

Host configuration

Some info taken from this URL.

The metal host is running Ubuntu 18.04, which uses netplan. Here’s the netplan config file (it lives under /etc/netplan/):

network:
    ethernets:
        enp7s0:
            addresses: []
            dhcp4: no
            dhcp6: no
            optional: true
    bridges:
        br0:
            dhcp4: true
            dhcp6: no
            interfaces:
                - enp7s0
            parameters:
                stp: false
                forward-delay: 0
    version: 2

With this, on boot the system grabs an address from the network’s DHCP service (from my home router) and puts it on the br0 interface (which bridges enp7s0, a Gigabit Ethernet port).
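
The config can also be applied without rebooting:

sudo netplan apply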

The system also has avahi-daemon installed so I can ssh the-server.local easily.

VM configuration

Next, the VM, which I created using uvt-kvm:

# Get a Xenial cloud image
uvt-simplestreams-libvirt --verbose sync release=xenial arch=amd64
# Create/launch a VM
PARAMS='--memory 8192 --disk 32 --cpu 4'
uvt-kvm create the-vm $PARAMS --bridge br0 --packages avahi-daemon,bridge-utils,haveged --run-script-once setup_network.sh
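
Once created, you should be able to wait for the VM to come up and get a shell; if I recall correctly, --insecure is needed because the cloud image here has no custom SSH credentials:

uvt-kvm wait the-vm --insecure
uvt-kvm ssh the-vm --insecure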

The setup_network.sh script takes care of setting up the network 🙂 This could be done more cleanly with cloud-init, but I’m lazy and wanted something fast.

The script brings down the cloud-init-configured interface, drops the config file I actually need in place, tells cloud-init NOT to reconfigure the network, deletes the cloud-init-created .cfg file, and brings the bridge up.

#!/bin/bash

echo "Acquire::http::Proxy \"http://192.168.1.187:3128\"; " >/etc/apt/apt.conf.d/80proxy

# Drop the cloudinit-configured interface
ifdown ens3

# Reconfigure the network...
cat >/etc/network/interfaces.d/1-bridge.cfg <<EOF
auto lo br0

iface lo inet loopback

iface ens3 inet manual

iface br0 inet dhcp
    bridge_ports ens3
    bridge_stp off       # disable Spanning Tree Protocol
    bridge_waitport 0    # no delay before a port becomes available
    bridge_fd 0          # no forwarding delay
EOF

echo "network: {config: disabled}" > /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
rm /etc/network/interfaces.d/50-cloud-init.cfg

# Then bring up the new nice bridge
ifup br0

apt-get remove -y snapd && apt-get -y autoremove

LXD configuration

Finally, install lxd. When asked whether to configure the lxd bridge, respond “no”; the next question asks whether to use an existing bridge. Respond “yes” and specify “br0”.
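
Alternatively, a non-interactive way to hook the existing bridge to LXD’s default profile (a sketch; assumes a recent-enough lxc client):

lxc profile device add default eth0 nic nictype=bridged parent=br0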

Now, when an lxd container is instantiated, it’ll by default appear on the same network (the home network!) as the VM and the main host, getting its address via DHCP from the home router.

When things break

Suddenly the bridge interface stopped working. I checked this to help diagnose it, but that wasn’t it. Turns out I’d installed Docker on the main host, and Docker messes with the firewall configuration by setting iptables -P FORWARD DROP. I just set it back to ACCEPT to get things working.
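
The fix, run on the host:

sudo iptables -P FORWARD ACCEPT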


Buying a house in Montreal – Getting a realtor

After visiting the mortgage broker and getting an idea of how much we can afford (oh I just found this link with some very sensible advice on how much you can afford), I wanted to go looking for a realtor to help me wade through the house-buying process. When I mentioned I didn’t yet have one, the mortgage broker offered to refer me to an agency. Sure enough, a few days later I got a call from a realtor who asked about my basic needs and signed me up for Centris, the Quebec equivalent of MLS.

The realtor enters your search criteria, such as price range, number of bedrooms/bathrooms, type of construction, desired neighborhoods and some other features. Then the system e-mails you when new listings are published. Supposedly you’ll have access to “pre-listings”, but in practice I’ve only been able to see the same listings that are published on http://centris.ca.

However, the Realtor-managed site does show a lot of additional information, like more detail on the rooms, better data on building/lot areas, and very importantly, information about closing dates, which may even influence the interest rate you’ll get for the loan.

Additionally, I sent my realtor a list we wrote with the requirements we had for a property. We indicated general building requirements, possible locations, and other wishlist items such as “no creepy basements”, closeness to parks, river and amenities, distance to public transport and so on. She thought this was quite useful to fine-tune the criteria, although since the process is mechanized, there are some criteria the system is not able to verify (e.g. no concept of creepiness on basements).

Keep in mind that the data captured in Centris may be inaccurate, and that can affect the filtering results. Example: initially we specified we wanted a driveway (though not necessarily a garage) and there were very few listings. After we removed the driveway criterion, a lot more popped up, and many of them did have driveways! The issue is that the listing brokers didn’t capture that information. So try to make your criteria as broad as possible, and do part of the filtering yourself when going through the listings.

Another example: we wanted two bathrooms (even if one of them is a half-bathroom with no shower), but if you ask the system for two bathrooms, it sometimes doesn’t count half-baths toward the criteria. So even though it’s super important for us, we decided to leave this criterion out and are focusing on visually checking for a second bathroom and/or the possibility of building one.

Once listings started landing in my inbox, we compiled a list of houses we wanted to visit and told the realtor about them.


Weechat trigger sounds based on specific keywords

Weechat used to require some weird Perl scripts to trigger actions on specific conditions, but since version 1.1 (from 2014) the trigger plugin can do all that without needing an external script.

This will create a trigger that runs a command when a specific word (or words) is mentioned in any channel you’re on:

/trigger addreplace warningword signal *,irc_in2_PRIVMSG "${message_without_tags} =~ (danger will robinson|stop the line|help me)" "" "/exec -bg /usr/bin/paplay /usr/share/sounds/ubuntu/notifications/Positive.ogg"
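
To check that the trigger was registered (or to remove it later), these should do it:

/trigger list
/trigger del warningword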

Vegan picadillo

Vegan picadillo, served with fried white basmati rice

Picadillo is a traditional Mexican dish, usually made with minced meat. Seitan, however, makes a great substitute for minced meat, and since most of picadillo’s flavor comes from the sauce and reduction process, the flavor stays mostly the same.

Ingredients

  • Half a kg of Seitan (here’s the best recipe we’ve found – can be made well in advance as it keeps nicely in the fridge).
  • One large potato, diced
  • Two large carrots, diced
  • One cup cooked green peas
  • Two cups of vegetable broth
  • Two or three tomatoes (about 500g worth)
  • Two garlic cloves, finely chopped
  • One quarter onion, finely chopped
  • 1 teaspoon olive oil

Serves 6-8.

How to make:

Mince the seitan: Chop it into small dice, then run it in small batches through a food processor on high, until you get a size similar to cooked minced meat.

Prepare the sauce: Put the tomatoes, garlic and broth in the blender, blend for 1 minute or until smooth.

Do the thing: In a large (5 L or more) pot, fry the onion in the olive oil until transparent. Then add the diced seitan, potato and carrot, pour in the sauce and stir (it should initially look like a stew; if it’s drier, make some more sauce and add it to the pot). Set the heat to medium-high, bring the mixture to a boil and let it simmer until the liquid is consumed and the carrots and potatoes are soft. BEWARE: there’ll come a point where you need to start stirring to avoid burning the bottom of the stew. This will happen even if the top seems to have enough liquid, so keep an eye on it. It should take 20-25 minutes to reduce the sauce to the desired consistency.

When done, stir in the already-cooked green peas (this keeps them firm; if you cook them in the stew they’ll go mushy).

Serve with white or red rice, or with corn tortillas.


How to configure e-mail alerts with Munin

I had a hell of a time configuring Munin to send out e-mail alerts when values surpass specific thresholds. Many of the articles I found focused just on setting up the email command (which was the easy part), while few told me *how* to configure the per-service thresholds.

Once the thresholds are configured, you’ll see a green line for the warning threshold and a blue line for the critical one, like in this graph:


Some of Munin’s plugins come with preconfigured thresholds (such as disk space monitoring, which will send a warning at 92% usage and a critical alert at 96% or so). But others don’t, and I wanted to keep an eye on e.g. system load, network throughput and outgoing e-mail.

The mail command can be configured in /etc/munin/munin-conf.d/alerts.conf:

contact.myname.command mail -s "Munin ${var:group} :: ${var:host}" thisisme@somewhere.com

Next, in /etc/munin/munin.conf, under the specific host I want to receive alerts for, I did something like:

[www.myserver.com]
    address 127.0.0.1
    use_node_name yes
    postfix_mailvolume.volume.warning 100000
    load.load.warning 1.0
    load.load.critical 5.0
    df._dev_sda1.warning 60

This will send an alert if the postfix plugin’s mail volume surpasses 100k, if the load plugin’s load values surpass 1.0 or 5.0 (warning and critical, respectively), or if the df plugin’s _dev_sda1 value is over 60% (this is disk usage).

Now here’s the tricky part: how do you figure out the plugin name, and the value names the plugin emits? (If you get these wrong, you’ll get the dreaded “UNKNOWN is UNKNOWN” alert.)

Just look in /etc/munin/plugins for the one that monitors the service you want alerts for. Then run it with munin-run, for example, for the memory plugin:

$ sudo munin-run memory 
slab.value 352796672
swap_cache.value 6959104
page_tables.value 8138752
vmalloc_used.value 102330368
apps.value 413986816
free.value 120274944
buffers.value 215904256
cached.value 4964200448
swap.value 28430336
committed.value 962179072
mapped.value 30339072
active.value 2746691584
inactive.value 2787188736

These are the values you have to use (so memory.active.warning 500000000 will alert if active memory goes above 500 MB).

A tricky one is diskstats:

# munin-run diskstats
multigraph diskstats_latency
sda_avgwait.value 0.0317059353689672
sdb_avgwait.value 0.00127923627684964
sdc_avgwait.value 0.00235443037974684

multigraph diskstats_utilization
sda_util.value 6.8293650462148
sdb_util.value 0.000219587438166445
sdc_util.value 0.000150369658744413

In this case, use diskstats_utilization.sda_util.warning (so the value in “multigraph” is used as if it were the plugin name).

diskstats_utilization.sda_util.warning 60

Easy mounting of host directories in an lxc container

This can be done manually as explained here. But I wanted to do it in one fell swoop, so this command worked:

echo "lxc.mount.entry = /src/path/i/wanted/to/share  /var/lib/lxc/container-name/rootfs/mnt none bind 0 0" | sudo tee -a /var/lib/lxc/container-name/config

If done frequently, a function may be useful, along the lines of the sketch below.
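
A minimal, untested sketch (it assumes LXC 1.x-style paths under /var/lib/lxc, and the container needs a restart to pick up the new mount):

lxc_bind() {
    # Usage: lxc_bind <container> <host-path> [mount-point-inside-rootfs, default: mnt]
    local container="$1" src="$2" dest="${3:-mnt}"
    echo "lxc.mount.entry = $src /var/lib/lxc/$container/rootfs/$dest none bind 0 0" \
        | sudo tee -a "/var/lib/lxc/$container/config"
}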


Proxying Python file-like objects for fun and profit

As part of a project I’m working on, I wanted to be able to do some “side processing” while writing to a file-like object. The processing is basically checksumming on-the-fly. I’m essentially doing something like:

source = get_a_readable_file_like_object()
destination = get_a_writable_file_like_object()

destination.write(source.read())

What I’d like is to also get the data read from source and use hashlib’s update mechanism to get a checksum of the object. The easiest way to do it would be using temporary storage (an actual file or a StringIO), but I’d prefer to avoid that since the files can be quite large. The second way would be to read the source twice. But since that may come from a network, it makes no sense to read it twice just to get the checksum. A third way is to have destination be a file-like derivative that updates an internal hash with each block read from source, and then provides a way to retrieve the hash.

Instead of creating my own file-like class where I’d mostly be “passing through” all the calls to the underlying destination object (which, incidentally, also writes to a network resource), I decided to use padme, which should already do most of what I need. I just needed to unproxy a couple of methods, add a new method to retrieve the checksum at the end, and presto.

A first implementation looks like this:

#!/usr/bin/python
from __future__ import print_function
import urllib2 as requestlib
import hashlib
import padme

class sha256file(padme.proxy):
    @padme.unproxied
    def __init__(self, *args, **kwargs):
        self.hash = hashlib.new('sha256')
        return super(sha256file, self).__init__()

    @padme.unproxied
    def write(self, data):
        self.hash.update(data)
        return super(sha256file, self).write(data)

    @padme.unproxied
    def getsha256(self):
        return self.hash.hexdigest()

url = "http://www.canonical.com"
request = requestlib.Request(url)

reader = requestlib.urlopen(request)
with open("output.html", "wb") as destfile:
    proxy_destfile = sha256file(destfile)
    for read_chunk in reader:
        proxy_destfile.write(read_chunk)
print("SHA256 is {}".format(proxy_destfile.getsha256()))

This however doesn’t work for reasons I was unable to fathom on my own:

python ./cp2.py
Traceback (most recent call last):
   File "./cp2.py", line 33, in <module>
     proxy_destfile.write(read_chunk)
   File "./cp2.py", line 20, in write
     return super(sha256file, self).write(data)
AttributeError: 'super' object has no attribute 'write'

This is clearly because super(sha256file, self) refers to the *class*, and I need the *instance*, which is the one with the write method. So Zygmunt helped me get a working version ready:

#!/usr/bin/python
from __future__ import print_function
try:
    import urllib2 as requestlib
except ImportError:
    from urllib import request as requestlib
import hashlib
import padme
 
 
from padme import _logger
 
 
class stateful_proxy(padme.proxy):
 
    @padme.unproxied
    def add_proxy_state(self, *names):
        """ make all of the names listed proxy state attributes """
        cls = type(self)
        cls.__unproxied__ = set(cls.__unproxied__)
        cls.__unproxied__.update(names)
        cls.__unproxied__ = frozenset(cls.__unproxied__)
 
    def __setattr__(self, name, value):
        cls = type(self)
        if name not in cls.__unproxied__:
            proxiee = cls.__proxiee__
            _logger.debug("__setattr__ %r on proxiee (%r)", name, proxiee)
            setattr(proxiee, name, value)
        else:
            _logger.debug("__setattr__ %r on proxy itself", name)
            object.__setattr__(self, name, value)
 
    def __delattr__(self, name):
        cls = type(self)
        if name not in cls.__unproxied__:
            proxiee = type(self).__proxiee__
            _logger.debug("__delattr__ %r on proxiee (%r)", name, proxiee)
            delattr(proxiee, name)
        else:
            _logger.debug("__delattr__ %r on proxy itself", name)
            object.__delattr__(self, name)
 
 
class sha256file(stateful_proxy):
 
    @padme.unproxied
    def __init__(self, *args, **kwargs):
        # Declare 'hash' as a state variable of the proxy itself
        self.add_proxy_state('_hash')
        self._hash = hashlib.new('sha256')
        return super(sha256file, self).__init__(*args, **kwargs)
 
    @padme.unproxied
    def write(self, data):
        self._hash.update(data)
        return type(self).__proxiee__.write(data)
 
    @padme.unproxied
    def getsha256(self):
        return self._hash.hexdigest()
 
 
url = "http://www.canonical.com"
request = requestlib.Request(url)
 
reader = requestlib.urlopen(request)
with open("output.html", "wb") as destfile:
    proxy_destfile = sha256file(destfile)
    for read_chunk in reader:
        proxy_destfile.write(read_chunk)
print("SHA256 is {}".format(proxy_destfile.getsha256()))

Here’s the explanation of what was wrong:

– first of all, the exception tells you that the super-object (which is a relative of base_proxy) has no write method. This is correct. A proxy is not a subclass of the proxied object’s class (some classes cannot be subclassed). The solution is to call the real write method. This can be accomplished with type(self).__proxiee__.write().

– second of all, we need to be able to hold state, namely the hash attribute (I’ve renamed it to _hash, but that’s irrelevant to the problem at hand). Proxy objects can store state, it’s just not terribly easy to do. The proxied object (here a file) may or may not be able to store state (here it cannot). The solution is to make it possible to access some of the state via standard means. The new (small) stateful_proxy class implements __setattr__ and __delattr__ in the same way __getattribute__ was always implemented. That is, those methods look at the __unproxied__ set to know whether access should be routed to the original or to the proxy.
– the last problem is that __unproxied__ is only collected by the proxy_meta meta-class. It’s extremely hard to change that meta-class (because padme.proxy is not the real class you ever use; it’s all a big fake to make proxy() both a function-like and class-like object).

The really cool thing about all this is not so much that my code is now working, but that those ideas and features will make it into an upcoming version of Padme 🙂 So down the line the code should become a bit simpler.


Speeding up local debian builds with sbuild (eatmydata, apt-cacher-ng and config laziness)

As part of the team that maintains several testing tools for Ubuntu, including checkbox, I sometimes find myself needing to build .deb packages from our source tree.

“building stuff is hard…”

A simple way of achieving this is of course to run dpkg-buildpackage or even bzr-buildpackage. Assuming all build-deps are correctly installed in the host system, this will result in a nicely built set of .debs.

This approach has a few caveats, in that it’s different from the build process actually employed to create the packages that ultimately get uploaded to Ubuntu (or even the ones available in Launchpad PPAs).

The first is that Launchpad builds the packages in a “clean” environment, installing build-deps from scratch, whereas dpkg-buildpackage relies on what’s installed in the system. So if you miss specifying a build-dep, your local build may work because you happen to have it installed, but the PPA build will fail because it won’t be present.

The second big difference is that with the local approach, you’re “limited” to building packages for the “host” system. Sure, you can specify a different target release in your debian/changelog file, but some aspects of your build may be tied to your system’s tools, versions and layout, and if for some reason they don’t match the actual target at installation time, things will fail in interesting ways.

Clearly, one way to test what the Launchpad build process will spit out is to build a source package and dput it to be built directly on a PPA. The problem here is that the feedback loop becomes excruciatingly slow; PPAs are a shared resource, and build times can go from minutes to many hours.

Based on all this, it makes sense to try to use a local build environment that more closely replicates what PPAs do to build your packages.

Fortunately, the PPA builders use free software, so it’s relatively easy to do local builds in a similar environment, completing quickly due to use of local resources, and only upload to Launchpad once you’re pretty sure your build will succeed.

The software in question is sbuild, and I already wrote a post detailing how to install sbuild and set up a build environment for any Ubuntu release you need.

This setup works fine for the occasional package build when you know the packaging is mostly correct. For a quick-to-build package such as checkbox, setting up the build environment with all needed packages and build-deps takes about 10 minutes (depending mostly on download speed for all the packages). Of course, on a more complex package, compilation time may start to be a factor.

Anyway, the 10-minute time can be too slow if you’re trying to fix a tricky problem and need a fast feedback loop. Plus, the process produces a lot of transient files and downloads the same set of packages many times, so there’s plenty of room for improvement here.

Speeding up local package installation and build

Eatmydata: it’s so fast! (but not too safe)

A large part of the time spent doing the “local” part of the process is writing files to disk. One way to speed this up is to use a ramdisk to store the build. I’m too lazy and have too little RAM to use this approach, so the alternative was setting up eatmydata inside the chroot. Since these are mostly temporary files or throwaway packages, it’s OK to lose the safety of constant syncs in exchange for a huge boost in speed.

The setup for eatmydata inside the chroot is described here. This looks a bit hard to automate, but luckily we don’t have to, as recent versions of mk-sbuild support an --eatmydata parameter; if given, this installs eatmydata inside the chroot and changes the chroot config file to enable it.
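
So creating a tweaked chroot boils down to something like (the release name is just an example):

mk-sbuild --eatmydata trusty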

Adding a PPA

You can add a custom PPA to an image. Once the chroot image is built, enter the “golden master”:

sudo schroot -c source:saucy-amd64 -u root

You can add a deb line (get it from launchpad) to your sources:

cat >>/etc/apt/sources.list.d/something.list
# Paste the deb line here, then press Ctrl-D

Then you need to get the GPG key for the PPA and add it manually with the very basic tools provided in the chroot (sorry, no apt-add-repository):

apt-key add -
# Paste the ASCII-armored GPG key here, then press Ctrl-D

Then exit the golden image. After this, your builds from this chroot will be able to fetch packages from the PPA.

Again, that’s a bit of work to do for each chroot. Instead, what I did was create a file in /etc/schroot/setup.d to do this automatically. You can of course replace the PPAs you need in the echo lines at the end. Name the file something like 81add-ppas:

#!/bin/sh
set -e
. "$SETUP_DATA_DIR/common-data"
. "$SETUP_DATA_DIR/common-functions"
. "$SETUP_DATA_DIR/common-config"

# Debugging aid: record the stages this script runs in
echo "$STAGE" >>/tmp/stages

if [ "$STAGE" = "setup-start" ] || [ "$STAGE" = "setup-recover" ]; then
    echo "APT::Get { AllowUnauthenticated \"1\"; };" > "$CHROOT_PATH/etc/apt/apt.conf.d/80unauthenticate"
    info "ADDING PPAS"
    SLD_PATH="${CHROOT_PATH}/etc/apt/sources.list.d/roadmr.list"
    . "$CHROOT_PATH/etc/lsb-release"
    MY_RELEASE=$DISTRIB_CODENAME
    [ -n "$MY_RELEASE" ] || MY_RELEASE=trusty
    echo "# Added by the schroot setup mechanism (roadmr)" > "$SLD_PATH"
    echo "deb http://ppa.launchpad.net/checkbox-dev/ppa/ubuntu $MY_RELEASE main" >> "$SLD_PATH"
    echo "deb http://ppa.launchpad.net/ubuntu-sdk-team/ppa/ubuntu $MY_RELEASE main" >> "$SLD_PATH"
fi

Notice that, again, I was very lazy: instead of downloading the GPG keys as shown above (for some reason, running gpg from the setup script didn’t work), I just configured apt to allow unauthenticated packages. Since this sbuild is mainly for testing purposes, it’s not a big deal to skip this verification step. Also, there’s some logic to automatically detect the chroot release, so the same config file works equally well for any Ubuntu release.

Apt-cacher-ng

As the name suggests, this nifty utility will cache packages so the next time you need them they’ll be fetched from local storage rather than from the network. A bit of config is needed to have sbuild download packages from here.

Hello, I got these packages cached for you…

First, install apt-cacher-ng on the host system. You can verify it’s listening on port 3142 by any means you like.
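
For instance, apt-cacher-ng serves a status page, so this should return HTML if it’s up:

curl http://127.0.0.1:3142/acng-report.html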

Then, to set it up automatically in chroots, add this to the host system’s /etc/schroot/setup.d/80apt-cacher-ng (rather, create that file; it doesn’t exist by default):

#!/bin/sh
set -e
. "$SETUP_DATA_DIR/common-data"
. "$SETUP_DATA_DIR/common-functions"
. "$SETUP_DATA_DIR/common-config"

if [ "$STAGE" = "setup-start" ] || [ "$STAGE" = "setup-recover" ]; then
    echo "# Added by the schroot setup mechanism (roadmr)" > "${CHROOT_PATH}/etc/apt/apt.conf.d/80proxy"
    echo "Acquire::http::Proxy \"http://127.0.0.1:3142\";" >> "${CHROOT_PATH}/etc/apt/apt.conf.d/80proxy"
fi

With these two setup.d scripts and the --eatmydata magic, it’s easy to create sbuild environments that are much faster when building packages.
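
A typical build against such a chroot then looks something like this (the schroot name and .dsc file are examples):

sbuild -d trusty-amd64 msmtp_1.4.31-1.dsc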

As a comparison, building msmtp (chosen because it exercises mainly the speedup components, without needing any packages from a PPA) takes about 40 seconds with these suggested tweaks:

Build Architecture: amd64
Build-Space: 5948
Build-Time: 17
Distribution: trusty
Host Architecture: amd64
Install-Time: 12
Job: msmtp_1.4.31-1.dsc
Machine Architecture: amd64
Package: msmtp
Package-Time: 40
Source-Version: 1.4.31-1
Space: 5948
Status: successful
Version: 1.4.31-1
──────────────────────────────────────────────────
Finished at 20140320-1301
Build needed 00:00:40, 5948k disc space

Whereas on a non-tweaked chroot it takes about 1 minute 38 seconds:

Build Architecture: amd64
Build-Space: 5568
Build-Time: 17
Distribution: trusty
Host Architecture: amd64
Install-Time: 31
Job: msmtp_1.4.31-1.dsc
Machine Architecture: amd64
Package: msmtp
Package-Time: 98
Source-Version: 1.4.31-1
Space: 5568
Status: successful
Version: 1.4.31-1
──────────────────────────────────────────────────
Finished at 20140320-1310
Build needed 00:01:38, 5568k disc space

That’s about two and a half times faster, though the comparison is misleading because I deliberately chose a small, quick-to-compile package. Still, you can at least reduce network and disk access very easily now. Note also that my test system has a fast SSD; the speedup on a traditional rotational hard disk is likely to be much higher.


Markdown

Projects on github will show a README or README.md file directly on the project page. This is a good place to give an introduction or quick instructions for your project. The README.md variant supports Markdown, which lets you craft a file that is both readable as plain text and renders nicely when viewed directly on github.

Here’s a handy Markdown syntax reference and tutorial. Also, at some point I needed clarification on how to make nested lists, which I found on StackOverflow. There’s a wealth of Markdown-related information on the web!

Two useful tidbits. To render a Markdown document to HTML for previewing (so you don’t have to upload stuff to github just to see what your README will look like), install:

apt-get install python-markdown

and then run

markdown_py

on your README.md.
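
For example, to write the rendered HTML to a file you can open in a browser:

markdown_py README.md > README.html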

Also, vim supports Markdown and will do its best to help you, but one unhelpful thing is its insistence on rendering underscores (_) in inverted text (as it assumes they begin an emphasized section). Just a warning 🙂