Ubuntu – Automating virtual machine installation using network preseeds

Virtual machines are very useful for testing. I often use them to verify changes to software without messing up the local environment. Out of laziness I use VirtualBox and install official Ubuntu ISOs on them, rather than something more elegant/complicated such as KVM, LXC containers or chroots. This replicates an actual desktop environment pretty closely, so it's ideal for reporting bugs and validating that fixes to software work as expected.

Taking a virtual machine to a point where it’s mostly usable is a bit involved. I launched the desktop ISO, did the manual install procedure, rebooted, installed the VirtualBox extensions so I could mount the host’s drives, did some group changes, rebooted again… this is getting a bit tiring!

I had a quick look at Vagrant to see if it could somehow ease the task. It’s very interesting but didn’t really work in this case, as the virtual machine still has to be set up the way I describe before being able to package and then use it. What I’m after, really, is a way to set up a VM from scratch, just by doing the installation and adding a few extra packages.

This is what preseeding does, but up until now I had only played with local preseeds, baked into the ISO image. I imagined that loading a preseed from the network would be difficult to set up quickly on a personal workstation, which is what would best fit my use case.

Turns out that VirtualBox and a simple Python module make this very easy. With the default configuration (NAT networking), a VirtualBox VM will get an IP address through DHCP, and it will be able to reach the host's public IP address. So as long as we configure the Ubuntu installer correctly and have something serving the preseed file, things are very easy. One of the things I like about this is that experimenting is as easy as changing the preseed and rebooting the VM. About the only cumbersome part is typing the kernel parameters every time, but since there are only three of them to type/change, this is not as bad as it sounds.

  1. Put your preseed file (called, for instance, preseed.cfg) in a directory.
  2. Change to this directory and run python -m SimpleHTTPServer. This starts a miniature HTTP server on port 8000.
  3. If you like, verify that the preseed is served properly: wget http://localhost:8000/preseed.cfg (localhost works when testing from the host itself).
  4. Set up the virtual machine, point it to the Ubuntu installation CD, start it.
  5. When you get the keyboard and human icon, press any key.
  6. Move to “install Ubuntu” but don’t press Enter.
  7. Press F6 to access the “advanced mode”. At this point we’re modifying the kernel command line.
  8. Go to the beginning, delete the “file=” portion.
  9. Add "auto url=http://10.0.2.2:8000/preseed.cfg" (with the default NAT networking, 10.0.2.2 is how the guest reaches the host; use your host's address if your setup differs).
  10. Replace “only-ubiquity” with “automatic-ubiquity”.
  11. Press Enter
  12. Sit back and relax while the virtual machine gets installed.
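To recap the host-side part, a minimal sketch (the directory and ISO are up to you; with the default NAT networking the host is reachable from the guest as 10.0.2.2):

cd ~/preseeds                  # the directory containing preseed.cfg
python -m SimpleHTTPServer     # serves the current directory over HTTP on port 8000
# After steps 8-10, the guest's kernel command line ends up roughly as:
#   boot=casper automatic-ubiquity initrd=/casper/initrd.lz auto url=http://10.0.2.2:8000/preseed.cfg quiet splash --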

This fits the bill perfectly for me: it removes the manual steps in setting up a testing VM (which I don't need to keep afterwards, so I can just delete it and recreate it with the same procedure), allows for easy experimentation and customization, and doesn't use a lot of strange technologies or components.

Here’s a link to a sample, basic preseed file. You can customize mainly the late_command (rather, the success_command for ubiquity) and anything else you like. The installation-guide-amd64 package has more details and sample preseed files.
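For reference, a minimal sketch of what such a preseed might contain (the user, password, timezone and success_command here are placeholders to adapt; the installation guide has the full list of keys):

d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/layoutcode string us
d-i passwd/user-fullname string Test User
d-i passwd/username string testuser
d-i passwd/user-password password insecure
d-i passwd/user-password-again password insecure
d-i user-setup/allow-password-weak boolean true
d-i time/zone string America/Montreal
d-i partman-auto/method string regular
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
# ubiquity runs this at the end of a successful desktop install:
ubiquity ubiquity/success_command string touch /target/root/preseeded-ok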

Note that for server installations the kernel command line will be a bit different:

  • No need to add automatic-ubiquity.
  • You DO need to add the “auto url=blahblah” part.
  • For it to be 100% automated, you need to specify a few parameters that in debian-installer are requested *before* the preseed is loaded. Add these (see the example after this list): debconf/priority=critical locale=en_US console-setup/ask_detect=false console-setup/layoutcode=us netcfg/choose_interface=auto
  • Note that for debian-installer, the late_command is used as opposed to the ubiquity/success_command.
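For instance, a server-install kernel command line combining the parameters above might look like this (again, 10.0.2.2 is the host as seen from a NAT guest):

auto url=http://10.0.2.2:8000/preseed.cfg debconf/priority=critical locale=en_US console-setup/ask_detect=false console-setup/layoutcode=us netcfg/choose_interface=auto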

Reference

Debian preseeding guide

Ubiquity automation


Video conversion for iPhone with avconv

avconv replaces the venerable ffmpeg. It can be used to convert videos for the iPhone quite easily.

avconv is in Ubuntu's libav-tools package; install that, then run this script:
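A minimal sketch, with hypothetical file names (the exact codec flags can vary between avconv versions):

avconv -i input.avi -vcodec libx264 -acodec aac -strict experimental output.m4v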

Another example. This uses time to calculate elapsed time, as well as nice and ionice to try to reduce the impact on system resources. It forces downsampling to two audio channels (-ac 2), useful if the source audio stream is in e.g. 5.1 format.
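Along these lines (again, file names are placeholders):

time nice -n 19 ionice -c3 avconv -i input.mkv -vcodec libx264 -acodec aac -strict experimental -ac 2 output.m4v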


A final example which forces a specific aspect ratio. The source video had the correct pixel dimensions but a bad aspect ratio was encoded in the original file (and was carried over to the recoded one), making it look squished.
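Something like this, with 16:9 standing in for whatever the correct ratio happens to be:

avconv -i input.avi -aspect 16:9 -vcodec libx264 -acodec aac -strict experimental -ac 2 output.m4v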

Vim and the X clipboard

Usually when I needed to paste stuff from a text file into a GUI program (most commonly, the browser), I resorted to opening the text file in gedit and copying/pasting from there. Using the X clipboard by selecting text with the mouse kinda worked, but it’s subject to Vim’s visual representation of the text, which may include unwanted display-related breaks. So using gedit was the easiest, but also awfully kludgy solution.

I did some research and learned that Vim does have direct access to the X clipboard: basically "+y to yank selected text into the X clipboard, and "+p to paste its contents into Vim. I tried these commands and then tried to paste into a GUI application; they didn't work. My installed version of Vim in Ubuntu lacked the xterm_clipboard feature. I was in despair!

Then I came across this bug report in Launchpad. Upon reading it I realized that it was as simple as installing vim-gtk. I had never considered this, as it includes a graphical Vim version which I have absolutely no use for. However the bug report mentions that it also includes a text version of vim compiled with X clipboard support. So I installed, fired up Vim, and the feature works well!
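In a nutshell (package name and commands as found in Ubuntu):

vim --version | grep xterm_clipboard   # a minus sign (-xterm_clipboard) means no X clipboard support
sudo apt-get install vim-gtk           # also provides a console vim built with +xterm_clipboard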

I can now have a buffer with long lines, with :set wrap and :set linebreak, which would be awful to cut/paste with the mouse. I can select text using vim commands and just yank it into the + register, and it's instantly available in the X clipboard. Bliss!


Ubuntu and Juju with local providers

Want to play with Ubuntu’s awesome Juju but don’t want to get into the hassle of getting EC2 configured?

https://juju.ubuntu.com/

It’s actually pretty easy to set up a local provider to experiment with this.

You need to be running Ubuntu 12.04 (yes, it's not released yet but you can use the beta version or daily images). Oh, and you can install this on a virtual machine if you really don't want Juju to mess with your actual system.

Make sure you have a valid SSH key; if you don't have one, create it with:
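For example:

ssh-keygen -t rsa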

Then install the needed packages:
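Probably something like this (the exact package list for the 12.04 local provider may differ slightly):

sudo apt-get install juju lxc libvirt-bin apt-cacher-ng zookeeper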

Once the packages are installed, run
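Presumably something along these lines, so the group membership takes effect without logging out:

sudo adduser $USER libvirtd    # make sure you're in the group...
newgrp libvirtd                # ...and make it effective right away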

This adds you to the libvirtd group at runtime.

Then run juju bootstrap. Juju will complain about a config file or something. Ignore it! Then edit ~/.juju/environments.yaml and replace everything in that file with this:
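Something along these lines; the field names here are from the old juju local-provider docs, so double-check them against the version you installed, and data-dir and admin-secret are placeholders:

environments:
  local:
    type: local
    data-dir: /home/youruser/juju-data
    admin-secret: replace-with-your-own-md5-key
    default-series: precise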

The admin-secret is a random MD5 key; you should probably generate your own with something like this:
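One way to do it (any source of randomness piped through md5sum will do):

head -c 1024 /dev/urandom | md5sum | cut -d ' ' -f 1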

Then finally it’s time to bootstrap things:
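That is, run it again now that the config file is in place:

juju bootstrap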

This will exit pretty quickly, but things are not ready yet. Note that it will take a few minutes to get packages and actually prepare the nodes.

Once your juju is bootstrapped you can follow the rest of the steps here:

https://juju.ubuntu.com/docs/user-tutorial.html#bootstrapping

Notifications – during and after launching

When I launch a long-running process I like to forget about it, but how do I know when it’s finished?

You can of course have it send an email after it finishes:
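For example (the command and the address are placeholders):

do-something-long; echo "finished" | mail -s "finished" you@somewhere.com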

For this to work, it’s very useful to have ssmtp configured, so you have a sane, working local SMTP agent.

You can also send the notification only if the command succeeds:
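Just use && instead of ;, so the mail only goes out if the command exits successfully:

do-something-long && echo "success" | mail -s "success" you@somewhere.com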

OK, so you forgot to add the notification to your initial command line. You can use a loop to monitor a particular process and notify you when it’s done.

In this case I'll be monitoring an instance of netcat. Determining the process name is up to you 🙂 The ^ and $ anchors make pgrep match the executable name exactly.

The while loop will run while the process exists; once the process disappears the loop continues with the next instruction in the line, which is popping up an alert on the desktop and then sending an email. So if I’m not glued to the desktop, I’ll still get an email when this is done.

while pgrep ^nc$; do sleep 5; done; alert; (echo "finished" | mail -s "finished" you@somewhere.com)

find’s printf action

If you use find, it outputs full paths, which may not always be desirable. It turns out find has a -printf action with which you can do niceties such as outputting plain filenames (as if you'd used basename on them, but with one less command in your pipeline):
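For example, to print just the base names of matching files, one per line:

find . -name '*.txt' -printf '%f\n'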

The -printf action has a lot of formatting variables and possibilities! Give it a try; look at the man page for more information.

ASCII video rendering

If you’re a CLI jockey you may enjoy looking at a nice ASCII rendering of your face via your webcam:

mplayer -vo caca tv:// -tv driver=v4l2:width=640:height=480:device=/dev/video0

Or to watch your favorite video in ASCII rendering:

vlc --vout caca some-file.avi


Random picking without repetition

So the problem was to draw people at random from a list. The list is contained in a leads.txt text file, one per line.

This nifty one-liner will output a randomly-picked person from that file every time it's invoked. It'll then remove the name from the file so it doesn't get repeated.
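A sketch of such a one-liner (it assumes the names contain no regex metacharacters):

name=$(shuf leads.txt | head -1) && echo "$name" && sed -i "/^$name$/d" leads.txt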

It can be shortened by changing shuf | head -1 to shuf -n 1.

If you’d rather avoid deleting already-chosen entries from the file, this version just comments the names it picks:
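For instance (again assuming plain names), skipping entries that are already commented out:

name=$(grep -v '^#' leads.txt | shuf | head -1) && echo "$name" && sed -i "s/^$name$/# &/" leads.txt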

Building Debian/Ubuntu packages with sbuild

Many of the on-line instructions and tutorials are quite complicated. Why? It was easy for me:

To create the build chroot (a lightweight "virtual machine" of sorts):
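For example, for a precise i386 chroot:

mk-sbuild --arch=i386 precise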

This will create a schroot in /var/lib/schroots/precise-i386. Note how it appends the architecture to the schroot name. Also note that the first time you run mk-sbuild, it'll show you a configuration file and configure your environment. I didn't change anything in the config file; I used it as it was. When it prompts you to log out, do it, otherwise things won't work.

OK now you want to build a package using your chroot with sbuild:
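Something like this (the .dsc name is a placeholder):

sbuild -A -d precise mypackage_1.0-1.dsc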

This will build the package on precise for ALL available architectures. Note that -d is just “precise”; the -A flag will tell sbuild to build architecture: any packages for all available architectures (so if you have amd64 and i386 chroots, it’ll do the right thing and build two packages).

If you want to build arch-specific packages:
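That is, name the arch-qualified chroot directly:

sbuild -d precise-i386 mypackage_1.0-1.dsc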

This will magically build for the given architecture (i386). Note that arch: any packages will also be built.

You can also specify the arch as a parameter (but then you have to leave it out of the -d name):
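For instance:

sbuild --arch=i386 -d precise mypackage_1.0-1.dsc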

This will not work:
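Presumably, specifying the architecture in both places, like this:

sbuild --arch=i386 -d precise-i386 mypackage_1.0-1.dsc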

Using diff on the output of two commands – named pipe and bash magic

Ever wanted to diff the output of two commands? Usually it’s done by first piping each command to a temporary file and then diffing them.

The following syntax creates a named pipe for the command and uses the pipe’s name instead of a filename. Bash takes care of everything automagically so all you have to do is:
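For example, comparing the listings of two directories:

diff <(ls dir1) <(ls dir2)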

That’s a dumb example, but how about this?
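Say, comparing the list of installed packages on two machines (the hostnames are placeholders):

diff <(ssh server1 dpkg -l) <(ssh server2 dpkg -l)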

The commands can be as complicated as you need them to be!

Why I’m staying on Unity

A very interesting conversation erupted today, beginning when a coworker sent a lengthy email stating his reasons for altogether leaving Ubuntu 11.04’s new Unity desktop interface and instead resorting to the good, old-fashioned Gnome 2 “Classic” session.

In it he makes some very valid points about functionality that’s different to what he was used to. This understandably affects his workflow, so instead of wrestling with a new interface, he chose to go with the old one, hopefully until Unity matures enough for him to be able to customize it to his liking.


What’s interesting was the amount of responses it got, where everyone spoke about their “pet peeves” with Unity. The vast majority were changes in how Unity handles things, that interfered with people’s workflows. It’s understandable that even a small change in how your user interface behaves, when you’ve become adept at working with it, disrupts things enough (and annoyingly enough) that you either go back to the old user interface, or just start fiddling with the new one until you find a way to get things to an acceptable state.

Which is what struck me as curious about this thread: there were basically two camps, those who flat out abandoned Unity for the time being, and those who actually went looking into how Unity behaves and integrates with the environment, and came up with ways to make Unity more comfortable to those used to the “old ways” of Gnome 2.x and its desktop interface.

Without demerit to the original poster, whose points were quite valid, a lot of responses suggested ways to solve about 80% of his complaints about Unity. However, the fact that it took a team of experts to solve the problems that a user (and another expert, at that) was experiencing, is testament to the fact that Unity could still be made more intuitive, easier and more customizable.

I finally upgraded to Ubuntu 11.04 and Unity this past weekend. Like many, I experienced some usability issues, where the desktop wasn’t behaving the way I was used to. However, my use of the system means that I basically want the UI to stay out of my way. So the main change I had to make was to get the Unity dock to auto-hide, so that it only appears when I ask it to. The rest of the time it’s hidden away. Everything else, well, it’s admittedly different than what I’m used to, but that’s change for you. Was Unity making a change for change’s sake? Maybe so, but I think it’s change in the right direction. Even if it somewhat alienates experienced users (for whom, however, workarounds exist that handle nearly all their concerns), I think the true success of Unity is in how it works for new users. And here are two examples.

Another coworker posted his experience with showing Ubuntu and Unity to a newbie, fresh-from-Windows user. The user’s comments were along the lines of “this looks nice”, “It’s easy to use” and “I’m keeping it”.

Also, even though some have complained about the app lens being hard to use (and it's a complaint I've already seen twice), I've seen users realize "but hey, if it's really that messy, you can use the search field to find what you need, right?". So yes, end users are realizing this, and it's just a matter of polishing how things work. If anything, I think it's great to move users away from the "the computer has only two buttons" mindset and get them using the keyboard a little more.

So yes indeed, I'm staying on Unity, and I'm looking forward to seeing it mature into a better desktop interface. As Mark Shuttleworth said, it's a foundation on which the next generations of the Ubuntu user experience will be built. I'll be thrilled to be along for the ride.

Finally, for a great write-up on why your desktop changed, and on why the developers would appreciate you giving it a whirl and helping improve it (even just by commenting on the stuff you find hard, unintuitive or just plain wrong) rather than just swearing off these newfangled changes (without which, face it, you'd still be using fwm and MIT Athena widgets), please drop by Federico Mena-Quintero's activity log and read his wonderful, short article "Moving into your new Gnome 3 house".

The perfect keyboard layout?

I remember an easier time when all keyboards had the same layout (C-64, anyone?) and if you wanted to type special characters you had to resort to arcane command sequences, if they were at all possible.

My, how times have changed.

My first PC compatible had a spanish keyboard, and you could very simply tell the OS (MS-DOS) about your keyboard layout. For a while this worked pretty well. Then someone decided that Latin America was so different from Spain that we needed our very own keyboard layout; this layout just moves stuff around needlessly, destroying many years of experience for those of us who were accustomed to the spanish keyboard. I understand removing the ç, as it's not used in Latin America, but why move all the rest of the stuff around?

Latin American Keyboard

So basically I got used to the spanish keyboard which has worked well in all kinds of OSes, from MS-DOS to Windows, OS/2 and yes, Linux.
Meanwhile, the Latin American layout was such a pariah that at some point its "la" code got taken over by the Latvian keyboard, so after a system upgrade your keyboard was all of a sudden in latvian and you had to select "latam" for Latin America.

French Canadian Keyboard

Eventually I happened to get a laptop with a Canadian French keyboard. Luckily, this is not the dreaded french AZERTY keyboard, but basically an english keyboard layout with most symbol keys mapped very strangely. So if you want to type the basic alphabet you’re OK, like you’d be with an english keyboard, but things start getting weird when you need to create special characters or compose accents, cedillas and stuff like that. This was so different from any other layout I’ve used, that I was basically freaking out. I could just ignore the red characters on my keyboard, and/or use it as just an english keyboard, but I routinely need to compose text in spanish and in french, so how would I go about doing this?

And no, the ages-old trick of memorizing ASCII codes for special characters doesn't cut it: for one, it's unreliable on Linux (especially in graphical mode), and for another, it's just primitive! I used to chuckle at all the people I've seen through the years who had a nice "cheat sheet" glued to their desktop with ASCII codes for frequently-used accented characters, as opposed to taking 15 minutes to correctly configure their keyboards to do this natively.

So anyway, what I came across while checking out the available keyboard maps under Linux and trying to figure out how to type stuff on the Canadian keyboard, was this wonder of wonders, the US International with AltGr Dead Keys layout.
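To try it from a terminal, the xkb variant should be the one called altgr-intl:

setxkbmap -layout us -variant altgr-intl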

Basically, it takes the right Alt key (labeled AltGr on my keyboard, a monstrosity I was already used to from the Latin American and spanish keyboards) and uses it to "compose" or "dead-key" stuff (dead keys are like accents, for instance, where you press the accent key and then the next letter you type will be accented). In combination with ~, ", ' and `, this enables me to type nearly all accented characters with relative ease.

Also, I can use AltGr+vowel to type acute-accented vowels (áéíóú), and AltGr+n for ñ.

Grave accents (è) and tilded letters (ã) can be composed with AltGr+accent (use ` for grave, ~ for tilde), and then the letter you want to type.

What I like about Linux’s keyboard selection thingy is that you can see an actual layout map. Thus, even if my keyboard doesn’t have the characters stenciled in, I can take a quick peek and see where stuff I need might be.

Thus I can do things like use ç or €, all with a minimum of fuss. Also, more complicated stuff like ï œ ø is still just one AltGr+key away. All this while preserving a layout that's very familiar to everyone (english), and where most strange characters used while programming {}][\|~ are also much easier to reach than on the spanish keyboard I was used to (which needs AltGr for all sorts of braces and pipes, making it very painful on my hands).

The actual US International with AltGr deadkeys layout as shown by the Gnome keyboard selection applet.

So there you have it: if you see yourself wrestling with choosing a good physical keyboard layout *and* making it work on your OS, stop pulling your hair out, get an english-layout keyboard and use US International with AltGr Dead Keys!

Internet access in Canada – Stop the Meter!

I have no idea what the CRTC is. But they recently ruled on something that means my internet provider, which I chose based on the fact that they had no download caps (unlike other greedy providers like Bell, Rogers or Videotron), will now have to institute said caps.

In practical terms this is how it looks. If I had my link downloading stuff at 100% capacity, I could potentially download 1500 GB of data in a month. I’d pay 45$ a month for this. Now, however, they limit me to 30GB a month, for the same price. If I go over, they charge me 1$ per extra GB, up to a limit of 60$ a month, at which point I’ll have downloaded 90GB. At this point they stop charging extra and I can download more. However, if I hit 300GB, they cut me off for the rest of the month.

This is basically them not honoring the contractual obligation which says I get X amount of service for Y amount of money. This is extremely unfair for me, and although I understand the position ISPs are in, at the mercy of big telcos, I certainly wish they’d put up a bit more resistance to this.

So indeed, the problem for the end user is that it's more expensive and inconvenient to use bandwidth. And however much telcos whine about how "power users" are the ones saturating the pipes, the fact is that these power users pay for the bandwidth they use, at the rates set by the providers; the providers now wanting to basically charge more for the same service is a bit ridiculous and speaks of greed and money-hunger. It also stifles innovation, the kind that would move big media out of the picture, since I'm actually penalized for doing stuff like watching TV online, which is very convenient for me but uses quite a bit of bandwidth (bandwidth that, on the other hand, I'd already paid for).

I’d be inclined to go with this counterproposal to my ISP, and ultimately to the big telcos.

Previously I could download 1500 GB in a month for 45$, meaning each GB cost me 0.03$ (compare this to the abusive 1$ per GB for overage, an increase of over 3000%). So what I propose is: if you want to limit me to 30 GB, then I'll pay only 0.90$ a month, which by their previous rates is a fair amount. Hell, limit me to 300 GB, for which I'll pay only 9$ a month.

Optionally, OK, charge more if I go over 30 GB. But conversely, and if the power user / common user argument holds any value, prorate for users who go UNDER the 30 GB; so if I download only 10 GB in a month, I only pay 15$ for internet access.

I bet no ISP would like going with either of these proposals. Guess what: We users don’t like your proposal either. So please go to www.stopthemeter.ca and raise your voice against this idiotic measure that puts Canadians at a huge disadvantage technology- and connectivity-wise.

The myth of better device support on Windows

It’s long been argued that peripheral support in Linux is far inferior to that under Windows, and that this has been a factor for Windows’ dominance in the desktop. More and more, the myth that Windows has any kind of technical superiority leaves place to the fact that marketing, and being bundled with nearly every PC sold worldwide, are Windows’ only keys to its widespread adoption. And here’s a story to prove that point.

I bought a printer (HP Photosmart C4780). It's one of those cheap, $50 numbers that eat through ink like crazy. So I came home wondering if I'd have to install the 500 MB of crap included on the bundled CD to get the printer to work with my Mac at home.

As is usually the case with the Mac, I just plugged it in and it worked, both the printer and the scanner, without a hitch or problem.

I then proceeded to do the same on a freshly installed Ubuntu 10.10 laptop. Same story, the printer just worked, and Ubuntu even recognized it when being plugged in, no need to install drivers or anything.

Now, on Windows the printer wouldn't have worked at all without installing a boatload of crap; HP is notoriously bloaty when it comes to their bundled software.

The usual wisdom is that hardware manufacturers care more about Windows, and ship all their hardware with drivers and stuff to make it work. It would seem, then, that the burden is on Apple and Linux distributions to provide drivers and support to most hardware. It would seem like a daunting task. But they do it, and the end result is that Mac OS and most Linux distros include drivers for everything, right out of the box. This puts them a step ahead of Windows, when it comes to ease of use, at the cost of maybe a slight bloat. Still, my Ubuntu installation is much leaner than the 16-GB behemoth that is Windows 7.

So there you have it, the myth of better hardware support on Windows, finally debunked.

Now, if I could only get the braindead wireless support on the HP printer to work…

Flash Sucks

¿A world without Flash?

I’ve always been a hater of Macromedia/Adobe Flash. Now that the entire Apple-Adobe controversy has rekindled the debate of whether the web is a better or worse place because of Flash, I realized why it is I don’t like Flash.

Also, I realized most technically-inclined people dislike Flash too, because they recognize a lot of its shortcomings, unlike the layperson who only cares about the web being pretty, full of animations and beeps and stuff.

Now, before I begin, let me state this: I’m griping about Flash as a web content creation platform/tool. I couldn’t care less about its use as a mobile development tool. A lot of bloggers have expressed more informed opinions on this topic.

For me, a true flash hater, what Flash does is take control away from the end-user, the consumer of content, and give it to the content creator, the designer.

If you’re the designer this is all fine and dandy; you can control exactly what the user sees, you can tell your application to be exactly this many pixels wide, this many pixels high, and how to look and behave down to the pixel and the microsecond. This is why designers love Flash; it not only lets them work in a familiar environment and with familiar tools, but it also gives them complete control about how and what the user sees and can do.

By the way, don’t be fooled; a designer that claims to know web design but uses only Flash is not a web designer. Flash was created to allow designers (Adobe’s primary clientele) to be able to say (untruthfully) they can design web sites.

A flash-only website. Click it and weep.

The problem is, the web wasn’t meant to be this way. Fundamentally, the kind of content the web was created for, was meant to empower the user. This is why the web browser was designed from the very beginning to not impose those very parameters (width, height, fonts, and so on); the content should adjust to whatever the user’s agent can display. So web content reflows to adapt to your browser; it should degrade for those systems that for any reason lack a certain capability (think Lynx and visually-impaired users). It should also allow me, the user, to alter how it looks and is rendered. This is why I can disable cookies, javascript, replace or even remove altogether the CSS used to format my content, decide not to display images, and so on. Even the most complex non-flash web page consists of text and images; and with a bit of cleverness I can get both the text and the images and incorporate them in the rest of my workflow; paste them into a document, translate them, email them to someone else, the possibilities are limitless since web content is delivered to me as-is, as bytes I can not only look at, but also manipulate as I would any other kind of information on my computer.

This freedom is lost on a Flash-only (or mostly) website. What’s worse, instead of the content being, well, content, stuff I can get out of the browser and process and manipulate in other ways, it becomes merely an image, a photograph or a movie trapped in the clutches of the Flash plugin. I can’t copy the text, I can’t scroll except through the provisions the designer made for me, I can’t easily extract the audio or the images, and I’m basically limited, not by the constraints of my browser, but by those set forth by both Adobe through its display plugin, and the designer. And let’s face it, most designers are also clueless about user interfaces and ease-of-use, unlike the people who designed my web browser, which is rendered mostly useless on a Flash site.

It is this loss of freedom that makes Flash so dangerous, and why I think it would be a good thing for Flash to disappear eventually.

Flash adds nothing of true value to the Web; we could all live happily without all the animations, all the desktop-apps-masquerading-as-web-apps made in Flash (write a Web app from the ground up, it's not that hard), and all the stupid content that forces me to work its way instead of my way. Luckily, thanks to the advent of HTML5, we won't need Flash even for the one thing for which it has proven indispensable (web video). Because, let's face it, web video was Flash's killer application; everything else that could once be done only in Flash is now doable in AJAX, CSS and Javascript. And honestly, if Flash had been such a good technology for those things, we would have stayed with it and not bothered with anything else.

If anything, the existence of so many alternatives to Flash and whatever it can do, is evidence that the world at large truly does not like Flash.