Juju2 unit/service name autocompletion.

If juju1 and juju2 are installed on the same system, juju1’s bash autocompletion breaks because it expects services, which in juju2 are called applications.

Maybe juju2 ships correct bash completion of its own, but on the system I’m working on only the juju1 autocompletion was present, so I had to hack the autocomplete functions. I just added these at the end of .bashrc to override the ones in the juju1 package. Notice they work for both juju1 and juju2 by using dict.get() so they don’t die if a particular key isn’t found.
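
The helpers I ended up with look roughly like the sketch below (function names and the completion wiring are illustrative rather than the package’s real ones; the part that matters is the dict.get() fallback between juju2’s “applications” and juju1’s “services” keys in the status JSON):

_juju_list_apps_and_units() {
    # Ask juju for status as JSON and extract application/service and unit names.
    juju status --format json 2>/dev/null | python -c '
import json, sys
status = json.load(sys.stdin)
# juju2 calls them "applications", juju1 calls them "services"
apps = status.get("applications", status.get("services", {})) or {}
names = list(apps)
for app in apps.values():
    names.extend(app.get("units", {}) or {})
print(" ".join(sorted(names)))
'
}

_juju_complete_app_or_unit() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "$(_juju_list_apps_and_units)" -- "$cur") )
}

# Simplified registration; the real completion dispatches per subcommand.
complete -F _juju_complete_app_or_unit juju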


Take me to your leader – Using Juju leadership for cron tasks in a multiunit service

I’m working on adding some periodic maintenance tasks to a service deployed using Juju. It’s a standard 3-tier web application with a number of Django application server units for load balancing and distribution.

Clearly the maintenance tasks’ most natural place to run is in one of these units, since they have all of the application’s software installed and doing the maintenance is as simple as running a “management command” with the proper environment set up.

A nice property we get from Juju is that these application server units are just clones of each other, which makes scaling up and down very easy because all units are treated the same. However, the periodic maintenance work introduces an interesting problem, because we want only one of the units to run the maintenance tasks (there’s no need for them to run several times). The maintenance scripts could conceivably run on all units, even simultaneously (they do proper locking to avoid stepping on each other). That would perhaps be OK if we only had two service units, but what if, as is the case, we have many more? There is still a single database, and hitting it 5-10 times with what is essentially a redundant process sounded like an unacceptable tradeoff for the simplicity of the “just run them on each unit” approach.

We could also implement some sort of duplicate collapsing, perhaps by using something like rabbitmq and celery/celery beat to schedule periodic tasks. I refused to consider this since it seemed like swatting flies with a cannon, given that the first solution coming to mind is a one-line cron job. Why reinvent the wheel?

The feature that ended up solving the problem, thanks to the fine folks in Freenode’s #juju channel, is leadership, which debuted in recent versions of Juju. Essentially, each service has one unit designated as the “leader”. It can be targeted with specific commands, queried by other units (“ask this to my service’s leader”) and, more importantly, unambiguously identified: a unit can determine whether it is the leader, and Juju events are fired when leadership changes, so units can act accordingly. Note that leadership is fluid and can change, so the charm needs to account for those changes. For example, if the existing leader is destroyed or has a charm hook error, it will be “deposed” and a new leader is elected from among the surviving units. Luckily all the details of this are handled by Juju itself; charms/units need only hook on the leadership events and act accordingly.

So it’s then as easy as having the cron jobs run only on the leader unit, and not on the followers.

The simplistic way of using leadership to ensure only the leader unit performs an action was something like this in the crontab:
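
Something along these lines, where the schedule, unit name and maintenance command are placeholders:

# Illustrative crontab entry; the unit name and command are made up
0 3 * * * root [ "$(juju-run my-app/0 is-leader)" = "True" ] && /srv/my-app/scripts/run-maintenance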

This uses juju-run with the unit’s name (which is hardcoded in the crontab – a detail of how juju-run is used that I don’t love, but it works) to run the is-leader command in the unit’s context. It prints “True” if the executing unit is the leader, and “False” otherwise, so execution is conditioned on the current unit being the leader.

Discussing this with my knowledgeable colleagues, a problem was pointed out: juju-run is blocking and could potentially stall if other Juju tasks are being run. This is possibly not a big deal but also not ideal, because we know leadership information changes infrequently and we also have specific events that are fired when it does change.

So instead, they suggested updating the crontab file when leadership changes, and hardcoding leadership status in the file. This way units can decide whether to actually run the command based on locally-available information which removes the lock on Juju.

The solution looks like this when implemented using the Ansible integration in the charm. I just added two tasks: one registers a variable holding the is-leader output whenever either the config or leadership changes:
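
A sketch of that first task (the task name, playbook layout and tag names here are assumptions, not the charm’s actual code):

- name: Check if this unit is the leader
  command: is-leader
  register: is_leader
  tags:
    - config-changed
    - leader-elected
    - leader-settings-changed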

The second one fires on the same events and just uses the registered variable to write the crontabs appropriately. Note that Ansible’s “cron” plugin takes care of ensuring “crupdate” behavior for these crontab entries. Just be mindful if you change the “name” because Ansible uses that as the key to decide whether to update or create anew:
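
Again as a sketch rather than the charm’s real tasks: with a reasonably recent Ansible, the cron module can manage both the IS_LEADER environment variable and the job line in the same cron_file (names, schedule and command are placeholders):

- name: Set IS_LEADER in the maintenance crontab
  cron:
    name: IS_LEADER
    env: yes
    value: "{{ is_leader.stdout }}"
    user: ubuntu
    cron_file: roadmr-maintenance
  tags: [config-changed, leader-elected, leader-settings-changed]

- name: Add the maintenance cron job
  cron:
    name: "roadmr maintenance"
    minute: "0"
    hour: "3"
    user: ubuntu
    cron_file: roadmr-maintenance
    job: '[ "$IS_LEADER" = "True" ] && /srv/my-app/scripts/run-maintenance'
  tags: [config-changed, leader-elected, leader-settings-changed]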

A created crontab file (in /etc/cron.d/roadmr-maintenance) looks like this:
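
Roughly like this (the schedule and command are again placeholders; the important bits are the IS_LEADER variable and the test in front of the command):

IS_LEADER=True
0 3 * * * ubuntu [ "$IS_LEADER" = "True" ] && /srv/my-app/scripts/run-maintenance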

A few notes about this. The IS_LEADER variable looks redundant; we could have put the check directly in the comparison, or simply written the crontab file only on the leader unit and removed it from the others. But we specifically wanted the crontab to exist on all units and merely be conditional on leadership. IS_LEADER makes it super obvious, right there in the crontab, whether the command will run. While redundant, we felt it added clarity.

Save for the actual value of IS_LEADER, the crontab is present and identical in all units. This helps people who log directly into the unit to understand what may be going on in case of trouble. Traditionally people log into the first unit; but what if that happens to not be the leader? If we write the crontab only on the leader and remove from other units, it will not be obvious that there’s a task running somewhere.

Charm Ansible integration magically runs tasks by tags identifying the hook events they should fire on. So by just adding the three tags, these tasks will run, in the order they are declared, on the config-changed, leader-elected and leader-settings-changed events.

The two leader hooks are needed because leader-elected is only fired on the actual leader unit; all the others get leader-settings-changed instead.

Last but not least, don’t forget to also declare the new hooks in your hooks.py file, in the hooks declaration, which now looks like this (see the last two lines added):
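
A sketch of that declaration, assuming the charmhelpers Ansible integration (the playbook path and the rest of the hook list are illustrative); the point is the two leadership hooks at the end:

import charmhelpers.contrib.ansible

hooks = charmhelpers.contrib.ansible.AnsibleHooks(
    playbook_path='playbook.yaml',
    default_hooks=[
        'install',
        'config-changed',
        'upgrade-charm',
        'leader-elected',
        'leader-settings-changed',
    ])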

Finally, I’d be remiss not to mention an existing bug in leadership event firing. Because of that, until leadership event functionality is fixed and 100% reliable, I wouldn’t use this technique for tasks which absolutely, positively need to be run without fail or the world will end. Here, I’m just using them for maintenance and it’s not a big deal if runs are missed for a few days. That said, if you need a 100% guarantee that your tasks will run, you’ll definitely want to implement something more robust and failproof than a simple crontab.

Proxying Python file-like objects for fun and profit

As part of a project I’m working on, I wanted to be able to do some “side processing” while writing to a file-like object. The processing is basically checksumming on-the-fly. I’m essentially doing something like:
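
That is, roughly this shape, where source and destination are file-like objects and the chunk size is arbitrary:

while True:
    chunk = source.read(65536)
    if not chunk:
        break
    destination.write(chunk)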

What I’d like is to also get the data read from source and use hashlib’s update mechanism to obtain a checksum of the object. The easiest way to do this would be to use temporary storage (an actual file or a StringIO), but I’d prefer to avoid that since the files can be quite large. The second way would be to read the source twice; but since it may come from a network, it makes no sense to read it twice just to get the checksum. A third way would be to have destination be a file-like derivative that updates an internal hash with each block written to it, and then provides a way to retrieve the hash.

Instead of creating my own file-like where I’d mostly be “passing through” all the calls to the underlying destination object (which incidentally also writes to a network resource), I decided to use padme which already should do most of what I need. I just needed to unproxy a couple of methods, add a new method to retrieve the checksum at the end, and presto.

A first implementation looks like this:
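
Something in this spirit (a reconstruction of the naive first attempt, not the project’s actual code): subclass padme.proxy, keep a hashlib object on the proxy, and try to pass writes through via super():

import hashlib

import padme


class sha256file(padme.proxy):

    @padme.unproxied
    def __init__(self, *args, **kwargs):
        self.hash = hashlib.sha256()
        super(sha256file, self).__init__(*args, **kwargs)

    @padme.unproxied
    def write(self, data):
        self.hash.update(data)
        super(sha256file, self).write(data)

    @padme.unproxied
    def getsha256(self):
        return self.hash.hexdigest()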

This, however, doesn’t work, for reasons I was unable to fathom on my own: writing through the proxy blows up with an AttributeError complaining that the super object has no write attribute.

This is clearly because super(sha256file, self) refers to the *class* and I need the *instance* which is the one with the write method. So Zygmunt helped me get a working version ready:
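
The crux of the fix, per the explanation below, is having the write method call the proxied object’s real method through type(self).__proxiee__ instead of going through super() (a fragment only; the state-handling part, the small stateful_proxy base class, is described below):

    @padme.unproxied
    def write(self, data):
        self._hash.update(data)
        return type(self).__proxiee__.write(data)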

Here’s the explanation of what was wrong:

– First of all, the exception tells you that the super-object (which is a relative of base_proxy) has no write method. This is correct. A proxy is not a subclass of the proxied object’s class (some classes cannot be subclassed). The solution is to call the real write method, which can be accomplished with type(self).__proxiee__.write().

– Second of all, we need to be able to hold state, namely the hash attribute (I’ve renamed it to _hash, but that’s irrelevant to the problem at hand). Proxy objects can store state, it’s just not terribly easy to do. The proxied object (here a file) may or may not be able to store state (here it cannot). The solution is to make it possible to access some of the state via standard means. The new (small) stateful_proxy class implements __setattr__ and __delattr__ in the same way __getattribute__ was always implemented: those methods look at the __unproxied__ set to know if access should be routed to the original or to the proxy.
– The last problem is that __unproxied__ is only collected by the proxy_meta meta-class. It’s extremely hard to change that meta-class (because padme.proxy is not the real class you ever use; it’s all a big fake to make proxy() both a function-like and class-like object).

The really cool thing about all this is not so much that my code is now working, but that those ideas and features will make it into an upcoming version of Padme 🙂 So down the line the code should become a bit simpler.

Updating lxc image/container caches

One of lxc’s nice time-saving features is that, after initial container creation, it will cache the files it downloaded to do so, and when you create a new container using the same template/version/architecture, it will leverage the existing files and create the container with minimal downloads and really quickly.

A downside of this is that the cache can become stale; this is apparent when you want to install a package in a container and apt-get gives 404 errors indicating that the version of the package the container knows about is no longer available in the archive (most likely superseded by a newer one).

This is easily fixed by always doing apt-get update in the container prior to any package installs/upgrades. However, it’s cumbersome, and if you’re creating dozens of new containers every day, the bandwidth and time spent re-downloading can quickly add up.

To update the “base image” or cache, which resides in /var/cache/lxc for each version, you can do two things.

Most templates also support --flush-cache, so if you’re calling lxc-create directly, just add an extra --flush-cache in the template args (after the --) and the cache will be flushed before making the container. Something like:
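
For example (the container name and release are placeholders):

sudo lxc-create -t ubuntu -n mycontainer -- --release trusty --flush-cache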

This will obliterate the existing cache and re-download everything before creating the container.

If you want to update an existing cache do something like:
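
Something like the following, assuming the stock ubuntu template’s cache layout (adjust the release and architecture to match yours):

sudo chroot /var/cache/lxc/trusty/rootfs-amd64 sh -c "apt-get update && apt-get -y dist-upgrade"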

This will update the cache, and all subsequently-created containers will know about the latest package versions.


WiFi interfaces on Ubuntu Server

Sometimes you may want to configure a wireless interface on a system running Ubuntu Server. The most common use case (for me, at least) is running some server tests that require two network interfaces, on a laptop (it’s what I have available to play with) with an ethernet interface and a wireless interface. As long as Ubuntu sees the wireless interface, it’s quite easy to set things up so the wireless comes up at boot time.

You will probably need to set up the server to forward and masquerade the internal network (usually, the ethernet segment is the internal one, while the wireless counts as the “outside” interface). There are plenty of tutorials to do this over the internet, so I won’t extend this post by detailing that.

Of course, the wireless will grab a dynamic IP address, so use caution with that as the address may change (or, assign a static one from your router’s unused range). Anyway. Put this in /etc/network/interfaces:
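
A stanza along these lines (the interface name, SSID and passphrase are placeholders; this assumes a WPA network and that the wpasupplicant package is installed):

auto wlan0
iface wlan0 inet dhcp
    wpa-ssid my-network-name
    wpa-psk my-secret-passphrase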

Then you can do ifup wlan0 to bring the interface up. It should also come up automagically at boot time.

More fun with avconv

This was used to resync a file whose audio was consistently 1.75 seconds behind the video track. The resulting file also contains the first 2 subtitle tracks from the original file.
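
A command along these lines does the job (filenames are placeholders, and the sign of the offset depends on which way your desync goes): the same file is fed in twice, one copy offset by 1.75 seconds; the audio is taken from the shifted copy while the video and the first two subtitle tracks come from the unshifted one, everything stream-copied rather than re-encoded:

avconv -i input.mkv -itsoffset -1.75 -i input.mkv -map 0:v -map 1:a -map 0:s:0 -map 0:s:1 -c copy output.mkv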

Sources were here.

Speeding up local debian builds with sbuild (eatmydata, apt-cacher-ng and config laziness)

As part of the team that maintains several testing tools for Ubuntu, including checkbox, I sometimes find myself needing to build .deb packages from our source tree.

“building stuff is hard…”

A simple way of achieving this is of course to run dpkg-buildpackage or even bzr-buildpackage. Assuming all build-deps are correctly installed in the host system, this will result in a nicely built set of .debs.

This approach has a few caveats, in that it’s different from the build process actually employed to create the packages that ultimately get uploaded to Ubuntu (or even the ones available in Launchpad PPAs).

The first of the two main differences is that Launchpad builds the packages in a “clean” environment, installing build-deps from scratch, whereas dpkg-buildpackage relies on what’s installed in the system. So if you miss specifying a build-dep, your local build may work because you happen to have it installed, but the PPA build will fail because it won’t be present.

The second big difference is that with the local approach, you’re “limited” to building packages for the “host” system. Sure, you can specify a different target release in your debian/changelog file, but some aspect of your build may be tied to your system’s tools, versions and layout, and if for some reason they don’t match the actual target at installation time, things will fail in interesting ways.

Clearly, one way to test what the Launchpad build process will spit out is to build a source package and dput that to be built directly on a PPA. The problem here is that the feedback loop becomes excruciatingly slow; PPAs are a shared resource and build times can go from minutes to many hours.

Based on all this, it makes sense to try to use a local build environment that more closely replicates what PPAs do to build your packages.

Fortunately, the PPA builders use free software, so it’s relatively easy to do local builds in a similar environment, completing quickly due to use of local resources, and only upload to Launchpad once you’re pretty sure your build will succeed.

The software in question is sbuild, and I already wrote a post detailing how to install sbuild and set up a build environment for any Ubuntu release you need.

This setup works fine for the occasional package build when you know the packaging is mostly correct. For a quick-to-build package such as checkbox, setting up the build environment with all the needed packages and build-deps takes about 10 minutes (depending mostly on download speed for all the packages). Of course, on a more complex package, compilation time may start to be a factor.

Anyway, the 10-minute time can be too slow if you’re trying to fix a tricky problem and need a fast feedback loop. Plus the process produces a lot of transient files and downloads a set of packages many times, so there’s plenty of room for improvement here.

Speeding up local package installation and build

Eatmydata: it’s so fast! (but not too safe)

A large part of the time spent doing the “local” part of the process is writing files to disk. One way to speed this up is to use a ramdisk to store the build. I’m too lazy and have too little RAM to use this approach, so the alternative was setting up eatmydata inside the chroot. Since these are mostly temporary files or throwaway packages, it’s OK to lose the safety of constant syncs in exchange for a huge boost in speed.

The setup for eatmydata inside the chroot is described here. This looks a bit hard to automate, but luckily we don’t have to, as recent versions of mk-sbuild support an --eatmydata parameter; if given, it will install eatmydata inside the chroot and make the chroot config file change needed to enable eatmydata.

Adding PPA

You can add a custom PPA to an image. Once the chroot image is built, enter the “golden master”:
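
With an mk-sbuild-created schroot that means entering the source chroot, something like this (the chroot name depends on your release and architecture):

sudo schroot -c source:trusty-amd64 -u root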

You can add a deb line (get it from launchpad) to your sources:
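
For example (the PPA owner and name are placeholders; copy the real deb line from the PPA’s Launchpad page):

echo "deb http://ppa.launchpad.net/SOMEOWNER/SOMEPPA/ubuntu trusty main" >> /etc/apt/sources.list.d/extra-ppas.list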

Then you need to get the GPG key for the PPA and add it manually with the very basic tools provided in the chroot (sorry, no apt-add-repository):
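
Something like this, where the key ID is a placeholder for the signing key fingerprint shown on the PPA’s Launchpad page:

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 0123456789ABCDEF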

Then exit the golden image. After this, your builds from this chroot will be able to fetch packages from the PPA.

Again, that’s a bit of work to do for each chroot. Instead, what I did was create a file in /etc/schroot/setup.d to do this automatically. You can of course replace the PPAs you need in the echo lines at the end. Name the file something like 81add-ppas:
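
A sketch of what such a script can look like (schroot runs setup.d scripts with the action as $1 and the chroot’s path in $CHROOT_PATH; the PPA lines at the end are placeholders):

#!/bin/sh
set -e

. "$SETUP_DATA_DIR/common-data"
. "$SETUP_DATA_DIR/common-functions"
. "$SETUP_DATA_DIR/common-config"

if [ "$1" = "setup-start" ] || [ "$1" = "setup-recover" ]; then
    # Detect the chroot's release so one script works for any Ubuntu version
    RELEASE=$(. "$CHROOT_PATH/etc/lsb-release" && echo "$DISTRIB_CODENAME")
    # Lazy shortcut: allow unauthenticated packages instead of importing GPG keys
    echo 'APT::Get::AllowUnauthenticated "true";' > "$CHROOT_PATH/etc/apt/apt.conf.d/80unauthenticated"
    echo "deb http://ppa.launchpad.net/SOMEOWNER/SOMEPPA/ubuntu $RELEASE main" >> "$CHROOT_PATH/etc/apt/sources.list.d/extra-ppas.list"
fi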

Notice that again, I was very lazy and instead of downloading the gpg keys as shown above (as for some reason trying to run gpg from the setup script didn’t work), I just configured apt to allow unauthenticated packages. Since this sbuild is mainly for testing purposes it’s not a big deal to skip this verification step. Also, there’s some logic to automatically detect the chroot release, so the same config file works equally well for any Ubuntu release.

Apt-cacher-ng

As the name suggests, this nifty utility will cache packages so the next time you need them they’ll be fetched from local storage rather than from the network. A bit of config is needed to have sbuild download packages from here.

Hello, I got these packages cached for you…

First, install apt-cacher-ng on the host system. You can verify it’s listening on port 3142 by any means you like.

Then, to set it up automatically in chroots, add this to the host system’s  /etc/schroot/setup.d/80apt-cacher-ng (rather, create that file; it doesn’t exist by default):
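
A sketch of that script; it assumes apt-cacher-ng is listening on its default port 3142 on the host, which the chroot can reach as localhost since schroot sessions share the host’s network:

#!/bin/sh
set -e

. "$SETUP_DATA_DIR/common-data"
. "$SETUP_DATA_DIR/common-functions"
. "$SETUP_DATA_DIR/common-config"

if [ "$1" = "setup-start" ] || [ "$1" = "setup-recover" ]; then
    echo 'Acquire::http::Proxy "http://localhost:3142";' > "$CHROOT_PATH/etc/apt/apt.conf.d/99apt-cacher-ng"
fi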

With these two setup.d scripts and the --eatmydata magic, it’s easy to create sbuild environments which will be much faster when building packages.

As a comparison, building msmtp (chosen because it mainly exercises the speedup components, not needing any packages from a PPA) takes about 40 seconds with these suggested tweaks, whereas on a non-tweaked chroot it takes about 1 minute 38 seconds.

That’s roughly a 2.5x speedup, but the comparison is a bit misleading because I deliberately chose a small, quick-to-compile package. Still, you can at least reduce network and disk access very easily now. Note, also, that my test system has a fast SSD; the speedup on a traditional rotary magnetic hard disk is likely to be much higher.

Dell XPS 13 Developer Edition – Ultrabook with Ubuntu preinstalled

Introduction

Dell XPS 13 Developer Edition

In late 2010, as I was starting a new job, I bought a new laptop, a Samsung QX410, based largely on this review.

From the beginning I was quite happy with that laptop: the screen is decent, the keyboard spacious and comfortable, battery life is OK, and in retrospect it’s a solidly-built laptop that stood up to 2.5 years of heavy daily use.

However, eventually the time came for it to die, so in late 2013 I had to start looking for replacements. Since I still needed to work, I borrowed a Lenovo Thinkpad T520 to use temporarily. Spec-wise it was similar to the Samsung, save for the larger 15″ screen and the thing that really spoiled me, the excellent Lenovo laptop keyboard. Alas, this appears to have been lost in the latest generation; I had a chance to try a Thinkpad X240 (one of the candidates for replacing the Samsung) and found the island-style keyboard odd and uncomfortable.

At the same time I was able to use a Dell XPS 13 for a few days, and the keyboard on that little machine felt extremely comfortable and close to the Samsung’s (which I’m already used to). So the XPS 13 immediately moved to the top of the list. In early 2014 I finally pulled the trigger and got the Developer Edition XPS 13. The Developer Edition is the result of the “Sputnik Project” and has been available for all previous XPS 13 generations. Here are my impressions of it.

The XPS 13 is somewhat of a MacBook Air lookalike with the same slim wedgy shape. While the footprint is a bit smaller than the 13-inch Air, they look very similar from a top view. The XPS’s top lid and frame are machined aluminum, though the similarity ends there, as the keyboard and deck are coated with black soft-touch rubber, and the bottom is black carbon fiber. The XPS is different enough to qualify as “inspired by”, rather than “knockoff of”.

Incidentally, the MacBook air was also on my list of candidate laptops. The XPS 13 beat it for a couple of reasons. The screen is higher-resolution and touch-enabled, it’s Ubuntu-certified and available with Ubuntu preinstalled, and it has a PC keyboard. Apple keyboards are great but I just can’t get used to the ctrl-alt-cmd layout and always keep hitting the wrong keys. Also, for a comparable configuration, cost was similar. So in the end the XPS 13 won.

Chassis

The XPS 13 is really thin, as befits a 13-inch ultrabook, and has the typical front taper to a 6-mm thinness.

The thing weighs about 1.4 kg. Since it’s so tiny it actually feels dense for its size, but it’s by no means heavy. Carry it in a backpack and you’ll barely feel it’s there. Quite a difference from the 2.25-kg Samsung and the 2.5-kg Thinkpad T520, which looks and feels like a behemoth next to the XPS (a nostalgia shot of those laptops next to the XPS is at the end of this post).

To give an idea of the XPS’s dimensions I made a quick photo shoot next to some comparable pre-ultrabook ultraportables.

Top row shows a Dell Vostro V13 and an Acer Aspire One netbook (11″ screen). Bottom row includes the XPS 13 and the legendary Thinkpad X201 with 12″ screen. Despite having a 13″ screen, the XPS is smaller than the Thinkpad X201 and the V13. Footprint-wise, the screen is the limiting factor, so for a 13″-class system it’s reasonable to not see a huge difference.

The side view is where the XPS 13’s thinness becomes evident. The XPS 13 is on top, above the X201 and with the V13 at the very bottom.


The V13 was a tremendously thin machine for its generation, which shows in the fact that it’s almost as thin as the XPS 13’s rear. Of course, the XPS 13’s front taper goes on to be about as thin as the V13’s screen. The V13 made performance compromises to fit in such a svelte chassis, but in all honesty the XPS 13 is also not a speed demon, and it too has fewer ports and less expandability than other, bulkier options.

The X201 isn’t even in the same league here; comparing the thickest parts (the rear end), it’s almost 3 times thicker than the XPS 13. The X201 is still quite light, about 200 g heavier than the XPS 13; its chassis is incredibly sturdy (the XPS 13 doesn’t feel flimsy at all, but I’d hesitate to put it through the kind of abuse a Thinkpad is known to just shrug off); and it has a lot more expandability, in the form of more USB ports, an ExpressCard slot, a media card reader, ethernet and modem ports, and, incredibly, a full VGA connector.

By comparison and list of ports, the XPS 13 only has a combo audio jack, 2 USB 3.0 ports, and a mini-DP port for external video. This is the price one pays for ultraportability…

In the following paragraphs I posted pictures of the XPS 13, open, next to the V13 and the X201. The keyboard looks minimalistic in comparison but it’s quite comfortable to use. The V13’s is not as nice, while the X201 has that fantastic Lenovo keyboard.

Keyboard and touchpad

The XPS 13’s keyboard has no extraneous dedicated keys, other than the power and mute buttons; everything else is handled by the standard keys, with F-keys doubling as special-function keys to switch monitors, control wireless, show battery information, control volume and brightness as well as keyboard backlight, and perform media control functions.

All standard keys are standard-sized and in their proper positions. Exceptions are the half-size F-keys, including insert and delete, and the cursor keys. One thing I don’t like is lack of dedicated page-up and page-down keys; these are handled (along with home and end) by the cursor keys in combination with Fn. I use pg-up and pg-down extensively to switch tabs in Firefox and this is really a sore point for me. But that’s about the only tradeoff this excellent keyboard makes.

The touchpad seems to be a Synaptics model, one of those “buttonless” trackpads, although it does have distinct clickable sections at the bottom. Unlike the Samsung’s touchpad, which was very troublesome and only worked in tap-to-click mode, this touchpad’s clickable buttons also work perfectly, so you’re free to click or two-finger-tap anywhere, or use the “discrete” buttons if you like, which makes things like dragging much easier.

A backlit keyboard is nice to have, but when the light is on, an annoying high-pitched whine comes from underneath the keyboard. This problem has been reported to Dell by many users and is still awaiting a fix or response. As a result, I usually keep the backlight off.

Screen

Both the laptops I’ve been using lately had non-IPS panels with industry-standard resolutions for the time. 1366×768 for the Samsung is pretty typical. One gripe I had is that the Samsung’s screen was “protected” by a glossy sheet of cheap plastic that with time became very scratched and made the screen harder to see. This can be seen in the picture at the end of this post.

The XPS 13’s screen is stunning by comparison, if only because it’s a much newer panel. Viewing angles are amazing, the 1920×1080 resolution is razor-sharp and crisp (and even a bit too high for the screen size), and the backlight is strong enough to overcome the gorilla glass cover’s gloss. At least I expect it won’t get scratched easily. This is a Synaptics touchscreen which has worked very well with Ubuntu, although I haven’t used it all that much because it feels very alien to my workflow.

Performance and battery life

I’m ill-equipped to provide an assessment here, as the jump from the pre-Sandy Bridge Samsung to the Haswell XPS 13 (plus a bump from an i5 to an i7 CPU) is so enormous that this machine just feels like it flies. One thing worth mentioning is that, while the Samsung had a standard mobile CPU (an i5 480M with 3 MB cache and a 35 W TDP), the XPS 13 has the aforementioned newer i7 with 4 MB cache, but in a low-TDP (only 15 W), ULV variant. So by its ultraportable nature it’s on the lower side of the spec spectrum; however, the generational advantage plus i7-ness really do make a difference, and the system is snappy at all times.

Perhaps the biggest leap forward is the LiteOn SSD. While not hooked up directly to PCIe like a MacBook’s, this mSATA drive is absurdly fast in comparison to what I’d been using, resulting in a 3-second boot time (even with disk encryption), way faster than the 20-30 seconds I was getting on the Samsung.

Under moderate load (a few terminal windows open where I’m typing stuff, plus a browser with some tabs, one of which is playing a Youtube video), the XPS 13 reports a battery lifetime of about 6 hours. For comparison, the Samsung lasted about 3 hours with a comparable workload on a 66 Wh battery.

Given a mostly-idle workload (browser with static content plus a few terminal windows), the XPS 13 reports about 8 hours of battery life.

Software

Perhaps the nicest thing about the XPS 13 is that it’s certified for Ubuntu, and the Developer Edition I got comes preinstalled with Ubuntu 12.04, augmented by some OEM-specific tweaks to ensure all the hardware works correctly. Indeed everything works out of the box, and the first-boot experience is very smooth and polished, definitely less cumbersome than booting a Windows machine for the first time.

In case it’s needed, a utility to create a recovery disk is provided. I created a USB stick which can be used to quickly restore the machine to factory status. I then proceeded to erase the preinstalled Ubuntu version and install the latest development release (which will be released as Ubuntu 14.04). Don’t get me wrong, the preinstall is perfectly usable for 99% of people as it has a typical Ubuntu installation with all the usual tools, receives security and browser upgrades until 2017, and even includes a plethora of cloud software development tools such as Juju and Virtualbox (this is why it’s called a “Developer edition” and is focused on cloud development). However, because of my work, I really wanted to have the newest possible Ubuntu version. An ulterior motive was to verify whether the OEM-specific tweaks in the preinstalled version were “upstreamed” and made available in subsequent Ubuntu versions. This is a policy for the Ubuntu certification program; whenever possible, the work done when enabling a new machine is made available in the following stock Ubuntu release.

With a couple of exceptions, everything continued to work just as it did with the preinstalled version, and I was able to recreate a working environment complete with a transfer of my backup in only a few minutes. The fast SSD and USB3.0 transfers from my backup drive are partly to thank for this.

As exceptions, the touchpad didn’t get recognized and required blacklisting an i2c-hid module; and I lost the media control keys (which I seldom use, so I haven’t bothered to re-enable them).
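
For reference, the blacklisting itself is just a matter of a modprobe.d entry plus an initramfs refresh (the module and file names here are from memory, so double-check them on your system):

echo "blacklist i2c_hid" | sudo tee /etc/modprobe.d/blacklist-i2c-hid.conf
sudo update-initramfs -u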

This makes it a great alternative for regions where the Developer Edition is unavailable; just procure the Windows version of this laptop, install the latest Ubuntu, and you’ll end up with a working system.

Left to right: Samsung QX410, Dell XPS 13 (driving the external monitor on the left), Lenovo Thinkpad T520

Here’s a  quick overview of the things I didn’t like about the XPS 13. Of course, none of them were deal-breakers for me, but I wanted to sum them up to highlight the fact that yes, it’s not a perfect machine.

  • Whining electrical noise (a defect, so once Dell confirms this is fixable I’ll apply for warranty service).
  • Lack of dedicated pgup/pgdn keys.
  • Screen a bit too glossy.
  • A dearth of ports.
  • Screen resolution too high for my poor, tired eyes (I’m half-kidding with this one).

To sum up, despite the above, I very highly recommend the XPS 13 Developer Edition. In addition to the sleek and solid hardware, you get Ubuntu preinstalled which will cover 99% of people’s needs, a system which benefits from the upstreamed enablement work resulting in an excellent platform to run the latest Ubuntu, *and* you send the message that Ubuntu preinstalls are desired by users, all while freeing yourself from the Microsoft tax which has plagued Linux laptop users for so long.

Why I’m staying on Unity

A very interesting conversation erupted today, beginning when a coworker sent a lengthy email stating his reasons for altogether leaving Ubuntu 11.04’s new Unity desktop interface and instead resorting to the good, old-fashioned Gnome 2 “Classic” session.

In it he makes some very valid points about functionality that’s different to what he was used to. This understandably affects his workflow, so instead of wrestling with a new interface, he chose to go with the old one, hopefully until Unity matures enough for him to be able to customize it to his liking.


What’s interesting was the amount of responses it got, where everyone spoke about their “pet peeves” with Unity. The vast majority were changes in how Unity handles things, that interfered with people’s workflows. It’s understandable that even a small change in how your user interface behaves, when you’ve become adept at working with it, disrupts things enough (and annoyingly enough) that you either go back to the old user interface, or just start fiddling with the new one until you find a way to get things to an acceptable state.

Which is what struck me as curious about this thread: there were basically two camps, those who flat out abandoned Unity for the time being, and those who actually went looking into how Unity behaves and integrates with the environment, and came up with ways to make Unity more comfortable to those used to the “old ways” of Gnome 2.x and its desktop interface.

Without demerit to the original poster, whose points were quite valid, a lot of responses suggested ways to solve about 80% of his complaints about Unity. However, the fact that it took a team of experts to solve the problems that a user (and another expert, at that) was experiencing is testament to the fact that Unity could still be made more intuitive, easier and more customizable.

I finally upgraded to Ubuntu 11.04 and Unity this past weekend. Like many, I experienced some usability issues, where the desktop wasn’t behaving the way I was used to. However, my use of the system means that I basically want the UI to stay out of my way. So the main change I had to make was to get the Unity dock to auto-hide, so that it only appears when I ask it to. The rest of the time it’s hidden away. Everything else, well, it’s admittedly different than what I’m used to, but that’s change for you. Was Unity making a change for change’s sake? Maybe so, but I think it’s change in the right direction. Even if it somewhat alienates experienced users (for whom, however, workarounds exist that handle nearly all their concerns), I think the true success of Unity is in how it works for new users. And here are two examples.

Another coworker posted his experience with showing Ubuntu and Unity to a newbie, fresh-from-Windows user. The user’s comments were along the lines of “this looks nice”, “It’s easy to use” and “I’m keeping it”.

Also, even though some have complained about the app lens being hard to use (a complaint I’ve now seen twice), I’ve seen users realize “but hey, if it’s really that messy, you can use the search field to find what you need, right?”. So yes, end users are realizing this, and it’s just a matter of polishing how things work. If anything, I think it’s great to move users away from the “the computer has only two buttons” mindset and get them using the keyboard a little more.

So yes indeed, I’m staying on Unity, and I’m looking forward to seeing it mature into a better desktop interface. As Mark Shuttleworth said, it’s a foundation on which the next generations of the Ubuntu user experience will be built. I’ll be thrilled to be along for the ride.

Finally, for a great write-up on why your desktop changed, and on why giving it a whirl and helping improve it (even just commenting on the stuff you find hard, unintuitive or just plain wrong) is better than just swearing off these newfangled changes (without which, face it, you’d still be using fvwm and MIT Athena widgets), please drop by Federico Mena-Quintero’s activity log and read his wonderful and short article “Moving into your new Gnome 3 house”.

Internet access in Canada – Stop the Meter!

I have no idea what the CRTC is. But they recently ruled on something that means that my internet provider, which I chose based on the fact that it had no download caps (unlike other, greedier providers like Bell, Rogers or Videotron), will now have to institute said caps.

In practical terms this is how it looks. If I had my link downloading stuff at 100% capacity, I could potentially download 1500 GB of data in a month, and I’d pay 45$ a month for this. Now, however, they limit me to 30 GB a month for the same price. If I go over, they charge me 1$ per extra GB, up to a limit of 60$ a month, at which point I’ll have downloaded 90 GB; past that they stop charging extra and I can keep downloading. However, if I hit 300 GB, they cut me off for the rest of the month.

This is basically them not honoring the contractual obligation which says I get X amount of service for Y amount of money. This is extremely unfair for me, and although I understand the position ISPs are in, at the mercy of big telcos, I certainly wish they’d put up a bit more resistance to this.

So indeed, the problem for the end user is that it’s now more expensive and inconvenient to use bandwidth. However much the telcos whine about “power users” saturating the pipes, the fact is that these power users pay for the bandwidth they use, at the rates set by the providers; for the providers to now charge more for essentially the same service is a bit ridiculous and speaks of greed and money-hunger. It also stifles innovation, the kind that would move big media out of the picture, since I’m effectively penalized for doing things like watching TV online, which is very convenient for me but uses quite a bit of bandwidth (which, on the other hand, I’d already paid for).

I’d be inclined to go with this counterproposal to my ISP, and ultimately to the big telcos.

Previously I could download 1500 GB in a month at 45$, meaning each GB would cost me 0.028$ (compare this to the abusive 1$ a GB for overage, which is a 3500% increase). So what I propose is, if you want to limit me to 30 GB, then I’ll pay only 0.84$ a month, which by their previous rates is a fair amount. Hell, limit me to 300GB, for which I’ll pay only 8.40$ a month.

Optionally, OK, charge more if I go over 30 GB. But conversely, and if the power user / common user argument holds any value, prorate for users who go UNDER the 30 GB; so if I download only 10 GB in a month, I only pay 15$ for internet access.

I bet no ISP would like going with either of these proposals. Guess what: We users don’t like your proposal either. So please go to www.stopthemeter.ca and raise your voice against this idiotic measure that puts Canadians at a huge disadvantage technology- and connectivity-wise.

The myth of better device support on Windows

It’s long been argued that peripheral support in Linux is far inferior to that under Windows, and that this has been a factor for Windows’ dominance in the desktop. More and more, the myth that Windows has any kind of technical superiority leaves place to the fact that marketing, and being bundled with nearly every PC sold worldwide, are Windows’ only keys to its widespread adoption. And here’s a story to prove that point.

I bought a printer (an HP Photosmart C4780), one of those cheap $50 numbers that eat through ink like crazy. So I come home, wondering if I’ll have to install the 500 MB of crap included on the bundled CD to get the printer to work with my Mac at home.

As is usually the case with the Mac, I just plugged it in and it worked, both the printer and the scanner, without a hitch or problem.

I then proceeded to do the same on a freshly installed Ubuntu 10.10 laptop. Same story, the printer just worked, and Ubuntu even recognized it when being plugged in, no need to install drivers or anything.

Now, on Windows the printer wouldn’t have worked at all without installing a boatload of crap; HP is notoriously bloaty when it comes to its bundled software.

The usual wisdom is that hardware manufacturers care more about Windows and ship all their hardware with drivers and stuff to make it work. It would seem, then, that the burden is on Apple and the Linux distributions to provide drivers and support for most hardware, which sounds like a daunting task. But they do it, and the end result is that Mac OS and most Linux distros include drivers for everything, right out of the box. This puts them a step ahead of Windows when it comes to ease of use, at the cost of maybe a little bloat. Still, my Ubuntu installation is much leaner than the 16-GB behemoth that is Windows 7.

So there you have it, the myth of better hardware support on Windows, finally debunked.

Now, if I could only get the braindead wireless support on the HP printer to work…

Flash Sucks

A world without Flash?

I’ve always been a hater of Macromedia/Adobe Flash. Now that the entire Apple-Adobe controversy has rekindled the debate of whether the web is a better or worse place because of Flash, I realized why it is I don’t like Flash.

Also, I realized most technically-inclined people dislike Flash too, because they recognize a lot of its shortcomings, unlike the layperson who only cares about the web being pretty, full of animations and beeps and stuff.

Now, before I begin, let me state this: I’m griping about Flash as a web content creation platform/tool. I couldn’t care less about its use as a mobile development tool. A lot of bloggers have expressed more informed opinions on this topic.

For me, a true flash hater, what Flash does is take control away from the end-user, the consumer of content, and give it to the content creator, the designer.

If you’re the designer this is all fine and dandy; you can control exactly what the user sees, you can tell your application to be exactly this many pixels wide, this many pixels high, and how to look and behave down to the pixel and the microsecond. This is why designers love Flash; it not only lets them work in a familiar environment and with familiar tools, but it also gives them complete control about how and what the user sees and can do.

By the way, don’t be fooled; a designer that claims to know web design but uses only Flash is not a web designer. Flash was created to allow designers (Adobe’s primary clientele) to be able to say (untruthfully) they can design web sites.

A flash-only website. Click it and weep.

The problem is, the web wasn’t meant to be this way. Fundamentally, the kind of content the web was created for, was meant to empower the user. This is why the web browser was designed from the very beginning to not impose those very parameters (width, height, fonts, and so on); the content should adjust to whatever the user’s agent can display. So web content reflows to adapt to your browser; it should degrade for those systems that for any reason lack a certain capability (think Lynx and visually-impaired users). It should also allow me, the user, to alter how it looks and is rendered. This is why I can disable cookies, javascript, replace or even remove altogether the CSS used to format my content, decide not to display images, and so on. Even the most complex non-flash web page consists of text and images; and with a bit of cleverness I can get both the text and the images and incorporate them in the rest of my workflow; paste them into a document, translate them, email them to someone else, the possibilities are limitless since web content is delivered to me as-is, as bytes I can not only look at, but also manipulate as I would any other kind of information on my computer.

This freedom is lost on a Flash-only (or mostly) website. What’s worse, instead of the content being, well, content, stuff I can get out of the browser and process and manipulate in other ways, it becomes merely an image, a photograph or a movie trapped in the clutches of the Flash plugin. I can’t copy the text, I can’t scroll except through the provisions the designer made for me, I can’t easily extract the audio or the images, and I’m basically limited, not by the constraints of my browser, but by those set forth by both Adobe through its display plugin, and the designer. And let’s face it, most designers are also clueless about user interfaces and ease-of-use, unlike the people who designed my web browser, which is rendered mostly useless on a Flash site.

It is this loss of freedom that makes Flash so dangerous, and why I think it would be a good thing for Flash to disappear eventually.

Flash adds nothing of true value to the Web; we could all live happily without all the animations, all the desktop-apps-masquerading-as-web-apps made in Flash (write a Web app from the ground up, it’s not that hard), and all the stupid content that forces me to work its way instead of my way. Luckily, thanks to the advent of HTML5, the one thing for which Flash has proven indispensable (web video) won’t need it for much longer. Because, let’s face it, web video was Flash’s killer application; everything else that could once be done only in Flash is now doable with AJAX, CSS and Javascript. And honestly, if Flash had been such a good technology for those things, we would have stayed with it and not bothered with anything else.

If anything, the existence of so many alternatives to Flash and whatever it can do, is evidence that the world at large truly does not like Flash.

Open letter to Amazon.com: Please make my Kindle not suck

Update: It appears Amazon is indeed listening; I was able to preorder Robert J. Sawyer’s latest for Kindle delivery, and most of the titles I talk about in this post are already available in my region. Thanks Amazon!

Like (according to Amazon.com) millions of people, I own a Kindle e-book reader. However, I’m a bit irked by the fact that Amazon is treating Kindle users as second-class citizens. As early adopters who paid a hefty sum for Amazon’s flagship product, I think we deserve better.

I’ve been a fan of e-ink technology since I first learned about the early, clumsy prototypes. When the original Kindle came out, I nearly jumped at the chance to get one. However I decided that the hassle of having a Kindle in a non-supported country (Mexico), meaning I’d have to jump through hoops to get content into the kindle, was not worth being an early adopter.

So patiently I waited, until, in late 2009, Amazon finally started selling the Kindle, complete with wireless content delivery, in Mexico and a host of other countries. “Great”, I thought. “I get to have my nice gadget, save on shipping costs and delivery time, and I still get to read a lot”.

The story has been a bit different. And it has more to do with politics and commercial interests than with technology. Let’s get this out of the way right now: I have only ONE complaint about the tech side of the Kindle, and it doesn’t even have anything to do with the product itself. More about that later.

So I got my shiny new kindle and went online to get some books for it. I naturally searched for my favorite Sci-fi author, Canadian writer Robert J. Sawyer.

To my dismay, there’s very little of his available as Kindle content. None of the books I was interested in were available: not Calculating God, the first RJS book I read; not Factoring Humanity, my all-time RJS favorite; nor the Quintaglio Ascension trilogy, one of the very few RJS titles I haven’t read. They’re simply not available for the Kindle.

Titles are being “kindlefied” all the time, but the selection is still quite shallow.

Sometimes I do find the title I’m looking for, only to be greeted by the message “not available in your region”. Amazon, if you CAN send physical books to my region, why can’t you deliver them to my Kindle? I know you’re going to say it’s not the same, but to me, that doesn’t cut it.

A few days ago I received a notification for Dan Simmons’ latest book. Black Hills was to come out in a few days, and I was offered a nice pre-order discount. However, it didn’t apply to the Kindle edition. So you mean to tell me that, even though I’d click on “buy now” this minute AND wait for the book to actually come out and be delivered to my Kindle, I can’t? and that the only way to take advantage of the discount is to wait for the dead-tree version to actually come out? well, never mind, because the book is for sale right now and there’s no Kindle edition in sight. So anyway I have to either get the hardcover or wait until the publisher decides it’s OK to let the Kindle edition out. It’s ridiculous that a hardcover book delivery will actually have me reading it sooner than the instantly-delivered electronic version.

Amazon, this is one area where you have to work with publishers and let them see what a big market they’re missing, and help them reach it. Because all these artificial restrictions, stemming from the irrational fear they have of electronic distribution, will only end up hurting their bottom line. I’m able (and more than willing) to purchase books. Look at my past history if you don’t believe me: even with a 50% delivery overcharge (the joys of not being in the United States) I routinely spent over $500 a year on books. Now I’m a bit weary of ordering physical books, since I’d prefer to offset the delivery cost with my Kindle; however, many of the titles that interest me aren’t available for the Kindle.

Interestingly, I find myself loading mostly classic literature on the Kindle; from Wilkie Collins to Jules Verne, these wonderful titles are available for free in Kindle-compatible formats. This is a consequence of the titles I want not being available on the Kindle; so if I have to choose between Jack London’s Call of the Wild (an old book I’ve read 1000 times, which I can get for free at mobipocket.com) and Robert Sawyer’s Starplex (which I haven’t read, but which is not available for the Kindle), guess what, I’ll get the former.

Now for my one technical gripe: what’s this about books “optimized for large screens”? So now I need a Kindle DX to read content? That just sucks.

So Amazon, you have the clout, but also the flexibility, to work with publishers and stop (both you and them) treating us like second-class citizens just because we find the convenience of the e-book reader worth the high admission price. A lack of reasonably-priced content shouldn’t be part of that price.

Video tutorials suck – most of the time, or – the bow tie

For a long time the Internet has been a veritable treasure trove of howtos and tutorials; this is people (mostly) selflessly sharing the stuff that took them a lot of effort to learn, in order to benefit the crowds. Philosophically, this has a lot to do with the Free Software movement. Most people wouldn’t realize it, but the “share freely” idea is what has propelled pieces of software such as Linux or Firefox to their current positions.

I digress. However, at some point, someone decided that a) the Internet was now fast enough to carry video, and b) people were too stupid to read and follow instructions. This brought about the unfortunate appearance of video tutorials. I usually rant against these, as I can still read faster than I can watch a video where some random dude takes me step by step at his own pace (instead of at mine). Video tutorials also suck when you need a quick, compact piece of reference material to “refresh” your knowledge of a procedure, which would be better served by a 2-KB piece of text instead of a 10-MB, 5-minute video.

Still, I must admit there are instances where a video tutorial makes the most sense; some steps in procedures are, indeed, better explained by following the actual action (and perhaps having a narrator telling you what the hell is going on).

I recently found myself needing to learn how to tie a bowtie. None of the text tutorials helped, no matter how well-written or illustrated they were. There is ONE crucial step that basically necessitates a video for you to understand it. I spent 40 minutes wrestling with the text-and-pictures instructions. The video made it clear in under a minute.

So, without further ado, if you EVER need to learn how to tie a bow tie, don’t bother with anything else: these three videos will show you how it’s done.

The first is the one that best explains the CRUCIAL step of “finding the hole”.

The second one goes into a bit more detail; I hate how the guy says “go ahead and” all the time, but his explanations are good.

The final one is hilarious for the way the woman “handles” her male model, but it’s also instructional and explains the crucial step adequately.

Enjoy!

“To its devotees the bow tie suggests iconoclasm of an Old World sort, a fusty adherence to a contrarian point of view. The bow tie hints at intellectualism, real or feigned, and sometimes suggests technical acumen, perhaps because it is so hard to tie. Bow ties are worn by magicians, country doctors, lawyers and professors and by people hoping to look like the above. But perhaps most of all, wearing a bow tie is a way of broadcasting an aggressive lack of concern for what other people think.”
—Warren St John, The New York Times

Back to the stone age: a tale of two phones

So my iPhone fell and got damaged. To its credit, I have to say I’d hit it pretty hard several times in the past and it had survived. This time it didn’t, and I had to get a replacement, which I had to pay for since it was out of warranty. The truly painful thing, however, was spending one week without the perks of a modern smartphone.

I had to dig out my trusty 5-year-old Nokia 7210 (not the SuperNova, I mean the original funky-buttoned 7210), a stylish and compact phone which, however, is pretty featureless by modern standards. You can talk on the phone, send SMS (barely; I don’t know how I sent messages without a full QWERTY keyboard) and that’s about it. It has no camera, no network access, the screen is only 128-color and uploading stuff requires a tedious conversion process, and it only supports 4-voice MIDI polyphonic tones.

This was due in no small part to the death of my Blackberry’s lame battery; the ’berry would have been a decent temporary replacement for the iPhone, even though it’s not compatible with my data plan. So here’s a tip: when a phone is about to be left indefinitely in a drawer, remove the battery.

Being without the iPhone, what I missed the most was:

  • The QWERTY keyboard, without a doubt, is the most-missed feature. Whether virtual or real, it’s a necessity if you plan on composing a lot of text.
  • The camera, believe it or not, is really useful for a lot of purposes.
  • Synchronization with my computer’s address book. A lesser phone can do it but the Nokia lacked connectivity (only infrared).
  • The browser, being able to access the internet anywhere, anytime has become a true necessity.
  • E-mail. Yes, also not being able to receive emails periodically or, at least, on demand, is crippling and makes me feel out of touch and claustrophobic.
  • Music, I guess it’s a case of “if you have it, you will use it”. Somehow carrying the iPod around in addition to the Nokia didn’t seem like a good idea.

What I didn’t miss:

  • Ringtones. However weak the Nokia’s ringtone support is, it’s very loud and adequate, and my favorite ringtone ever (acceleration.mid) was available. I like it so much, I made an MP3 of it and loaded it on the iPhone.
  • GPS. It’s cool to have it but I really don’t use it all that often.
  • Most of my games. I don’t play on the iPhone that often. I must point out that neither the Nokia nor the iPhone had the “snakes” game from older (and newer) Nokia phones. I guess this 7210 got stuck in the past.

Also, in case you hadn’t noticed, the entire point of this rant was so that I could have a new post before the 12th and thus keep my blog updated “more than once every 6 months”.