Why I’m staying on Unity

A very interesting conversation erupted today, beginning when a coworker sent a lengthy email laying out his reasons for abandoning Ubuntu 11.04’s new Unity desktop interface altogether and going back to the good, old-fashioned Gnome 2 “Classic” session.

In it he makes some very valid points about functionality that’s different to what he was used to. This understandably affects his workflow, so instead of wrestling with a new interface, he chose to go with the old one, hopefully until Unity matures enough for him to be able to customize it to his liking.

 

What was interesting was the number of responses it got, with everyone chiming in about their “pet peeves” with Unity. The vast majority were changes in how Unity handles things that interfered with people’s workflows. It’s understandable: when you’ve become adept at working with a user interface, even a small change in how it behaves disrupts things enough (and annoyingly enough) that you either go back to the old interface, or start fiddling with the new one until you find a way to get things to an acceptable state.

What struck me as curious about this thread is that there were basically two camps: those who flat out abandoned Unity for the time being, and those who actually went looking into how Unity behaves and integrates with the environment, and came up with ways to make it more comfortable for those used to the “old ways” of Gnome 2.x and its desktop interface.

Without taking anything away from the original poster, whose points were quite valid, a lot of responses suggested ways to solve about 80% of his complaints about Unity. Still, the fact that it took a team of experts to solve the problems one user (and another expert, at that) was experiencing is testament to how much more intuitive, easier and more customizable Unity could still become.

I finally upgraded to Ubuntu 11.04 and Unity this past weekend. Like many, I experienced some usability issues, where the desktop wasn’t behaving the way I was used to. However, my use of the system means that I basically want the UI to stay out of my way. So the main change I had to make was to set the Unity dock to auto-hide, so that it only appears when I ask it to. The rest of the time it’s hidden away. Everything else, well, it’s admittedly different from what I’m used to, but that’s change for you. Is Unity change for change’s sake? Maybe so, but I think it’s change in the right direction. Even if it somewhat alienates experienced users (for whom, however, workarounds exist that handle nearly all their concerns), I think the true measure of Unity’s success is how it works for new users. Here are two examples.

Another coworker posted his experience with showing Ubuntu and Unity to a newbie, fresh-from-Windows user. The user’s comments were along the lines of “this looks nice”, “It’s easy to use” and “I’m keeping it”.

Also, even though some have complained about the app lens being hard to use (a complaint I’ve already seen twice), I’ve seen users realize “but hey, if it’s really that messy, you can use the search field to find what you need, right?”. So yes, end users are realizing this, and it’s just a matter of polishing how things work. If anything, I think it’s great to move users away from the “the computer has only two buttons” mindset and get them using the keyboard a little more.

So yes indeed, I’m staying on Unity, and I’m looking forward to seeing it mature into a better desktop interface. As Mark Shuttleworth said, it’s a foundation on which the next generations of the Ubuntu user experience will be built. I’ll be thrilled to be along for the ride.

Finally, for a great write-up on why your desktop changed, and on why the developers would appreciate you giving it a whirl and helping improve it (even just by commenting on the stuff you find hard, unintuitive or just plain wrong) rather than swearing off these newfangled changes (without which, face it, you’d still be using fvwm and MIT Athena widgets), please drop by Federico Mena-Quintero’s activity log and read his wonderful, short article “Moving into your new Gnome 3 house”.

The myth of better device support on Windows

It’s long been argued that peripheral support in Linux is far inferior to that under Windows, and that this has been a factor in Windows’ dominance of the desktop. More and more, the myth that Windows has any kind of technical superiority is giving way to the realization that marketing, and being bundled with nearly every PC sold worldwide, are the only keys to its widespread adoption. And here’s a story to prove that point.

I bought a printer (an HP Photosmart C4780). It’s one of those cheap, $50 numbers that eat through ink like crazy. So I came home wondering whether I’d have to install the 500 MB of crap included on the bundled CD to get the printer to work with my Mac at home.

As is usually the case with the Mac, I just plugged it in and it worked, both the printer and the scanner, without a hitch or problem.

I then proceeded to do the same on a freshly installed Ubuntu 10.10 laptop. Same story, the printer just worked, and Ubuntu even recognized it when being plugged in, no need to install drivers or anything.

Now, on Windows the printer wouldn’t have worked at all without installing a boatload of crap; HP is notoriously bloaty when it comes to its bundled software.

The usual wisdom is that hardware manufacturers care more about Windows, and ship all their hardware with drivers and utilities to make it work there. It would seem, then, that the burden is on Apple and the Linux distributions to provide drivers and support for most hardware, which sounds like a daunting task. But they do it, and the end result is that Mac OS and most Linux distros include drivers for nearly everything, right out of the box. This puts them a step ahead of Windows when it comes to ease of use, at the cost of maybe a slight bloat. Still, my Ubuntu installation is much leaner than the 16-GB behemoth that is Windows 7.

So there you have it, the myth of better hardware support on Windows, finally debunked.

Now, if I could only get the braindead wireless support on the HP printer to work…

Flash Sucks

A world without Flash?

I’ve always been a hater of Macromedia/Adobe Flash. Now that the whole Apple-Adobe controversy has rekindled the debate over whether the web is a better or worse place because of Flash, I’ve realized why it is I don’t like Flash.

Also, I realized most technically-inclined people dislike Flash too, because they recognize a lot of its shortcomings, unlike the layperson who only cares about the web being pretty, full of animations and beeps and stuff.

Now, before I begin, let me state this: I’m griping about Flash as a web content creation platform/tool. I couldn’t care less about its use as a mobile development tool. A lot of bloggers have expressed more informed opinions on this topic.

For me, a true Flash hater, what Flash does is take control away from the end user, the consumer of content, and hand it to the content creator, the designer.

If you’re the designer this is all fine and dandy; you can control exactly what the user sees, you can tell your application to be exactly this many pixels wide and this many pixels high, and dictate how it looks and behaves down to the pixel and the microsecond. This is why designers love Flash: it not only lets them work in a familiar environment with familiar tools, it also gives them complete control over what the user sees and can do.

By the way, don’t be fooled; a designer who claims to know web design but uses only Flash is not a web designer. Flash was created to let designers (Adobe’s primary clientele) claim, untruthfully, that they can design web sites.

A Flash-only website. Click it and weep.

The problem is, the web wasn’t meant to be this way. Fundamentally, the kind of content the web was created for was meant to empower the user. This is why the web browser was designed from the very beginning not to impose those very parameters (width, height, fonts, and so on); the content should adjust to whatever the user’s agent can display. So web content reflows to adapt to your browser; it should degrade gracefully on systems that for any reason lack a certain capability (think Lynx and visually impaired users). It should also allow me, the user, to alter how it looks and is rendered. This is why I can disable cookies and JavaScript, replace or even remove altogether the CSS used to format my content, decide not to display images, and so on.

Even the most complex non-Flash web page consists of text and images, and with a bit of cleverness I can take both and incorporate them into the rest of my workflow: paste them into a document, translate them, email them to someone else. The possibilities are limitless, since web content is delivered to me as-is, as bytes I can not only look at but also manipulate like any other kind of information on my computer.
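
To make that last point concrete, here’s a tiny Ruby sketch of what “content as bytes” buys you: fetch a plain HTML page and pull out its image references so they can be reused elsewhere. The URL is just a placeholder, and a real script would use a proper HTML parser, but even this crude version works because the content is open to inspection:

require 'open-uri'

# Fetch an ordinary HTML page (placeholder URL) as plain bytes.
html = URI.parse('http://example.com/').read

# Pull out every image reference so it can be saved, reused or catalogued.
images = html.scan(/<img[^>]+src="([^"]+)"/i).flatten
puts images

Try doing that with a site that is a single opaque .swf blob.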

This freedom is lost on a Flash-only (or mostly Flash) website. What’s worse, instead of the content being, well, content, stuff I can get out of the browser and process and manipulate in other ways, it becomes merely an image, a photograph or a movie trapped in the clutches of the Flash plugin. I can’t copy the text, I can’t scroll except through the provisions the designer made for me, I can’t easily extract the audio or the images, and I’m limited not by the constraints of my browser but by those set forth by Adobe, through its display plugin, and by the designer. And let’s face it, most designers are clueless about user interfaces and ease of use, unlike the people who designed my web browser, which is rendered mostly useless on a Flash site.

It is this loss of freedom that makes Flash so dangerous, and why I think it would be a good thing for Flash to disappear eventually.

Flash adds nothing of true value to the Web; we could all live happily without the animations, without the desktop-apps-masquerading-as-web-apps made in Flash (write a Web app from the ground up, it’s not that hard), and without the stupid content that forces me to work its way instead of my way. Luckily, thanks to the advent of HTML5, we won’t need Flash even for the one thing for which it has proven indispensable: web video. Because, let’s face it, web video was Flash’s killer application; everything else that could once be done only in Flash is now doable in AJAX, CSS and JavaScript. And honestly, if Flash had been such a good technology for those things, we would have stayed with it and not bothered with anything else.

If anything, the existence of so many alternatives to Flash and whatever it can do is evidence that the world at large truly does not like Flash.

Open letter to Amazon.com: Please make my Kindle not suck

Update: It appears Amazon is indeed listening; I was able to preorder Robert J. Sawyer’s latest for Kindle delivery, and most of the titles I talk about in this post are already available in my region. Thanks, Amazon!

Like millions of people (according to Amazon.com), I own a Kindle e-book reader. However, I’m a bit irked by the fact that Amazon is treating Kindle users as second-class citizens. We early adopters paid a hefty sum for Amazon’s flagship product, and I think we deserve better.

I’ve been a fan of e-ink technology since I first learned about the early, clumsy prototypes. When the original Kindle came out, I nearly jumped at the chance to get one. However, I decided that the hassle of having a Kindle in a non-supported country (Mexico), meaning I’d have to jump through hoops to get content onto the device, wasn’t worth being an early adopter.

So patiently I waited, until, in late 2009, Amazon finally started selling the Kindle, complete with wireless content delivery, in Mexico and a host of other countries. “Great”, I thought. “I get to have my nice gadget, save on shipping costs and delivery time, and I still get to read a lot”.

The story has been a bit different. And it has more to do with politics and commercial interests than with technology. Let’s get this out of the way right now: I have only ONE complaint about the tech side of the Kindle, and it doesn’t even have anything to do with the product itself. More about that later.

So I got my shiny new Kindle and went online to get some books for it. I naturally searched for my favorite sci-fi author, Canadian writer Robert J. Sawyer.

To my dismay, there’s very little from him available as Kindle content. None of the books I was interested in were available: not Calculating God, the first RJS book I read; not Factoring Humanity, my all-time RJS favorite; not even the Quintaglio Ascension trilogy, one of the very few RJS titles I haven’t read. They’re simply not available for the Kindle.

Titles are being “kindlefied” all the time. However, the selection is still quite shallow.

Sometimes I do find the title I’m looking for, only to be greeted by the message “not available in your region”. Amazon, if you CAN send physical books to my region, why can’t you deliver them to my Kindle? I know you’re going to say it’s not the same, but to me, that doesn’t cut it.

A few days ago I received a notification for Dan Simmons’ latest book. Black Hills was to come out in a few days, and I was offered a nice pre-order discount. However, it didn’t apply to the Kindle edition. So you mean to tell me that, even though I’d click on “buy now” this minute AND wait for the book to actually come out before it’s delivered to my Kindle, I can’t have the discount? And that the only way to take advantage of it is to wait for the dead-tree version to come out? Well, never mind, because the book is for sale right now and there’s no Kindle edition in sight. So I have to either get the hardcover or wait until the publisher decides it’s OK to let the Kindle edition out. It’s ridiculous that a hardcover delivery will actually have me reading the book sooner than the instantly-delivered electronic version.

Amazon, this is one area where you have to work with publishers, let them see what a big market they’re missing, and help them reach it. All these artificial restrictions, stemming from their irrational fear of electronic distribution, will only end up hurting their bottom line. I’m able (and more than willing) to purchase books. Look at my past history if you don’t believe me: even with a 50% delivery overcharge (the joys of not being in the United States) I routinely spent over $500 a year on books. Now I’m a bit wary of ordering physical books, since I’d prefer to offset the delivery cost with my Kindle; however, many of the titles that interest me aren’t available for it.

Interestingly, I find myself loading mostly classic literature onto the Kindle; from Wilkie Collins to Jules Verne, these wonderful titles are available for free in Kindle-compatible formats. This is a consequence of the titles I want not being available on the Kindle; so if I have to choose between Jack London’s Call of the Wild (old book, I’ve read it 1000 times, I can get it for free at mobipocket.com) and Robert Sawyer’s Starplex (haven’t read it, but it’s not available for the Kindle), guess what, I’ll get the former.

Now for my one technical gripe: what’s this about books “optimized for large screens”? So now I need a Kindle DX to read content? That just sucks.

So, Amazon: you have the clout, and also the flexibility, to work with publishers so that both you and they stop treating us like second-class citizens just because we find the convenience of the e-book reader worth the high admission price. A lack of reasonably priced content shouldn’t be part of that price.

Back to the stone age: a tale of two phones

So my iPhone fell and got damaged. To its credit I have to say I did hit it pretty hard several times in the past, and it’d survived. However this time it didn’t, and I had to get a replacement. I had to pay for it since it was out of warranty. However the truly painful thing was spending one week without the perks of the modern smartphone.

I had to dig out my trusty 5-year-old Nokia 7210 (not the SuperNova, I mean the original funky-buttoned 7210), a stylish and compact phone which, however, is pretty featureless by modern standards. You can talk on the phone, send SMS (barely; I don’t know how I sent messages without a full QWERTY keyboard) and that’s about it. It has no camera, no network access, the screen is only 128-color and uploading stuff requires a tedious conversion process, and it only supports 4-voice MIDI polyphonic tones.

This was due in no small part to the death of my Blackberry’s lame battery; the ’berry would have been a decent temporary replacement for the iPhone, even though it’s not compatible with my data plan. So here’s a tip: when your phone is about to be left indefinitely in a drawer, remove the battery.

Being without the iPhone, what I missed the most was:

  • The QWERTY keyboard, without a doubt, is the most-missed feature. Whether virtual or real, it’s a necessity if you plan on composing a lot of text.
  • The camera, believe it or not, is really useful for a lot of purposes.
  • Synchronization with my computer’s address book. A lesser phone can do it but the Nokia lacked connectivity (only infrared).
  • The browser. Being able to access the internet anywhere, anytime, has become a true necessity.
  • E-mail. Not being able to receive email periodically, or at least on demand, is crippling and makes me feel out of touch and claustrophobic.
  • Music. I guess it’s a case of “if you have it, you will use it”. Somehow carrying the iPod around in addition to the Nokia didn’t seem like a good idea.

What I didn’t miss:

  • Ringtones. However weak the Nokia’s ringtone support is, it’s very loud and adequate, and my favorite ringtone ever (acceleration.mid) was available. I like it so much, I made an MP3 of it and loaded it on the iPhone.
  • GPS. It’s cool to have it but I really don’t use it all that often.
  • Most of my games. I don’t play on the iPhone that often. I must point out that neither the Nokia nor the iPhone had the “snakes” game from older (and newer) Nokia phones. I guess this 7210 got stuck in the past.

Also, in case you hadn’t noticed, the entire point of this rant was so that I could have a new post before the 12th and thus keep my blog updated “more than once every 6 months”.

The pitfalls of proprietary

Risk is a constant for today’s companies. Google, Microsoft, Apple, IBM (well, maybe not so much IBM), Toyota… they all take risks developing and testing new technologies. The risk lies in the amount of money and resources they devote to creating new technology, and when said technology requires keeping a team to upgrade, fix and evolve it, the risk multiplies. It’s mitigated if the technology is successful and provides a reasonable return on investment. Indeed, the whole point of “risking” your resources is that the resulting technology might prove a commercial success and yield the company many times the investment.

However there are times when things don’t go quite right and a company has to “cut its losses” and scrap a project or product altogether. Google has done it, IBM has done it (PS/2), Apple has done it (the Lisa). Microsoft has done it many times, and in doing it yet again they help me make my point today.

“Users of Microsoft’s 3D simulation platform have been rocked by news that the company has laid off or reassigned most of the platform’s developers,” reads an article at thestandard.com. Microsoft has a 3D simulation platform? Well yes: as part of their venerable Flight Simulator product (which has the honor of being the first piece of commercial software I bought, circa 1988), they had spun off a 3D simulation product. Microsoft’s announced enhancements to the platform meant it was going to be targeted at markets such as real estate, city planning, and law enforcement. Developers in those industries were thrilled, and had already begun work on applications using Microsoft’s ESP technology.

Maybe the reason is the current economic climate; whatever the cause, Microsoft seems to be shedding a lot of “non-essential” teams, among them the Flight Simulator team, followed closely by the ESP team. Streamlining seems like a sensible tactic for a profit-oriented business, right?

Users don’t seem to think that way.

“As a commercial developer who is currently working on two major ESP projects I can’t begin to express the concern I have hearing this news. I look forward to hearing from Microsoft as to the future plans for ESP”

“I’m gutted that this is probably no longer going to see the light of day. It looks like there were a lot of people working really hard to build a revolutionary product. It must be totally crushing for them to see all that work go to waste.”

“My company used it for a solution and invested time and money into getting it approved and purchased. Microsoft sure handed us a raw deal for taking a gamble on their platform.”

Anyway, my point with all this is that proprietary software is a bad idea. Microsoft is the embodiment of everything we loathe in a software company; however much they talk about being business partners, the current schism is an example, a reminder that, should your business no longer be profitable to them, Microsoft won’t hesitate to hang you out to dry. The bottom line is all that matters to them. And their use and selling of proprietary technologies means that, should the worst happen, you’re left with no recourse but to throw all your investment away and start anew with some other product, hoping that the other company won’t do the same to you.

Rather than risk this, why not go free software? Things would be very different if Microsoft opened up ESP; it’s not like they’re going to profit from it anymore. That way, companies with a reasonably talented developer pool could take the project forward, as has happened with many open-sourced, formerly commercial products (Blender comes to mind). That would be a company that protects your investment. Instead, Microsoft just ripped these users off, plain and simple.

For most of the companies developing products with ESP, software development is likely not their primary business, which is exactly why they chose to go with a commercial, specialized software vendor. And look what happened to them! Even if they don’t have the in-house expertise to develop something like ESP, pooling their resources, or funding a non-profit tasked with developing and freely releasing an ESP substitute, would make sense. A law enforcement organization sees no competition from a real estate, architecture or urban planning company, so what’s it to them if they base their custom offerings on the same freely developed product? (Look at Unreal Engine, and at what id Software does with their Quake FPS engines; id has also open-sourced their old releases, which rings true with what I’m ranting about here.) Again, as long as it’s not their core business, there’s no problem with them cooperating in the creation of a component for their main project.

Misery loves company, and at least, through heated discussion on MSDN, the users who were wronged by Microsoft have come in contact with each other. If they have the vision to venture into the world of free software, they now have an opportunity to make sure this never happens again, to them or to anyone else wanting similar technologies.

Nasty bug with binary files, Rails and erb.rb – how to fix it

OK, so I happily hack away on my Rails application on a Debian box with Ruby 1.8.7 and Rails 2.1.0, and then deploy to a Fedora 8 server with Ruby 1.8.6 and Rails 2.2.2. All of a sudden a particular release causes Passenger to spit out an error page on application startup. The key error was:

undefined method `empty?' for nil:NilClass

Now I'm combing all over my code to find where I'm using "empty?" but I'm sure it's somewhere that gets run on application startup, otherwise it wouldn't show up when Passenger tries to start the application. But I find nothing and I'm about to shoot myself.

Following the trace, I end up poking around in Ruby's erb.rb file, as there appear to be some bugs in it; indeed, the 1.8.6 version is different from the one in 1.8.7, which is why the app runs fine on my development box. I try to fix the places where empty? might get called on a nil object, but after fixing three of these the app stops responding altogether. Hmm, so something, somewhere, depends on erb.rb's buggy behavior. Best to leave it alone.

HOWEVER, on the deployment server, running with script/server works fine; it's only when using Passenger that things blow up.

Finally I find this thread that points me in the right direction:

“One of the users dropped some JPEG files into the /app/views/static directory, and that seems to be jamming up the works with 2.2.2.”

Indeed, as part of my last set of revisions, I'd left several samples of static content I was converting into dynamically generated pages; sure enough, they included JPGs and whatnot. Just to be safe, I decided to move the entire directory into public to avoid any problems.

Now the app runs just peachy and I only wasted 2 hours chasing down this bug. Thanks to the guys at Nabble!

Eventually it all boils down to this Rails bug reported at Lighthouse, so hopefully it'll be fixed soon. In the meantime, keep binary files out of your views subtree.
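
As a quick sanity check before deploying, something along these lines can flag the problem. This is just a throwaway Ruby script I'm sketching here (not part of Rails or Passenger), and the list of template extensions is an assumption you'd adjust for your own app:

require 'find'

# Extensions Rails 2.2 should be allowed to treat as view templates (adjust to taste).
TEMPLATE_EXTENSIONS = %w[.erb .rhtml .rxml .rjs .builder .haml]

offenders = []
Find.find('app/views') do |path|
  next unless File.file?(path)
  offenders << path unless TEMPLATE_EXTENSIONS.include?(File.extname(path))
end

if offenders.empty?
  puts 'No non-template files under app/views.'
else
  puts 'These files may choke the template loader; consider moving them to public/:'
  offenders.each { |path| puts "  #{path}" }
end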

I'm attaching the entire Passenger error page, in case it's useful to anyone. Mainly so that Google can find it faster for other people with this problem.

Ruby on Rails application could not be started

These are the possible causes:

  • There may be a syntax error in the application's code. Please check for such errors and fix them.
  • A required library may not installed. Please install all libraries that this application requires.
  • The application may not be properly configured. Please check whether all configuration files are written correctly, fix any incorrect configurations, and restart this application.
  • A service that the application relies on (such as the database server or the Ferret search engine server) may not have been started. Please start that service.

Further information about the error may have been written to the application's log file. Please check it in order to analyse the problem.

Error message:
undefined method `empty?' for nil:NilClass
Exception class:
NoMethodError
Application root:
/var/www/spcccdec/releases/20090227005857
Backtrace:
# File Line Location
0 /usr/lib/ruby/1.8/erb.rb 478 in `scan'
1 /usr/lib/ruby/1.8/erb.rb 524 in `compile'
2 /usr/lib/ruby/1.8/erb.rb 691 in `initialize'
3 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/template_handlers/erb.rb 51 in `new'
4 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/template_handlers/erb.rb 51 in `compile'
5 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/template_handler.rb 11 in `call'
6 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/renderable.rb 21 in `_unmemoized_compiled_source'
7 /usr/lib/ruby/gems/1.8/gems/activesupport-2.2.2/lib/active_support/memoizable.rb 57 in `compiled_source'
8 /usr/lib/ruby/gems/1.8/gems/activesupport-2.2.2/lib/active_support/memoizable.rb 25 in `__send__'
9 /usr/lib/ruby/gems/1.8/gems/activesupport-2.2.2/lib/active_support/memoizable.rb 25 in `memoize_all'
10 /usr/lib/ruby/gems/1.8/gems/activesupport-2.2.2/lib/active_support/memoizable.rb 22 in `each'
11 /usr/lib/ruby/gems/1.8/gems/activesupport-2.2.2/lib/active_support/memoizable.rb 22 in `memoize_all'
12 /usr/lib/ruby/gems/1.8/gems/activesupport-2.2.2/lib/active_support/memoizable.rb 17 in `freeze'
13 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/paths.rb 88 in `reload!'
14 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/paths.rb 102 in `templates_in_path'
15 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/paths.rb 100 in `each'
16 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/paths.rb 100 in `templates_in_path'
17 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/paths.rb 86 in `reload!'
18 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/paths.rb 78 in `load'
19 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/paths.rb 109 in `load'
20 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/paths.rb 109 in `each'
21 /usr/lib/ruby/gems/1.8/gems/actionpack-2.2.2/lib/action_view/paths.rb 109 in `load'
22 /usr/lib/ruby/gems/1.8/gems/rails-2.2.2/lib/initializer.rb 357 in `load_view_paths'
23 /usr/lib/ruby/gems/1.8/gems/rails-2.2.2/lib/initializer.rb 182 in `process'
24 /usr/lib/ruby/gems/1.8/gems/rails-2.2.2/lib/initializer.rb 112 in `send'
25 /usr/lib/ruby/gems/1.8/gems/rails-2.2.2/lib/initializer.rb 112 in `run'
26 ./config/environment.rb 13
27 /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb 31 in `gem_original_require'
28 /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb 31 in `require'
29 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/railz/application_spawner.rb 254 in `preload_application'
30 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/railz/application_spawner.rb 214 in `initialize_server'
31 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/utils.rb 179 in `report_app_init_status'
32 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/railz/application_spawner.rb 203 in `initialize_server'
33 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/abstract_server.rb 166 in `start_synchronously'
34 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/abstract_server.rb 135 in `start'
35 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/abstract_server.rb 112 in `fork'
36 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/abstract_server.rb 112 in `start'
37 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/railz/application_spawner.rb 179 in `start'
38 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/spawn_manager.rb 222 in `spawn_rails_application'
39 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/spawn_manager.rb 217 in `synchronize'
40 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/spawn_manager.rb 217 in `spawn_rails_application'
41 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/spawn_manager.rb 126 in `spawn_application'
42 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/spawn_manager.rb 251 in `handle_spawn_application'
43 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/abstract_server.rb 317 in `__send__'
44 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/abstract_server.rb 317 in `main_loop'
45 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/lib/passenger/abstract_server.rb 168 in `start_synchronously'
46 /usr/lib/ruby/gems/1.8/gems/passenger-2.0.6/bin/passenger-spawn-server 46

Twitter and bad karma

Few internet companies, services or phenomena, if you will, seem to attract as many visceral reactions as Twitter. First there was the whole buzz about Twitter and Ruby on Rails, which always excites passions. Then the “Rails doesn’t scale” debacle. Next the “built wrong” accusations. Twitter even got some sprinklings from Zed Shaw’s spectacular departure from the Ruby scene. Then Twitter screwed over its users and stopped SMS service basically everywhere but the USA.

Recently Twitter indirectly angered another user community when they hired Rael Dornfest, creator of the personal productivity apps I Want Sandy and Stikkit. The problem is that Mr. Dornfest decided to kill both services, although he will take the “intellectual property” behind them to Twitter. Of course he is within his rights to terminate a service that was free and came with no promises, but all the users who had come to rely on these apps certainly don’t agree.

Sandy’s user community seems to be, by far, the most affected: messages at Rael’s “going offline” announcement range from the indifferent few to the truly upset, inflamed and disappointed at the whole Web 2.0 thing, especially Twitter. “Karma’s a bitch”, says one comment, and it’s true that Rael’s decision to leave users hanging out to dry will bring him, and a whole bag of negative karma with him, to Twitter. As another poster said, “I’d be weary of using any Twitter product with your name on it”.

Will Sandy and Stikkit return as Twitter add-ons? Possibly, but the real lesson here is this: Twitter itself is also free, so it might go away whenever its creators decide it’s in their best personal interest to kill it and move on. So might most Web 2.0 apps. So if you must use them and learn to depend on them, you’d better make sure you choose the ones that, at least, let you get your data out when they die.

Apple: Where the hell is the push notification service?

Prior to my current iPhone I had a Blackberry Curve. The single most important feature it had was Blackberry Messenger. I messaged other Blackberry users a lot, and since I paid a flat fee for data usage, I could basically send 1000 messages a day and not get charged extra. Even considering the extra cost for the data plan, the Blackberry was cheaper than my previous Nokia phone, where I had to pay for each SMS I sent.

Then I bought an iPhone. Mind you, in Mexico we had to wait for Apple to create the iPhone 3G. Then we had to pay through the nose for the device, and then again a significant amount for the monthly data plan. All in all, phone service plus data plan pretty much equals what I was paying monthly for my Blackberry. However, because of the SMS usage that replaced what I previously did with Blackberry Messenger, the iPhone was costing me about twice as much as the Blackberry each month. I had to cut back on my messaging; spending that amount of money on a communications device and then having to cut back on your communications just doesn’t make sense. Why did this happen?

The iPhone has no Blackberry messenger equivalent. Sure, there’s fring and plenty of other messaging applications, but since Apple didn’t see fit to allow for 3rd-party background processes, none of these applications work unless they’re in the foreground, unlike BB messenger which would deliver messages at any time.

Can it be done? Sure it can! Apple’s applications (Mail, phone, SMS) do it all the time, beeping and popping a nice notification icon to let you know “you’ve got mail”.

“Not to worry,” I thought. At the WWDC where the iPhone 3G was announced, Apple had also announced a push notification service through which applications could send, via Apple’s servers, background notifications to any iPhone app, to be displayed in several different ways. This would enable messenger functionality for almost any application and would mean you could send instant messages without incurring SMS charges. After all, I’m paying through the nose for an unlimited data plan; I should put it to use. Apple said the service would be available by September.

September came and went. October came and went, and so did November.

iPhone software updates came and went: 2.0 was the original iPhone 3G release. 2.0.1, 2.0.2, the 2.1 major release on September 12th, and 2.1.1. By now we were wondering where our push notifications were.

Apple announced the 2.2 firmware in October. Expectations grew high that it would include the vaunted push notification functionality. But on November 21 we were disappointed again: 2.2 includes mostly eye-candy improvements. Apple, come on! This is an expensive device, and you can’t keep delivering disappointments. Performance and stability improvements are welcome, but WHERE THE HELL are: 1) the PUSH NOTIFICATION SERVICE YOU PROMISED YOU’D DELIVER TWO MONTHS AGO and 2) A FREAKING CUT/PASTE FUNCTION LIKE EVERY $20 PHONE ON EARTH?

This is an official call for Apple to stop wasting time and deliver the functionality I was promised. Now don’t get me wrong, I like the iPhone, but the lack of this service is costing me money, since all those messages I have to send through SMS are not cheap. The thought of going back to the Blackberry has crossed my mind; so Apple, either deliver this sooner rather than later, or I’ll snatch a Blackberry Storm the first chance I get. Because yes, the iPhone is THAT expensive; it’d be cheaper for me to purchase a Storm at retail price than to keep subsidizing, through SMS, the iPhone’s inability to come into the 21st century with regards to BASIC functionality. Oh, and maybe then I’d be able to keep using my wonderful Blackberry unit converter, which I’ve been unable to port to the iPhone because, hey, I can’t afford the $100 to enter the iPhone developer program, because I spend it all on SMS!

BerryUnitConverter 1.2.1

I just released BerryUnitConverter 1.2.1, which fixes a stupid bug in ounce and stone conversions. An upgrade is advisable. BerryUnitConverter is free software; get more info and a download from here.

Is virtualization a step backwards?

A note on Slashdot says that vApp is a tool that will allow developers to “encapsulate the entire app infrastructure in a single bundle — servers and all.” Indeed, part of the push behind virtualization is that you can have an application running on its own instance of the operating system, and share the hardware resources among many such app/OS “bundles”.

I think this way of seeing things is dangerous! Let’s look at history for a bit. First, application programs ran standalone on a computer. As more and more programs began to appear, it became obvious that they all required several common services: memory management, input/output, disk access, printing, graphics routines, and so on. Thus operating systems were born: the OS would handle these common tasks and free application programmers from having to do so. An added benefit is that the OS could arbitrate access to these resources and enable multitasking of several applications, since all the apps talk to the OS through APIs and need not concern themselves with low-level details.

Then beasts such as Windows appeared. Both the OS and the applications that use it are so brain-dead that most vendors who sell server-grade Windows applications basically require that each app have its own dedicated server on a standalone Windows installation.

This, of course, is ridiculous and byzantine. This is where VMware came in and realized that a typical organization could have, say, 10 servers, each running at 5% usage, each with a mission-critical application that absolutely must be alone on its server. And they said, “well, how about we run 10 instances of Windows, isolated from each other through virtualization, and then we can have a single box at 50% usage running all 10 apps the way they want to?”

This is indeed VMware’s bread and butter. But beware! Are you noticing a trend here? By “demoting” each OS/app set to “app bundle” status, VMware is indeed taking a step backwards. Okay, so they want VMware ESX to take the place of the traditional OS and have each application/OS pair running on its own. This looks suspiciously similar to the “app has to do everything by itself” model we escaped from a couple of decades ago!

Sure, as an application programmer I was freed from having to write my own routines for a lot of tasks (on systems such as Mac OS or a decent Linux graphical environment, the libraries free me from a LOT of mundane chores). But the second killer advantage of an OS providing services is efficiency: one piece of software provides each service to all applications, so I run one OS for all my apps and save on memory, disk space and CPU cycles.

By moving down the stack, the actual OS (VMware) provides only very basic services to the “apps” on top, which are now entire operating systems. So, in effect, every app carries a gigantic “library” of functions, since that library is now a whole OS. The overhead of having several copies of the OS running is gigantic: each Windows installation takes up a couple of gigabytes of disk, while consuming a few hundred megabytes of RAM and a fair share of CPU cycles. On startup, you have 10 copies of Windows all performing the exact same boot sequence and reading the same files (albeit from different disk locations, so no caching performance boost).
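
To put rough numbers on the argument, here’s a back-of-the-envelope calculation in Ruby; the per-instance figures are assumptions for illustration, not measurements:

# Assumed footprint of one idle Windows instance (illustrative figures only).
DISK_PER_OS_GB = 2.0
RAM_PER_OS_MB  = 300.0
INSTANCES      = 10

puts "Disk spent on redundant OS copies: #{DISK_PER_OS_GB * INSTANCES} GB"
puts "RAM spent on redundant OS copies:  #{RAM_PER_OS_MB * INSTANCES} MB"
# With one shared OS, that cost is paid roughly once instead of ten times.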

Worst of all, without proprietary hacks you also lose the important benefit of interprocess communication. After all, and this is one of VMware’s purported benefits, each app is isolated from the others by virtue of running under its own OS instance.

So who is the culprit here? Sure, poorly programmed Windows applications, which can’t work without littering your entire hard drive with DLLs and which barf if another unknown process is running at the same time, bear most of the blame. But this trend is spreading to other operating systems (Zimbra, I’m looking at you). A huge step backwards looms over us once developers begin to think, “hey, I can actually take control of the entire operating system and have it bent to my app’s will and requirements; after all, if the user has a problem with that, he can always virtualize my app and OS”.

What is needed is to go back to well-behaved applications, ones that are designed from the ground up to play well with others, and that by this very design trait, do not interfere with others.

I realize that this might be difficult; after all, with all the dependencies between system components, it’s understandable that my app’s database configuration requirements might break another’s. But then again, the solution is NOT to run two apps with TWO separate databases on TWO different operating systems. Either I find a way for my app NOT to mess things up, or I ship a non-system-wrecking component that gives me the service I want. Sure, it’d be a pain in the ass to run two instances of SQL Server, each in a different directory and on a different port, but it beats running two entire copies of Windows. Or wait, wasn’t Windows supposed to be stable enough for this already?

Still, I think it’s a matter of politeness and cooperation between developers not to require me to wreck my OS, or to virtualize, just to run an application. The reasons for virtualization should be different: consolidation of workloads, isolation for security or experimentation purposes, ease of deployment or restoration in case of disaster. Because, hey, do you all remember when everybody was saying “one of the advantages of Windows is that developers don’t have to implement printing, graphics, file access, GUIs and sound separately for each app and for each piece of hardware out there; the OS gives us that service”?

Sure, developers deserve a break, but that’s no excuse to be lazy, and you should think of us, the sysadmins of the world, who also have to care for and feed the operating system instances on which your apps run. And trust me, each OS instance, however virtual it might be, still counts as a separate server, with the same care-and-feeding needs as if it were a standalone box. However cool it might sound, I’d rather not wrestle with 150 virtual servers when 5 well-kept instances would do the same job. KTHX!

prefixed_attributes

I just released a prefixed_attributes plugin for Rails.

Rails has a handy number_to_human_size method, but in order to use it, all your quantities need to be in non-scaled units, and it’s cumbersome to have your users typing 100 gigabyte amounts by hand. You’d normally have a “bytes” column in your records and add virtual attributes to your models. This plugin adds those attributes for you.

The plugin adds a prefixed_attribute method to all your classes. Use it to mark an existing attribute on your class (even a non-AR one) like this:

prefixed_attribute :bytes, :type => :binary
prefixed_attribute :hertz, :type => :si
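
To give a feel for what those virtual attributes do, here’s a minimal stand-alone sketch of the general idea. This is not the plugin’s actual implementation, and the accessor name (gibibytes) is made up for the example:

# A sketch of the idea: expose a human-scaled virtual attribute
# backed by a raw base-unit attribute. Illustrative only.
module ScaledAttribute
  def scaled_attribute(raw, scaled, factor)
    define_method(scaled) do
      value = send(raw)
      value && value.to_f / factor
    end
    define_method("#{scaled}=") do |amount|
      send("#{raw}=", (amount.to_f * factor).round)
    end
  end
end

class Disk
  extend ScaledAttribute
  attr_accessor :bytes
  scaled_attribute :bytes, :gibibytes, 2**30  # binary prefix, like :type => :binary
end

disk = Disk.new
disk.gibibytes = 100
puts disk.bytes       # => 107374182400
puts disk.gibibytes   # => 100.0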

More information here.