Nerd Food: 2009: Year of World Domination?
Ninety-Ninety Rule: n. The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time. – The Jargon File
Sunday, February 1, 2009
World Domination Revisited
It's hard to believe, but it has been almost two years since I ranted about World Domination. It seems appropriate to resume the theme, but from a slightly different angle.
So, let's cut to the chase. Will 2009 be the Year of the Linux Desktop? The short answer is no, and I think it's now clear the same answer will apply to 2010, 2011 and so on. In fact, in 2009 we won't even meet the looser definition of World Domination I proposed, although we're getting closer.
What has happened since 2007? Well, we became a lot more popular in the desktop segment, in particular in the whole new category of MIDs and netbooks; at one point we dominated it entirely, but over time normal computing trends reasserted themselves. According to the latest figures, Linux now ships on around 10-15% of new netbooks, as opposed to the incredible 100% we had in the beginning. Netbooks had the highest visibility but, to be fair, some inroads were made in all the usual areas, including the server and regular desktop markets. Good, but nothing jaw-dropping.
So we find ourselves, two years later, asking the age-old questions. Why haven't we seen at least one significant large migration? Why didn't one of the multinational companies take a punt on a 100K-seat Red Hat or Novell desktop migration?
A Maturing Industry
For those who, like me, have been using Linux for over a decade, there has always been a lingering feeling, an idea in the back of one's mind insisting that one day the world would suddenly get it: the herd would finally see the light and the big mass migration would begin. Microsoft's time would come to an end, just as the once unassailable empires of IBM and DEC eventually faded. But time has gone by, the public has been exposed to Free Software, and yet no major visible changes have occurred. On the plus side, all these years of waiting gave us plenty of data points with which to calibrate our understanding of the software industry. Perhaps now we can begin to build a model that explains why things turned out the way they did.
Looking back, one of the key things one cannot fail to notice is how the world has changed in the last twenty years. The world in which I was brought up was a world of fast-paced change, of great volatility: companies sprang from nowhere, dominated the market and then disappeared as if they had never existed. Sinclair's ZX Spectrum, Commodore's C-64 and Amiga, Tandy's machines and so many other brands came and went, all in the blink of an eye. Those were heady times. It went on for so long that one started to believe this was the way of the world: a superior product immediately bought by a significant critical mass of consumers, only to be dumped as soon as a new leader emerged.
But the computer world was small then, and it belonged mainly to geeks. The homebrew generation had receded into the shade a bit, to be sure, and mass production took over; things got easier for users. However, the target market was still tiny, and still mainly composed of dedicated people willing to put in the hours to get things to work.
Non-geeks, particularly suits, viewed the wild west mentality of the technology sector in a completely different light. For them, it was a serious problem. The technology was promising and the killer applications were beginning to appear, but caution ruled the day. After all, one could invest a considerable amount of money in hardware and software, make a glorious three-year plan with all the bells and whistles, only to find out that the vendors had folded or given up on the products entirely. This was no way to run a business.
Microsoft saw it clearly and responded by talking the language business wanted to hear. It was time for "workgroups" and "solutions", for "office" and "enterprise". Windows 3.11 was a great step in that direction but, SWAGing somewhat, I'd say Windows 95 was the cornerstone. It marked the end of the wild frontier days and signalled the consolidation of the new world order. Microsoft's strategies, much like those of a Machiavellian prince, were focused on stabilisation through domination. It was nasty, but extremely effective. Their dominance achieved what business most wanted: predictability and standardisation. Here, finally, was solid ground on which to plan.
The result was explosive. Microsoft went from a small vendor in a highly competitive market to the dominant force. By creating standards, however closed one may consider them to be, the company helped the overall market expand exponentially and developed a vital symbiosis with business. Each release of Windows provided both stability (through flawless backwards compatibility) and a dazzling number of new features; and each release was delivered at a rate which fitted nicely with business's need for planning.
And so the world went, in this nice happy fashion, through Windows 95 and 98, NT 3.5 and NT 4, Windows 2000 and Windows XP. What Microsoft managed to achieve between Windows 95 and XP was ubiquity. Suddenly everyone everywhere was using Windows and Office, and the numbers were huge. Immense. The geeks were finally, completely, totally, utterly dwarfed by the non-geeks.
XP was a landmark release in more ways than one, though. It is with XP that we meet our second inflection point, the one at which the symbiosis between Microsoft and business started to break down. Until XP, business never really questioned the upgrade cycle: hardware got so much faster from one year to the next that it made perfect sense to upgrade machines regularly. Software itself also saw dramatic improvements from release to release, benefiting those who kept up with the times. Being at the crest of the wave was a competitive advantage.
But with XP, strange things began to happen:
- Moore's law hit a social barrier; suddenly people stopped wanting the fastest PCs and started looking instead for other things such as peripherals, bigger and better monitors, etc. And much more importantly, low end PCs became good enough for the vast majority of tasks.
- Microsoft's products went from being seen as the cheap alternative to expensive brands, to the expensive brand with no real alternatives. The operating system's cost became a significant part of the overall PC cost.
- Upgrade fatigue kicked in, and many companies began to ask exactly why there was a need to change the entire estate so frequently.
- The size of the PC market became so large that it just wasn't feasible for a large part of it to upgrade quickly, as had happened in the past.
In short, the PC market started showing signs of maturity. Microsoft's objective, the attempt to stabilise and standardise the PC market, had been achieved; but at the same time, its success may have brought about great difficulties for the company. Seen in this light, Vista's problems are not so much technical. There may be a number of significant issues with the operating system, although most Windows users I regularly speak to seem to be pretty happy with it. It has its rough edges, but so did XP in the beginning and Windows 95 before it, and that didn't stop them from being huge successes.
What has changed fundamentally is the relationship between Microsoft and its user base. There just isn't any need for mass upgrades any more, and the more constrained IT budgets get, the more obvious this becomes. After all, Vista was extremely successful in the new PC segment; it struggled much more when trying to convince existing PC owners to upgrade. I am strongly convinced that Windows 7 will suffer the same fate. The crucial element in its adoption is going to be the End of Life of XP, because no business will want to run a product that is no longer supported by its creator. When EOL is declared for XP, all businesses will start to migrate to Windows 7, but not before then. They are more than happy with XP; it works, it's well supported and, more importantly, "they know how it behaves". The learning curve will start from scratch whether they move to Vista or to Windows 7 and, from a commercial perspective, for no particularly good reason.
In truth, business needs only a platform that is one of the following:
- good enough (in which case they won't change)
- compellingly better (in which case they will want to change)
- compellingly cheaper (in which case they will be made to change by external pressures)
It's becoming harder and harder to create software that is so compellingly better that it makes users upgrade. And Microsoft cannot start a cannibalisation strategy based on price, because its business model is based on the notion that products become progressively more expensive (after all, their R&D costs increase dramatically from release to release, in the elusive search for killer features). The only weapon left to the company is to force customers to upgrade by whatever other means are available, such as EOLing products. This can only be done for so long before business wises up.
It is in the midst of this carnage that Mac and Free Software products are competing. In view of this, one can conclude that no one — Microsoft included — will have an easy ride convincing large numbers of existing users, business or home, to switch. The real fight for change is going to happen on the fringes of the PC market, the beaches where those new machines are being sold.
Here, there are two weapons available to Free Software: technological superiority and price.
Competing on Technological Superiority and Price
A lot of nerds, to some extent myself included, are convinced that Linux is technologically superior to Windows. In short, UNIX is elegant and Windows is a kludge. The mystery is why no one else seems to see this. However, when one delves a bit further, there are several problems with the current state of Linux, and all of them are related to the ninety-ninety rule.
The crucial difference between the business-oriented approach taken by Microsoft and other software vendors and the Free Software way is this: it's better to have something that works somewhat now than something that works perfectly in a few years' time. With this in mind, one can spot many, many things that were developed in Windows with an almost exclusive focus on time-to-deliver. Microsoft's engineers didn't spend months looking at X-Windows to implement a GUI, nor did they worry about remoting until they were forced to, or about shell scripting; the list goes on and on. Now contrast that with Linux:
- D-Bus was years in the making, and it's only now that we're seeing significant adoption at the application level, with many exported interfaces (see the sketch after this list);
- GStreamer was years in the making, and it's only now that we're seeing stability at the codec level and good support for the most popular formats;
- PulseAudio has been years in the making and we're still experiencing loss of sound, problems with proprietary applications, etc.;
- XFree86 and X.Org have been years in the making, and we still have problems with some drivers and a flickering startup on boot and on user switching.
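To make the point about exported interfaces a bit more concrete, here is a minimal sketch using the dbus-python bindings to call the standard org.freedesktop.Notifications service. The bus name, object path and method signature come from the freedesktop.org notification specification; the rest is purely illustrative rather than any particular application's API.

```python
import dbus

# Connect to the per-user session bus; assumes a normal desktop session
# with a D-Bus daemon already running.
bus = dbus.SessionBus()

# The notification daemon exports an object implementing the
# org.freedesktop.Notifications interface.
proxy = bus.get_object("org.freedesktop.Notifications",
                       "/org/freedesktop/Notifications")
notifications = dbus.Interface(proxy, "org.freedesktop.Notifications")

# Call the exported Notify method: app name, replaces-id, icon, summary,
# body, actions, hints and a timeout in milliseconds.
notifications.Notify("dbus-demo", 0, "", "Hello from D-Bus",
                     "Sent through an exported interface.", [], {}, 5000)
```

The same pattern, a well-known bus name plus a published interface, is what lets unrelated applications integrate with one another without linking against each other's code.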
The list goes on and on. From a user perspective, it matters not that PulseAudio (to pick but one victim) is architecturally extremely well designed and copes with a horrendously complex problem domain, made all the more complex by the zoo of sound solutions in Linux. What matters is that he or she cannot use Skype to talk to their friends because it doesn't work. Or it may work, but the instructions are so complex that no sane non-geek could follow them. Or that using Flash causes the web browser to crash.
In general, I think it's fair to say that in places where there was enough time to think, design, implement and stabilise a solution, Free Software projects did a much better job than Windows; take packaging at the distribution level, for example, and compare that with the number of clicks required to keep all Windows applications up to date. However, due to the very nature of Free Software development, solutions have a tendency to take a lot longer to reach stability. This is a good thing, because when they mature, they are real technical achievements.
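As a rough illustration of what distribution-level packaging buys you, here is a sketch using the python-apt bindings found on Debian-based systems; the exact method names have shifted a little between versions, so treat it as a sketch rather than gospel.

```python
import apt  # python-apt bindings, shipped on Debian-based distributions

# One cache, one operation, every installed application: roughly the
# equivalent of 'apt-get update && apt-get upgrade'. Needs root to commit.
cache = apt.Cache()
cache.update()    # refresh the package lists
cache.open()      # re-read the freshly updated cache
cache.upgrade()   # mark every upgradable package
cache.commit()    # download and install the lot
```

Compare that with visiting each Windows application's own updater in turn.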
For example, it would have been easy to slot X.Org into the kernel, much like Windows did in the past to achieve better performance. Not so in Linux. The long path was taken, painstakingly working out which bits of code really needed to be in the kernel and which should live in X land. The end result, KMS, is amazing, and will have a large number of side benefits, as most changes in Free Software tend to have. But even when all the KMS code has been merged, we will still have to wait for the binary drivers to pick up these changes, so it may be quite some time until end users see any benefits. SELinux is another example. Implementing the infrastructure and changing the kernel was in itself hard; but the real toil is now being done by Red Hat and the community, spending many painstaking hours going through applications and creating the appropriate policies. Only then will SELinux really shine.
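To give a feel for what those policies actually govern, here is a tiny sketch, assuming a system with SELinux enabled, that reads the security contexts the kernel attaches to processes and files; the policies being written are what decide which contexts may touch which.

```python
import os

# Every process carries a SELinux context, exposed by the kernel in /proc.
with open("/proc/self/attr/current") as f:
    print("this process runs as:", f.read().strip("\x00\n"))

# Files carry their labels in an extended attribute
# (os.getxattr needs Python 3.3 or later).
label = os.getxattr("/etc/passwd", "security.selinux")
print("/etc/passwd is labelled:", label.decode().strip("\x00"))
```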
So, whilst I don't think, from a user perspective, that we are superior to Windows at present, I do think that in the near future (three years) we will be. What's more, we now have a platform for growth, and it's really easy to bring companies on board. For example, just look at Dropbox and its Nautilus integration.
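For the curious, integration of the Dropbox kind was made quite approachable by the python-nautilus bindings. The sketch below shows the general shape of such an extension; the class and constructor details varied between versions, so take it as an outline rather than Dropbox's actual code.

```python
import nautilus

class HelloExtension(nautilus.MenuProvider):
    """Adds a context-menu entry to files selected in Nautilus."""

    def get_file_items(self, window, files):
        # Offer one menu item for whatever selection the user has made.
        item = nautilus.MenuItem("HelloExtension::hello",
                                 "Say hello",
                                 "Print the URIs of the selected files")
        item.connect("activate", self.on_activate, files)
        return [item]

    def on_activate(self, menu_item, files):
        for f in files:
            print(f.get_uri())
```

Dropped into Nautilus's Python extensions directory, something of this shape is all it takes to hook into the file manager's context menu.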
The Linux desktop of the future will be so uber-cool it's impossible to describe. Insanely fast boots, fantastic graphics support with no flickering from boot to desktop or on user switching, great integration between applications courtesy of D-Bus, all sorts of weird and wonderful sound capabilities courtesy of PulseAudio, Telepathy for presence, great UIs in GNOME and KDE. And all because each developer chose to take the long and hard path rather than the easy way out.
That being said, we have to live in the present, and we are still at the point of paying the cost. Soon the second 90% will be done.
The last topic I'd like to discuss is price. If there is something Microsoft cannot compete with Free Software on, it's price. After all, one can't really go much lower than zero. However, it's important to note that when it comes to business, cost is a tricky thing. So much of it comes down to perception. After all, one could argue successfully that user retraining is required to move from Windows to Linux. This would dramatically increase the costs, making such a move prohibitively expensive. By the same token, one could look at the example of people such as Dave Largo, and conclude that Linux can be easily adopted by end users with very little training, requires little hardware and is infinitely configurable with little effort.
In truth, cost will never be an easy proposition for Linux until technical superiority is attained. On the server side, the battle was not "won" because Linux was free, but because the solutions being offered were technically superior, integrated well with Windows and were priced at a significant discount to the Windows equivalents. Only then did price become significant. Something similar needs to happen in the desktop market.
Conclusion
I hope I have succeeded in demonstrating that there will never be a Year of the Linux Desktop as such; instead, one should expect the continuation of present trends: a sequence of years with slow and steady gains. Maturity has changed the rules of the game somewhat.
If we had had the current Linux desktop a decade or so ago, when the market was younger and more fragmented, we would probably have taken a significant share of it, even competing against XP. But things have changed, and there is a lot more inertia everywhere. As for the battle for new computers, the key factor there will be technological superiority. Linux will stand a good chance of fighting for that market in the next three years, once all the core infrastructure stabilises.