Softpanorama Bulletin
Vol 20, No. 03 (July, 2008)


Review of The Big Switch: Rewiring the World, from Edison to Google

An extended defense of a utopian vision of the IT future first published in Carr's HBR article, May 10, 2008 

Save your money. This book contains nothing but an extended defense of the utopian vision of the IT future first published in Carr's HBR article. Limited understanding of the underlying IT technologies, haziness, and a lack of concrete, detailed examples (obscurantism) are typical marks of Carr's style. Carr uses the focus on IT shortcomings as a smokescreen to propose a new utopia: users master complex IT packages and perform all the functions previously provided by IT staff, while "in the cloud" software service providers fill in the rest. This is pretty fine humor, a caricature reminiscent of the mainframe model, but not much more.

His analogies are extremely superficial and completely unconvincing (Google actually can benefit greatly from owning an electrical generation plant or two :-). The complexity of IT systems has no precedent in human history. That means that analogies with railways and the electrical grid are deeply and irrevocably flawed. They do not capture the key characteristics of IT: its unsurpassed complexity and Lego-like flexibility. IT has become the nervous system of the modern organization, not its muscles or legs :-)

Carr's approach to IT is completely ahistorical. Promoting his "everything in the cloud" utopia as the most important transformation of IT ever, he forgot (or simply does not know) that IT has already experienced several dramatic transformations driven by new technologies that emerged in the 1960s, 1970s and 1990s. Each of those transformations was more dramatic and important than the neo-mainframe revolution which he tries to sell as the "bright future of IT" and a panacea for all IT ills. For example, first mainframes replaced "prehistoric" computers. Then minicomputers challenged mainframes ("glass wall" datacenters), and the PC ended mainframe dominance (and democratized computing). In yet another transformation the Internet and TCP/IP (including wireless) converted datacenters to their modern form. What Carr views as the next revolution is just a blip on the screen in comparison with those events, in each of which the technology inside the datacenter and on users' desks changed dramatically.

As for his "everything in the cloud" software service providers, there are at least three competing technologies which might sideline them: application streaming, virtualization (especially virtual appliances), and "cloud in the box". "In the cloud" software services are just one of several emerging technical trends, and the jury is still out on how much market share each of them can grab. Application streaming looks like a direct and increasingly dangerous competitor to the "in the cloud" software services model. Still, all of them are rather complementary technologies, each having advantages in certain situations, and none can be viewed as a universal solution.

The key advantage of application streaming is that you use local computing power for running the application, not a remote server. That removes the latency and bandwidth problems inherent in transmitting the video stream generated by the GUI on the remote server (where the application is running) to the client. Also, modern laptops have tremendous computing power that is very expensive and not easy to match in a remote server farm. Once you launch the application on the client (from a shortcut), the remote server streams (like streaming video or audio) the necessary application files to your PC and the application launches. This is done just once. After that the application works as if it were local. Also, only the required files are sent (so if you are launching Excel you do NOT get the libraries that are shared with MS Word if it is already installed).
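
To make the mechanism concrete, here is a minimal toy sketch in Python of the on-demand model described above: only the files an application actually needs are fetched on first launch, cached locally, and reused afterwards. All names in it (fetch_from_server, the file lists, the cache directory) are hypothetical illustrations, not the protocol of any real streaming product.

    import os

    CACHE_DIR = "./app_cache"                 # local cache on the laptop
    ALREADY_INSTALLED = {"mso_shared.dll"}    # libraries shared with other apps

    def fetch_from_server(filename):
        # Stand-in for the network transfer from the streaming server.
        return ("contents of %s" % filename).encode("utf-8")

    def launch(app_name, required_files):
        if not os.path.isdir(CACHE_DIR):
            os.makedirs(CACHE_DIR)
        for name in required_files:
            local_path = os.path.join(CACHE_DIR, name)
            if name in ALREADY_INSTALLED or os.path.exists(local_path):
                continue                      # already present: nothing to stream
            with open(local_path, "wb") as out:
                out.write(fetch_from_server(name))   # transferred only once
        print("%s now runs locally, on local CPU and disk" % app_name)

    launch("Excel", ["excel.exe", "calc_engine.dll", "mso_shared.dll"])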

Virtualization promises more agile and more efficient local datacenters, and while it can be used by "in the cloud" providers (Amazon uses it), it can also undercut the "in the cloud" software services model in several ways. First of all, it permits packaging a set of key enterprise applications as "virtual appliances". The latter, like streamed applications, run locally, store data locally, are cheaper, have better response time and are more maintainable. This looks to me like a more promising technical approach for complex sets of applications with intensive I/O requirements. For example, you can deliver a LAMP stack appliance (Linux-Apache-MySQL-PHP) and use it on a local server for running your LAMP applications (for example a helpdesk), enjoying the same level of quality and sophistication of packaging and tuning as in the case of remote software providers. But you do not depend on the WAN, since users connect over the LAN, which guarantees fast response time. And your data is stored locally (though, if you wish, it can be backed up remotely to Amazon or another remote storage provider).
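
A back-of-the-envelope illustration of the response-time argument, in Python; the latency and request-count figures below are assumptions picked purely for illustration, not measurements of any particular appliance or provider:

    lan_rtt_ms = 1.0         # assumed round trip to an appliance on the local LAN
    wan_rtt_ms = 80.0        # assumed round trip to a remote "in the cloud" provider
    requests_per_page = 15   # assumed HTTP requests needed to render one helpdesk page

    for label, rtt in (("LAN appliance", lan_rtt_ms), ("WAN provider", wan_rtt_ms)):
        wait = rtt * requests_per_page
        print("%-13s ~%5.0f ms of network wait per page" % (label, wait))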

The other trend is the emergence of a higher level of standardization of datacenters (the "cloud in the box" or "datacenter in the box" trend). It permits cheap prepackaged local datacenters to be installed anywhere. Among examples of this trend are standard shipping-container-based datacenters, which are now sold by Sun and soon will be sold by Microsoft. They already contain typical services like DNS, mail, file sharing, etc. preconfigured. For a fixed cost an organization gets a set of servers capable of serving a mid-size branch or plant. In this case the organization can save money by avoiding monthly "per user" fees -- the typical cost-recovery model of software service providers. It can also be combined with the previous two models: it is easy to stream both applications and virtual appliances to the local datacenter from a central location. For a small organization such a datacenter can now be pre-configured on a couple of servers using Xen or VMware, plus the necessary routers and switches, and shipped in a small rack.
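
To see where the "avoid per-user fees" argument comes from, here is a back-of-the-envelope comparison in Python. Every figure in it (user count, hardware price, staffing cost, subscription fee) is a made-up placeholder, not a quote from Sun, Microsoft or any software service provider:

    users = 500
    years = 5

    container_datacenter = 400000      # assumed one-time hardware + setup cost
    local_admin_per_year = 60000       # assumed yearly maintenance and staffing
    fee_per_user_per_year = 600        # assumed "per user" subscription fee

    local_total = container_datacenter + local_admin_per_year * years
    saas_total = fee_per_user_per_year * users * years

    print("Local 'datacenter in a box', %d years: $%d" % (years, local_total))
    print("Per-user subscription, %d years: $%d" % (years, saas_total))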

I would like to stress that the power and versatility of the modern laptop is a factor that should not be underestimated. It completely invalidates Carr's cloudy dream of users voluntarily switching to the network terminal model inherent in centralized software services (BTW, mainframe terminals and, especially, "glass wall" datacenters were passionately hated by users). Remotely running applications have mass appeal only in very limited cases (webmail). I think that users will fight tooth and nail to preserve the level of autonomy provided by modern laptops. Moreover, users will in no way agree to the sub-standard response time and limited feature set of "in the cloud" applications, as the problems with Google Apps adoption demonstrated.

While Google Apps is an interesting project, now used in many small organizations instead of their own mail and calendar infrastructure, it can serve as a litmus test for the difficulties of replacing "installed" applications with "in the cloud" applications. First of all, if we are talking about replacing OpenOffice or Microsoft Office, Google Apps functionality is really, really limited. At the same time Google has spent a lot of money and effort creating it, but never got any significant traction and/or a sizable return on investment. After several years of existence this product did not even come close to the functionality of OpenOffice, to say nothing of Microsoft Office. To increase penetration Google recently started licensing it to Salesforce and other firms. That means the whole idea might be flawed: even such an extremely powerful organization as Google, with its highly qualified staff and the huge server power of its datacenters, cannot create an application suite that competes with applications preinstalled on a laptop, which means it cannot compete with the convenience and speed of running applications locally on a modern laptop.

In the case of corporate editions the price is also an issue ($50 per user per year for Google Apps vs. $220 for Microsoft Office Professional). It in no way looks like a bargain if we assume a five-to-seven-year life span for MS Office. The same situation exists for home users: price-wise, Microsoft Office can now be classified as shareware (Microsoft Office Home and Student 2007, which includes Excel, PowerPoint, Word, and OneNote, costs ~$100, or ~$25 per application). So for home users Google needs to provide Google Apps for free, which, taking into account the amount of design effort and the complexity of achieving compatibility, is not a very good way of investing available cash. Please note that Microsoft can at any time add the ability to stream Office and other applications to laptops and put "pure play" cloud application providers in a really difficult position: remote servers need to provide the same quality of interface and amount of computing power per user as the user enjoys on a modern laptop. That also suggests the existence of some principal limitations of the "in the cloud" approach for any complex application domain: SAP has problems with moving SAP R/3 to the cloud too, and recently decided to scale back its efforts in this direction.
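
The per-seat arithmetic behind the previous paragraph, as a small Python sketch; the prices are the ones quoted above, and the five-to-seven-year life span is the assumption stated there:

    google_apps_per_user_per_year = 50    # corporate Google Apps subscription
    office_pro_one_time = 220             # Microsoft Office Professional license

    for lifespan in (5, 6, 7):
        subscription_total = google_apps_per_user_per_year * lifespan
        office_per_year = office_pro_one_time / float(lifespan)
        print("%d years: Google Apps $%d vs. MS Office $%d (~$%.0f/year)"
              % (lifespan, subscription_total, office_pro_one_time, office_per_year))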

All in all, the computing power of a modern dual-core 3 GHz laptop with 4GB of memory and a 200GB hard drive represents a serious challenge for "in the cloud" software service providers. This power makes it difficult for them to attract individual users' money outside advertising-based or other indirect models. It is even more difficult for them to "shake corporate money loose": corporate users value the independence of applications installed locally on a laptop and the ability to store data locally. Not everybody wants to share their latest business plans with Google.

Therefore Carr's 2003 vision looks even less realistic in 2008 than it did five years earlier. Since during those five years datacenters actually continued to grow, Carr's value as a tech-trends forecaster is open to question.

Another problem with Carr's neo-mainframe vision is its propaganda of "bandwidth communism". Good WAN connectivity is far from free. The experience of any university datacenter convincingly demonstrates that a dozen P2P enthusiasts in the neighborhood can prove the futility of dreams about free, high-quality WAN connectivity to any skeptic. In other words, this is a typical "tragedy of the commons" problem and should be analyzed as such.

Viewed from this angle, Carr's vision of reliable and free 24x7 communication with remote datacenters looks rather unrealistic. This shortcoming can be compensated for by the properties of some protocols (for example SMTP mail), and for such protocols it is not a problem, but for others it is and always will be. At the same time, buying dedicated WAN links can be extremely expensive: for mid-size companies it is usually as expensive as keeping everything in house. Large companies usually already have "private clouds" anyway. That makes the "in the cloud" approach problematic for any service where disruptions or low bandwidth at certain times of the day can lead to substantial monetary losses. Also, bandwidth is limited: for example, OC-1 and OC-3 lines have upper limits of 51.84 Mbit/s and 155.52 Mbit/s respectively. And even within an organization not all bandwidth is used for business purposes: in a large organization there are always many "entertainment-oriented" users who strain the firm's connection to the Internet cloud.
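
To put those line rates in perspective, a short Python sketch; the OC-1/OC-3 figures are the standard SONET rates quoted above, while the per-session and per-P2P-user rates are assumptions chosen purely for illustration:

    OC1_MBITS = 51.84
    OC3_MBITS = 155.52

    gui_session_mbits = 1.5    # assumed bitrate of one streamed GUI/video session
    p2p_user_mbits = 5.0       # assumed sustained rate of one P2P enthusiast

    for name, capacity in (("OC-1", OC1_MBITS), ("OC-3", OC3_MBITS)):
        sessions = int(capacity / gui_session_mbits)
        p2p_users = int(capacity / p2p_user_mbits)
        print("%s: ~%d concurrent sessions, or saturated by ~%d P2P users"
              % (name, sessions, p2p_users))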

Another relevant question to ask is: "What are the financial benefits to a large organization of implementing Carr's vision?" I do not see any substantial financial gains. IT costs in large enterprises are already minimized (often 1-3% of total costs) and further minimization does not bring much benefit. What can you save from just 1% of total costs? But you can lose a lot. Are fraction-of-a-percent savings worth the risk of outsourcing your own nervous system? That translates into the question: "What are the principal differences in the behavior of these two IT models during catastrophic events?"
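
The "fraction of a percent" point in numbers, as a Python sketch; the cost base, the IT share and the assumed reduction from outsourcing are all illustrative placeholders, not data from any real company:

    total_costs = 1e9        # assumed $1B annual cost base of a large enterprise
    it_share = 0.02          # IT at 2% of total costs (within the 1-3% range above)
    cloud_reduction = 0.20   # assume outsourcing trims IT spending by 20%

    savings = total_costs * it_share * cloud_reduction
    print("Savings: $%.0f, i.e. %.2f%% of total costs"
          % (savings, 100.0 * savings / total_costs))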

The answer is: "When disaster strikes, the difference between local and outsourced IT staff becomes really critical and entails a huge competitive disadvantage for those organizations that weakened their internal IT staff." During disasters internal IT staff really matter, and the treatment of the company by its internal datacenter staff is completely different from the treatment of the same company by Google or Amazon, for which it is just another annoying customer. That brings us to the central problem with Carr's views: he discounts the IQ inherent in local IT staff. If this IQ falls below a certain threshold, the organization is really endangered in case of catastrophic events.

Moreover, it instantly opens such an enterprise to various forms of snake-oil salesmen and IT consultants peddling their wares. Software service providers are in no way altruists, and if they sense that you have become "IT challenged" and dependent on them, they will act accordingly.

In other words, an important side effect of dismantling the IT organization is that it instantly makes the company a donor in the hands of ruthless external suppliers and contractors. I saw such cases as side effects of outsourcing. Consultants (especially large consulting firms) can help, but they can also become part of the problem due to the problem of loyalty. We all know what happened to medicine when doctors were allowed to be bribed by pharmaceutical companies. This situation, aptly called "Viva Viagra", in which useless or outright dangerous drugs like Vioxx were allowed to become blockbusters, was fully replicated in IT: the independence of IT consultants is just a myth (and moreover, some commercial IDS/IPS and EMS systems in their destructive potential are not that different from Vioxx ;-).

Carr's recommendation that companies should be more concerned with IT risk mitigation than with IT strategy is complete baloney. He simply does not have any in-depth understanding of the very complex security issues involved in a large enterprise. Security cannot be achieved without a sound IT architecture and the participation of non-security IT staff. Sound architecture (which is the result of a proper "IT strategy") is more important than any amount of "risk mitigation" activity, which most commonly is a waste of money or, worse, entails direct harm to the organization (as SOX enthusiasts from the big accounting firms recently aptly demonstrated to the surprised corporate world).

I have touched on only the most obvious weaknesses of Carr's vision (or fallacy, to be exact). All in all, Carr proposed just another dangerous utopia and skillfully milked the controversy his initial HBR article generated in two subsequent books.