The Big Switch: Rewiring the World, from Edison to Google

Review by Dr. Nikolai Bezroukov

Save your money. This book contains nothing but an extended defense of the utopian vision of the IT future first published in Carr's HBR article. Limited understanding of the underlying IT technologies, haziness, and a lack of concrete, detailed examples (obscurantism) are the typical marks of Carr's style. Carr uses the focus on IT shortcomings as a smokescreen to propose a new utopia: users master complex IT packages and perform all the functions previously provided by IT staff, while "in the cloud" software service providers fill in the rest. This is pretty fine humor, a caricature that reminds me of the mainframe model, but not much more.

His analogies are extremely superficial and completely unconvincing (Google actually can benefit greatly from owning an electrical generation plant or two :-). The complexity of IT systems has no precedent in human history, which means that analogies with railways and the electrical grid are deeply and irrevocably flawed. They do not capture the key characteristics of IT technology: its unsurpassed complexity and Lego-like flexibility. IT has become the real nervous system of the modern organization, not its muscles or legs :-)

Carr's approach to IT is completely anti-historical. Promoting his "everything in the cloud" utopia as the most important transformation of IT ever, he forgot (or simply does not know) that IT has already experienced several dramatic transformations driven by new technologies that emerged in the 1960s, 1970s, and 1990s. Each of those transformations was more dramatic and important than the neo-mainframe revolution he tries to sell as the "bright future of IT" and a panacea for all IT ills. First, mainframes replaced "prehistoric" computers. Then minicomputers challenged mainframes (the "glass wall" datacenters), and the PC ended mainframe dominance (and democratized computing). In yet another transformation, the Internet and TCP/IP (including wireless) converted datacenters into their modern form. What Carr views as the next revolution is just a blip on the screen in comparison with those events, in each of which the technology inside the datacenter and on users' desks changed dramatically.

As for his "everything in the cloud" software service providers there are at least three competing technologies which might sideline it: application streaming, virtualization (especially virtual appliances), and "cloud in the box". "In the cloud" software services is just one of several emerging technical trends and jury is still out how much market share each of them can grab. Application streaming looks like direct and increasingly dangerous competitor for the "in the cloud" software services model. But all of them are rather complementary technologies with each having advantages in certain situations and none can be viewed as a universal solution.

The key advantage of application streaming is that you use local computing power to run the application, not a remote server. That removes the latency and bandwidth problems inherent in transmitting the video stream generated by a GUI running on a remote server (where the application executes) to the client. Also, modern laptops have tremendous computing power that is very expensive and not easy to match in a remote server farm. Once you launch the application on the client (from a shortcut), the remote server streams the necessary application files to your PC (much like streaming video or audio) and the application launches. This is done just once; after that the application works as if it were local. Also, only the required files are sent (so if you are launching Excel you do NOT get the libraries that are shared with MS Word if it is already installed).
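
A minimal sketch of the mechanism, assuming a hypothetical streaming server, manifest format, and cache layout (none of this reflects any particular vendor's protocol): the client fetches only the files it does not already have cached, then launches the executable locally.

    import os
    import subprocess
    import urllib.request

    # Hypothetical streaming server and local cache (names are assumptions for illustration only)
    SERVER = "http://appstream.example.com"
    CACHE = os.path.expanduser("~/.appcache")

    def stream_and_launch(app_name):
        # The manifest lists every file the application needs, one relative path per line;
        # by convention here (an assumption), the first entry is the executable to launch.
        manifest_url = "%s/%s/manifest.txt" % (SERVER, app_name)
        manifest = urllib.request.urlopen(manifest_url).read().decode().splitlines()

        for rel_path in manifest:
            local_path = os.path.join(CACHE, rel_path)
            if os.path.exists(local_path):
                continue  # already cached, e.g. a library shared with another streamed application
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            urllib.request.urlretrieve("%s/%s" % (SERVER, rel_path), local_path)

        # After this one-time transfer the application runs entirely on local hardware
        subprocess.call([os.path.join(CACHE, manifest[0])])

    stream_and_launch("excel")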

Virtualization promises more agile and more efficient local datacenters, and while it can be used by "in the cloud" providers (Amazon uses it), it can also undercut the "in the cloud" software services model in several ways. First of all, it permits packaging a set of key enterprise applications as "virtual appliances". The latter, like streamed applications, run locally, store data locally, are cheaper, have better response times, and are more maintainable. This looks to me like a more promising technical approach for complex sets of applications with intensive I/O requirements. For example, you can deliver a LAMP stack appliance (Linux, Apache, MySQL, PHP) and run it on a local server for your LAMP applications (for example a helpdesk), enjoying the same level of quality and sophistication of packaging and tuning as with remote software providers. But you do not depend on the WAN, as users connect to it over the LAN, which guarantees fast response times. And your data are stored locally (though, if you wish, they can be backed up remotely to Amazon or another remote storage provider).
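
As a minimal sketch, such a LAMP appliance could be described to a local Xen host with a guest configuration along the following lines (Xen domain config files use Python syntax; the image path, sizes, and bridge name here are assumptions for illustration, not a vendor recipe):

    # /etc/xen/lamp-appliance.cfg -- hypothetical Xen guest definition for a local LAMP appliance
    # (paths, memory size and bridge name are illustrative assumptions)
    name       = "lamp-appliance"
    memory     = 1024                      # MB of RAM given to the guest
    vcpus      = 2
    bootloader = "/usr/bin/pygrub"         # boot the kernel shipped inside the appliance image
    disk       = ['file:/var/lib/xen/images/lamp-appliance.img,xvda,w']
    vif        = ['bridge=xenbr0']         # attached to the local LAN, so response time does not depend on the WAN
    on_reboot  = 'restart'
    on_crash   = 'restart'

Started locally with something like "xm create lamp-appliance.cfg", the appliance keeps its data on the local disk image, which can still be backed up to a remote storage provider if desired.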

The other trend is the emergence of a higher level of standardization of datacenters (the "cloud in a box" or "datacenter in a box" trend). It permits cheap, prepackaged local datacenters to be installed everywhere. Among the examples of this trend are the standard shipping-container-based datacenters which are now sold by Sun and soon will be sold by Microsoft. They already contain typical services like DNS, mail, file sharing, etc., preconfigured. For a fixed cost an organization gets a set of servers capable of serving a mid-size branch or plant. In this case the organization can save money by avoiding monthly "per user" fees, the typical cost recovery model of software service providers. It can also be combined with the previous two models: it is easy to stream both applications and virtual appliances to the local datacenter from a central location. For a small organization such a datacenter can now be preconfigured on a couple of servers using Xen or VMware, plus the necessary routers and switches, and shipped in a small rack.

I would like to stress that the power and versatility of the modern laptop is a factor that should not be underestimated. It completely invalidates Carr's cloudy dream of users voluntarily switching to the network terminal model inherent in centralized software services (BTW, mainframe terminals and, especially, "glass wall" datacenters were passionately hated by users). Remotely running applications have mass appeal only in very limited cases (webmail). I think that users will fight tooth and nail to preserve the level of autonomy provided by modern laptops. Moreover, in no way will users agree to the sub-standard response times and limited feature set of "in the cloud" applications, as the problems with Google Apps adoption demonstrated.

Google Apps is an interesting project which is now used in many small organizations instead of their own mail and calendar infrastructure, and it can serve as a litmus test for the difficulties of replacing "installed" applications with "in the cloud" applications. First of all, if we are talking about replacing OpenOffice or Microsoft Office, the functionality is really, really limited. At the same time Google has spent a lot of money and effort creating it but never got any significant traction and/or a sizable return on investment. After several years of existence this product has not even come close to the functionality of OpenOffice. To increase penetration, Google recently started licensing it to Salesforce and other firms. That suggests the whole idea might be flawed: even such an extremely powerful organization as Google, with its highly qualified staff and the huge server power of its datacenters, cannot create an application suite that can compete with applications preinstalled on a laptop, which means it cannot compete with the convenience and speed of running applications locally on a modern laptop.

In the case of corporate editions, price is also an issue, and Google Apps, in comparison with Office Professional ($50 per user per year vs. $220 for Microsoft Office Professional), does not look like a bargain if we assume a five-to-seven-year life span for MS Office. The same situation exists for home users: price-wise, Microsoft Office can now be classified as shareware (Microsoft Office Home and Student 2007, which includes Excel, PowerPoint, Word, and OneNote, costs ~$100, or ~$25 per application). So for home users Google needs to provide Google Apps for free, which, taking into account the amount of design effort and the complexity of achieving compatibility, is not a very good way of investing available cash. Please note that Microsoft can at any time add the ability to stream Office applications to laptops and put "in the cloud" Office-alternative software service providers in a really difficult position: remote servers would need to provide the same quality of interface and the same amount of computing power per user as the user enjoys on a modern laptop. That also suggests the existence of some principal limitations of the "in the cloud" approach for this particular application domain. And this is not a unique case: SAP has problems moving SAP R/3 to the cloud too and recently decided to scale back its efforts in this direction.
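
A back-of-the-envelope comparison of the two pricing models, using only the figures quoted above (the life spans are the assumption stated in the text, not market data):

    # Rough per-seat cost comparison over the assumed life span of an Office license
    google_apps_per_year = 50        # $ per user per year (corporate Google Apps, figure quoted above)
    office_pro_one_time  = 220       # $ per user, one-time (Microsoft Office Professional)

    for years in (5, 6, 7):
        subscription_total = google_apps_per_year * years
        print("%d years: Google Apps $%d vs Office Professional $%d"
              % (years, subscription_total, office_pro_one_time))

    # 5 years: Google Apps $250 vs Office Professional $220
    # 6 years: Google Apps $300 vs Office Professional $220
    # 7 years: Google Apps $350 vs Office Professional $220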

All in all, the computing power of a modern dual-core 2-3 GHz laptop with 2-4 GB of memory and a 100-200 GB hard drive represents a serious challenge for "in the cloud" software service providers. This power makes it difficult for them to attract individual users' money outside advertising-based or other indirect models. It is even more difficult for them to "shake corporate money loose": corporate users value the independence of applications installed locally on a laptop and the ability to store data locally. Not everybody wants to share their latest business plans with Google.

Therefore Carr's 2003 vision looks even less realistic in 2008 than it did five years earlier. Since during those five years datacenters actually continued to grow, Carr's value as a tech-trends forecaster is open to review.

Another problem with Carr's central "software service provider" vision (aka the neo-mainframe vision) is its propaganda of "bandwidth communism". Good WAN connectivity is far from free. As the experience of any university datacenter convincingly demonstrates, a dozen P2P enthusiasts in the neighborhood can prove the futility of dreams about free, high-quality WAN connectivity to any skeptic. In other words, this is a typical "tragedy of the commons" problem and should be analyzed as such.

Viewed from this angle, Carr's vision of reliable and free 24x7 communication with remote datacenters is unrealistic. This shortcoming can be compensated for by the properties of some protocols (for example SMTP mail), and for such protocols it is not a problem, but for others it is and always will be. At the same time, buying dedicated WAN links can be extremely expensive: for mid-size companies it is usually as expensive as keeping everything in house. That makes the "in the cloud" approach problematic for any service where disruptions or low bandwidth at certain times of the day can lead to substantial monetary losses. Bandwidth is also limited: for example, OC-1 and OC-3 lines have upper limits of 51.84 Mbit/s and 155.52 Mbit/s respectively. And even within an organization, not all bandwidth is used for business purposes. In a large organization there are always many "entertainment-oriented" users who strain the firm's connection to the Internet cloud.
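
A rough illustration of how quickly such links saturate; the per-session figure and the share of the link left for business traffic are assumptions made purely for the estimate, not measurements:

    # How many concurrent remote-application sessions fit on a dedicated line?
    oc1_mbps = 51.84          # OC-1 capacity
    oc3_mbps = 155.52         # OC-3 capacity
    session_mbps = 0.5        # assumed average bandwidth of one remote GUI session (illustrative)
    business_share = 0.6      # assumed fraction of the link left over for business traffic

    for name, capacity in (("OC-1", oc1_mbps), ("OC-3", oc3_mbps)):
        usable = capacity * business_share
        print("%s: roughly %d concurrent sessions" % (name, int(usable / session_mbps)))

    # OC-1: roughly 62 concurrent sessions
    # OC-3: roughly 186 concurrent sessions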

Another relevant question to ask is: "What are the financial benefits to a large organization of implementing Carr's vision?" I do not see any substantial financial gains. IT costs in large enterprises are already minimized (often 1-3% of total costs), and further minimization does not bring much benefit (what can you save from just 1% of total costs? but you can lose a lot). Are fraction-of-a-percent savings worth the risks of outsourcing your own nervous system? That translates into the question: "What are the principal differences in the behavior of these two IT models during catastrophic events?" The answer is: "When disaster strikes, the difference between local and outsourced IT staff becomes really critical and entails a huge competitive disadvantage for those organizations that weakened their internal IT staff."
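
To put the potential savings in perspective (the 30% cut is an assumed, deliberately optimistic best case, used only to make the point):

    # Potential savings from outsourcing IT, expressed as a share of total enterprise costs
    it_share_of_total = 0.02     # IT budget assumed at 2% of total costs (quoted range is 1-3%)
    assumed_it_cut    = 0.30     # assume outsourcing trims the IT budget by an optimistic 30%

    savings_share = it_share_of_total * assumed_it_cut
    print("Best-case savings: %.1f%% of total enterprise costs" % (savings_share * 100))
    # Best-case savings: 0.6% of total enterprise costs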

That brings us to another problem with Carr's views: he discounts the IQ inherent in local IT staff. If this IQ falls below a certain threshold, that not only endangers an organization in the case of catastrophic events but instantly opens such an enterprise to various forms of snake-oil salesmen and IT consultants peddling their wares. Also, software service providers are not altruists, and if they sense that you are really dependent on them or have become "IT challenged", they will act accordingly.

In other words, an important side effect of dismantling an IT organization is that it instantly makes a company a donor in the hands of ruthless external suppliers and contractors. Consultants (especially large consulting firms) can help, but they can also become part of the problem because of divided loyalty. We all know what happened to medicine when doctors were allowed to be bribed by pharmaceutical companies. That situation, aptly called "Viva Viagra", in which useless or outright dangerous drugs like Vioxx were allowed to become blockbusters, has been fully replicated in IT: the independence of IT consultants is just a myth (and moreover, some commercial IDS/IPS and EMS systems are, in their destructive potential, not that different from Vioxx ;-).

Carr's recommendation that companies should be more concerned with IT risk mitigation than IT strategy is complete baloney. He simply does not have any in-depth understanding of the very complex security issues involved in a large enterprise. Security cannot be achieved without a sound IT architecture and the participation of non-security IT staff. Sound architecture (which is the result of a proper "IT strategy") is more important than any amount of "risk mitigation" activity, which is most commonly a simple waste of money or, worse, does direct harm to the organization (as the SOX enthusiasts from the big accounting firms recently and aptly demonstrated to a surprised corporate world).

I have touched on only the most obvious weaknesses of Carr's vision (or fallacy, to be exact). All in all, Carr proposed just another dangerous utopia and skillfully milked the controversy his initial HBR article generated in his two subsequent books.

Other Notable Reviews

4.0 out of 5 stars Very Worthwhile, One Major Flaw, March 3, 2008
By Robert D. Steele (Oakton, VA United States)
This is a very worthwhile, easy-to-absorb book. The author is thoughtful and well-spoken, with good notes and currency as of 2007.

The one major flaw in the book is the uncritical comparison of cloud computing with electricity as a utility. That analogy fails when one recognizes that the current electrical system wastes 50% of the power going downstream, and has become so unreliable that NSA, among others, is building its own private electrical power plant--with a nuclear core, one wonders? While the author is fully aware of the dangers to privacy and liberty, and below I recap a few of his excellent points, he disappoints in not recognizing that localized resilience and human scale are the core of humanity and community. What we really need right now, which John Chambers strangely does not appear willing to offer, is a solar-powered server-router that gives every individual Application Oriented Network control at the point of creation (along with anonymous banking and Grug distributed search), while also creating local pods that can operate independently of the cloud and block Google's perverted new programmable search, where what you see is not what is in your best interests, but rather what the highest bidder paid to force into your view.

The author cites one source as saying that Google computation can do a task at one tenth of the cost. To learn more, find my review, "Google 2.0: The Calculating Predator" and follow the bread crumbs.

The author touches on software as a service, and I am reminded of the IBM interest in "Services Science." He has a high regard for Amazon Web Services, as do I, and I was fascinated by his suggestion that Amazon differs from Google in that Amazon does virtualization while Google does task optimization (with computational mathematics). I am not sure that is accurate; Google can flip a bit tomorrow and put bankers, entertainers, data service providers, and publishers out of business.

I completely enjoyed the discussion of the impact of electrification and the rise of the middle class, of the migration from the World Wide Web to the World Wide Computer, and of the emergence of a gift economy.

The author also touches on the erosion of the middle class, citing Jagdish Bhagwati and Ben Bernanke as saying that it is the Internet rather than globalization that is hurting the middle class (globalization moved the low-cost jobs, the Internet moved the highly-educated jobs).

I was shocked to learn that Google can listen to my background sound via the microphone, meaning that Google is running the equivalent of a warrantless audio penetration of my office. "Do No Evil?" This is very troubling.

Page 161: "A company run by mathematicians and engineers, Google seemsx oblivious to the possible social costs of transparent personalization." Well said. The only thing more shocking to me is the utter complacency of the top management at Amazon, IBM, Oracle, and Microsoft. Search for the article by Steve Arnold, the world's foremost non-Google expert on Google, look for <Google Pressure Wave: Do the Big Boys Feel It?>.

The author touches on Internet utility to terrorists, and our military's vulnerability, but he does not get as deeply into this as he could have. The fact is the Chinese can take out our telecommunications satellites anytime they want, and they are not only hacking into our computers via the Internet, they also appear to have perfected accessing "stand-alone" computers via the electrical connection. See <Chinese Irregular Warfare oss.net>.

The portion of the book I most appreciated was the author's discussion of lost privacy and individuality. He says "Computer systems are not at their core technologies of emancipation. They are technologies of control." He goes on to point out that even a decentralized cloud network can be programmed to monitor and control, and that is precisely where Google is going, monitoring employees and manipulating consumers.

He touches on the semantic web but misses Internet Economy Meta Language (Pierre Levy) and the Open Hyperdocument System (Doug Engelbart).

He credits the Google founders with wanting to get to all information in all languages all the time, and I agree that their motives are largely worthy, but they are out of control--a supranational entity with zero oversight. I can easily envision the day coming when, in addition to 27 secessionist movements across the USA, we will see hundreds of virtual secessions in which communities choose to define trusted computing as localized computing.

The book ends beautifully, by saying we will not know where IT is going until our children, the first generation to be wired from day one, become adults.

A few other books I recommend:
Fog Facts: Searching for Truth in the Land of Spin
The Age of Missing Information
The Landscape of History: How Historians Map the Past
Weapons of Mass Deception: The Uses of Propaganda in Bush's War on Iraq
Lost History: Contras, Cocaine, the Press & 'Project Truth'
The Tao of Democracy: Using Co-Intelligence to Create a World That Works for All
Society's Breakthrough!: Releasing Essential Wisdom and Virtue in All the People
All Rise: Somebodies, Nobodies, and the Politics of Dignity (BK Currents)
Escaping the Matrix: How We the People can change the world
Collective Intelligence: Creating a Prosperous World at Peace

1.0 out of 5 stars The History of Power Generation, February 13, 2008

By Christian Claborne (San Diego, CA USA)
Having read the book, I find that the summation or description provided by Amazon above captures the core of the author's message. The first part of the book drags you through the beginnings of the electric power generation industry and how it grew and developed into what we have today. The author then uses this as an analogy to support his view that "utility computing" will replace the corporate datacenters we have today. This long history wasn't necessary for the point to be delivered.

One thing that is frequently skipped over is hardware as a service: its implementation and the role it plays in the growth and success of SaaS.

The author touched on some of the social and business impacts he sees and the impact that it has had on anyone that creates content that can be digitized. The rest of the book covered various observations about the impact of the Internet on society and business that can be found in just about any other Web 2.0 book out there.

This book continues the popular trend of stretching a magazine article that touches on the epicenter of Internet 2.0 into a full book. "Everything is Miscellaneous" is another example. My opinion is that this book should have stayed a magazine article. I don't recommend it unless it's the only Web 2.0 book you read.

3.0 out of 5 stars "Pancake People" and the Darker Side of the Net, February 19, 2008

By Trevor Cross "persepolis" (Hingham, MA United States)
The best part of the book details the dark side of the Internet. For example, the work of Brynjolfsson and Van Alstyne in determining the balkanizing effect of the Internet on social norms is mentioned. Anyone trying to understand social networking (Facebook, Myspace) should be familiar with their work. It is sobering to realize that despite all of the hype, the Internet is in fact making us more isolated in our opinions and attitudes. Carr highlights this area well but I wish he went into even more detail.

"Pancake People" refers to Richard Foreman's description about people on the Internet being a mile wide and an inch deep. Carr describes how the technology behind the Internet (filters, etc) actually compounds this problem.

One of the author's best insights comes when he takes issue with the whole concept behind AI (artificial intelligence). He states that instead of computers becoming more human-like in their thinking, it is we who could become more computer-like in our thinking. As a humanist who grew up loving technology, I find this scenario frightening because it hits close to home. The comments (included in the book) from the co-founders of Google about creating a brain-computer interface reminded me of the "Borg" from Star Trek. For those interested, the Borg were a commentary on the communistic, totalitarian effects of unfettered technology (nanobots, brain/computer interfacing).

3.0 out of 5 stars A pretty good book, with some serious flaws, February 2, 2008

By Peter D. Tillman (Santa Fe, NM USA)
This is a pretty good book, but by turns interesting and annoying. Carr sketches the history of the rise of the big electric utilities in the early 20th century, then predicts that "utility computing" will similarly displace in-house corporate IT facilities in the early 21st century, just as companies stopped generating their own electricity way back then.

The historical review is nicely done -- I learned, for instance, that General Electric was once Edison General Electric -- and Carr is on to the reason why companies adopt new technology: it's cheaper, more convenient and/or the competition has already adopted it. The annoyances start when he starts prognosticating. As Yogi Berra once observed, "the trouble with predicting the future is that it is very hard." It looks like Carr read everyone else's Internet/computing predictions, mixed them up a bit, and regurgitated.

OK, I'm being a bit hard on him. Where Carr knows something about an industry -- publishing, for instance -- he has some sharp observations on the migration of newspapers online, and the consequent unbundling of the paper package you buy at the corner for a dollar. For other stuff, he's so scattershot that you'd be better off reading some of the original critics and prophets -- Carr has nothing new to add, and ends up confusing the reader (and probably himself).

So: read the history, the economics, and the publishing stuff, and skim or skip the rest -- that's my advice.

Happy reading--
Peter D. Tillman

3.0 out of 5 stars The Big Switch in Many Ways, December 29, 2007

By M. McDonald (Chicago, IL United States)
Nicholas Carr's latest book, The Big Switch, is not the book that many would expect; in fact it's better. Carr, who made his fame by asserting that IT doesn't matter and then asking the question "Does IT matter?", deals with this subject for about 10% of the book. The remainder concentrates on Carr looking forward to business, society, politics and the world we are creating. It's a welcome switch, as it enables Carr to discuss broader issues rather than hammering on a narrow point.

The net score of three stars is based on the following logic. This book gets four stars as it is a good anthological review of broader issues that have been in the marketplace for some time. It loses one star because that is all it is: a discussion. Without analysis, ideas, alternatives or business applications, the book discusses rather than raises issues for the future.

Ostensibly the big switch is from today's corporate computing, with its islands of individual automation, to what Carr calls the world wide computer - basically the programmable internet. Carr's attempt to coin a new phrase - world wide computer - is one of the things that does not work in this book. It feels contrived, and while the internet is undergoing fundamental change, the attempt at rebranding is an unnecessary distraction.

Overall, this is a good book and should be considered part of the future-of-economics-and-business genre rather than a discussion of IT or technology. Carr is an editor at heart, and that shows through in this book. 80% of the book is reviews and discussions of the works of other people. I counted at least 30 other books and authors that I have read and that Carr uses to support his basic argument.

The book's primary weakness is in its lack of attention to business issues, strategies and business recommendations. As an editor, it's understandable that Carr would not know first hand how to run a company. But I would have expected a more balanced analysis of the issues. Carr almost exclusively talks with companies that are vendors of this new solution - the supply side. He is a booster for Google - not a bad thing in itself - but something that leaves the book unbalanced. Without case examples, a discussion of business decisions, and alternatives - the book is too general to be something to organize my company's future around.

As an anthology about technology's influence on the future it's pretty good. The book does not deliver on groundbreaking new ideas that will drive strategy - particularly not for people who have followed the development of the internet. If you have read Gilder, Negroponte, Davenport and Harris, Peters, Lewis, Tapscott, among others, then you will recognize many of the ideas in this book.

Carr's book is in fact a prime example of the future world he describes where individuals garner attention, form a social group and then extract value from that group. Carr garnered attention with IT Doesn't Matter, used that to polarize the business community into IT supporters and detractors - creating even more attention, and finally extracting value from the group in the form of speaking engagements and this book. So Carr has made the big switch and it is from traditional media to a new attention driven economy. (Read Davenport and Beck's book Attention Economy if you want to understand more)


Chapter by Chapter Review

The book is divided into two parts. The first uses historical analysis to build the idea that the Internet is following the same developmental path as electric power did 100 years ago. This idea is one of Carr's obsessions and is featured throughout his writing. The second section discusses the economic, social and other issues associated with the Internet becoming the platform and marketplace for commerce.

Chapter 1: Burden's Wheel lays out Carr's overall argument from an academic perspective. It starts with the historical position of water power, the precursor to electricity, and then explains conceptually what these different technologies mean. This is a clear statement and one that is important to the book. Carr points out the unique economic impact of general purpose technologies - the few technologies that are the basis for a multitude of other economic activity.

Chapter 2: The Inventor and His Clerk is a historical account of the early days of electricity. Well researched, this chapter is better reading for the business history buff than for someone looking to understand the arguments Carr is making. The chapter focuses largely on the development and adoption of electric power. It points out that electric power had some false starts, such as Edison's insistence on local DC plants, and that it needed the development of some additional technologies to take off. As an analogy to computing and the internet, these examples fit very neatly - almost too neatly - into Carr's argument.

Chapter 3: Digital Millwork discusses the recent history of the computer. This is intended to give the reader the opportunity to connect the history of electricity at the turn of the 20th century with the development of computing at the turn of the 21st century. It works up to a point. Straight comparisons between client/server computing and DC power generation, among others, are partially accurate but incomplete. Carr sees bandwidth as the savior of computing, much in the same way that the dynamo and Tesla's AC power turned electric plants into regional power companies.

This chapter communicates Carr's basic complaint with current information technology - at least in this book. His complaint, on pages 56 and 57, is that IT costs too much for what it delivers. Later he talks about excess capacity in servers and computing capacity. This basic cost-economics argument does not take into account the value generated by the existence of the applications that run on those servers, or the fact that at the time business leaders, like their grandfathers before them, did not have another choice.

Chapter 4: Goodbye, Mr. Gates holds his explanation of the future world - a future of virtual computing where physical location, and therefore device-based software licensing, no longer exists. In this chapter Mr. Carr is late to the game. Grid computing has been a developing factor for more than 10 years and will accelerate as this book popularizes the idea. The comments in this chapter are not particularly new for the technology-aware, but they are almost unabashedly positive in favor of Google, something that will continue for the rest of the book.

Chapter 5: The White City turns away from a continued development of the technical ideas of virtualization and grid computing and moves back into a historical discussion of how electricity changed people's lives and societies. Again Carr is providing information to set the reader up to make a comparison to what the switch to the Internet might be. His discussion of Insull and Ford is interesting, if brief.

Part Two of the book takes a curious turn as Carr finishes his arguments about the programmable internet and then seeks to systematically undermine the value of the very environment on which he says the future is based. He offers few ideas or solutions, just criticism, or more appropriately the criticism of others.

Chapter 6: World Wide Computer returns to the notion of what the unbridled possibilities of the programmable internet might be. This chapter concentrates on how wonderful this world will be for the individual, with infinite information and computing power available to them. Carr offers a Ford Mustang enthusiast's ability to create their own multi-media blog/website/advertising site as an example of how wonderful the world will be. This is the utopian chapter where we all can benefit; Carr will destroy most of those notions in later chapters.

Here is where Carr discusses the future of corporate computing, giving the topic all of four paragraphs (pp. 117-118). The basic idea is that today's IT will fade away in the face of "business units and individuals who will be able to control the processing of information directly." For IT people, this is the end-user computing argument. This is also the last word he has on the subject of IT in the book.

Chapter 7: From Many to the Few is a discussion of the social impacts of a programmable internet where everyone runs their own personal business. Think Tom Peters and the personal brand. This is the best chapter of the book and the most unusual: Carr sets out to systematically point out the negative consequences of the assertions he made in the previous chapters. Here he talks about the fact that fewer and fewer people will need to work in a global world of the programmable internet, and that the utopia of equality and cottage industries envisioned for the web will not come to pass.

Chapter 8: The Great Unbundling talks about the move from mass markets to markets of one. The chapter also talks about the social implications of a web that connects like people, creating a tribal and increasingly multi-polar world rather than the worldwide consciousness assumed to arise when education and communication levels increase.

Chapter 9: Fighting the Net discusses the weaknesses and vulnerabilities of free-flowing information and the structural integrity of the net. This chapter again tears away at the foundation of the future that Carr lays out earlier. Normally in a book there would be public policy recommendations to address these points. They are not here, giving this chapter the feeling of journalism rather than analysis and insight.

Chapter 10: A Spider's Web addresses the personal privacy issues associated with the web and the realization that, as Richard Hunter says, "we live in a world without secrets". This chapter is a warning about the issues of privacy and what it means to do business where everything is recorded and tracked.

Chapter 11: iGod is the far-out chapter, talking about the fusion of human and machine consciousness. What is possible when the human brain can immediately access infinite information and the machine gains artificial intelligence? These are the questions raised but left unaddressed in this chapter. Possibly setting up his next book, Carr provides a journalistic survey of the work being done to bring man and machine together.