
Webliography of problems with "pure" cloud environment


Introduction

From a historical standpoint, "pure cloud" providers represent a return to the mainframe era on a new technological level. Cloud providers are the new "glass data centers" and, due to overcentralization, will soon be hated as much if not more.  The bottleneck is WAN links, and they are costly both in public and private clouds.

An interesting technology now developing as a part of "hybrid cloud" is so-called "edge computing".  Of course, the centralization train is still running at full speed, but in two to three years people might start asking the question "Why does my application freeze so often?"

There is a synergy between "edge computing" and "grid computing": the former can be viewed as grid computing at the edges of the network, with a central management node (the headnode in "HPC speak") running software like Puppet, Chef or Ansible instead of an HPC scheduler.

As the WAN bandwidth available at night is several times higher than during the day, one important task that can be performed by "edge computing" is the "time shift" of activities: delaying some transmissions and synchronization of data to night time. UDP-based tools allow transferring large amounts of data at night by saturating the WAN bandwidth, but even TCP can be usable at night on high-latency links if you can split the data stream into several sub-streams.  I reached speeds comparable to UDP when I was able to split the data stream into 8-16 sub-streams.  Such "time shift" schemes do not require large investments.  All "edge" resources should use unified images for VMs and a unified patching scheme.
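
As a minimal sketch of the sub-stream idea (host names, paths and the stream count below are placeholder assumptions, not a recommendation), the following script partitions a staging directory into several groups and pushes each group over its own rsync connection; scheduling it from cron inside the night window gives you the "time shift" for free:

```python
#!/usr/bin/env python3
# Hypothetical sketch: push a night-time sync over several parallel rsync
# sub-streams to better fill a high-latency WAN link.
# SRC_DIR, DEST and STREAMS are placeholder assumptions.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SRC_DIR = Path("/data/outgoing")             # local staging area (assumption)
DEST = "backup@remote-site:/data/incoming/"  # remote autonomous server (assumption)
STREAMS = 8                                  # 8-16 sub-streams worked well per the text above

def sync_subset(subset):
    """Run one rsync process over its own subset of top-level entries."""
    cmd = ["rsync", "-a", "--partial", "--compress"] + [str(p) for p in subset] + [DEST]
    return subprocess.call(cmd)

# Partition top-level entries round-robin into STREAMS groups.
entries = sorted(SRC_DIR.iterdir())
groups = [g for g in (entries[i::STREAMS] for i in range(STREAMS)) if g]

with ThreadPoolExecutor(max_workers=STREAMS) as pool:
    failures = [rc for rc in pool.map(sync_subset, groups) if rc != 0]

print("sub-streams finished; failed streams:", len(failures))
```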

I would define "hybrid cloud" as a mixture of private cloud, public cloud and "edge" computing, with the emphasis on optimizing consumed WAN bandwidth and offloading processes that consume WAN bandwidth during the day (one perverse example of WAN bandwidth usage is Microsoft OneDrive synchronization of user data) to the edge of the network via installation of "autonomous datacenters", which can consist of just a single remotely controlled server.  No local personnel is needed, so this is still a "cloud-style organization", as all the staff is at a central location.  For the specific task of shifting synchronization of user data you need just one server with storage per site.

From Wikipedia ("edge computing"):

Edge computing is a distributed computing paradigm in which computation is largely or completely performed on distributed device nodes known as smart devices or edge devices as opposed to primarily taking place in a centralized cloud environment. The eponymous "edge" refers to the geographic distribution of computing nodes in the network as Internet of Things devices, which are at the "edge" of an enterprise, metropolitan or other network. The motivation is to provide server resources, data analysis and artificial intelligence ("ambient intelligence") closer to data collection sources and cyber-physical systems such as smart sensors and actuators.[1] Edge computing is seen as important in the realization of physical computing, smart cities, ubiquitous computing and the Internet of Things.

Edge computing is about centrally (remotely) managed autonomous mini datacenters. As in the classic cloud environment, staff is centralized and all computers are centrally managed. But they are located on sites, not in a central datacenter, and that alleviates the WAN bottleneck, for certain applications eliminating it completely. Some hardware producers already have products designed for such use cases.

... ... ...

Edge computing and grid computing are related.

IBM's acquisition of Red Hat is partially designed to capture large enterprises' interest in the "private cloud" space, where the virtual machines that constitute the cloud run on local premises using OS-level virtualization (Solaris x86 Zones, Docker containers).  This way one has the flexibility of the "classic" (off-site) cloud without prohibitive cost overruns. On Azure a minimal server (2-core CPU, 32GB of disk space and 8GB of RAM) with minimal, experimental usage costs about $100 a month.  You can lease a $3000 Dell server with a full hardware warranty for three years for $98.19 a month.  With a promotion you can even get free delivery and installation for this amount of money, and free removal of the hardware after three years of usage.  So, as with the cloud, you do not touch the hardware, but you do not pay extra to Microsoft for this privilege either (you do have air-conditioning and electricity costs, though). And such an "autonomous, remotely controlled, mini-datacenter" can be installed anywhere Dell (or HP) provides services.

There are now several hardware offerings for edge computing. See for example:

Unfortunately, the centralization drive still rules. But having some "autonomous units" on the edge of the network is an attractive idea that has a future. First of all, because it allows cutting the required WAN bandwidth. Second, if implemented as small units with full remote management ("autonomous"), it allows avoiding many problems with the "classic cloud" WAN bottleneck as well as problems typical for corporate datacenters.

Sooner or later the age of the centralized cloud will come to its logical end.  Something will replace it. As soon as top management realizes how much they are paying for WAN bandwidth there will be a backlash. I hope that hybrid cloud might become a viable technical policy in a two-to-three-year timeframe.

And older folks remember quite well how much IBM was hated in the 1960s and 1970s (and this despite the excellent compilers it provided and the innovative VM/370 OS, which pioneered virtual machines; its CMS component later used REXX as its scripting shell) and how much energy and money people spent trying to free themselves from this particular "central provider of services" model: the "glass datacenter".  It is not unfeasible that cloud providers will repeat a similar path on a new level. Their strong point, delivering centralized services for mobile users, is also under scrutiny, as the NSA and other intelligence agencies have free access to all user emails.

Already, in many large enterprises the WAN constitutes a bottleneck, with Office 365 applications periodically freezing, and the quality of service deteriorates to the level of user discontent, if not revolt.

People younger than 40 probably can't understand to what extent the rapid adoption of the IBM PC was driven by the desire to send a particular "central service provider" to hell. That historical fact raises a legitimate question about user resistance to the "public cloud" model. Now security considerations and widespread NSA interception of traffic, geo-tracking of mobile phones, and unlimited, unrestricted access to major cloud providers' webmail have also become hotly debated issues.

 

Eight classic fallacies of cloud model

They were originated by Peter Deutsch, and his "eight classic fallacies" can be summarized as follows (a small illustrative sketch follows the list):

  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn't change.
  6. There is one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.
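
To make the first two fallacies concrete, here is a minimal sketch (the URL, timeout and retry counts are assumptions for illustration) of what LAN-bred code usually omits and what WAN-facing code is forced to add: an explicit timeout and retries around every remote call.

```python
#!/usr/bin/env python3
# Minimal sketch: a remote call wrapper that refuses to pretend the network
# is reliable (fallacy 1) or that latency is zero (fallacy 2).
import time
import urllib.error
import urllib.request

def fetch_with_retries(url, timeout=5.0, retries=3, backoff=2.0):
    """Fetch a URL with an explicit timeout and exponential backoff between retries."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == retries:
                raise                       # give up; the caller must handle failure
            time.sleep(backoff ** attempt)  # wait longer before each retry

# Example with a placeholder URL:
# data = fetch_with_retries("https://example.com/status")
```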

Giants of cloud computing such as Amazon, Google, and Yahoo push the idea of "pure", centralized, "cloud-based" services to the extreme. They are essentially trying to replicate the mainframe environment on a new level, replicating all the shortcomings of such an approach. Many companies in industries such as aerospace, financial services, chemicals and pharmaceuticals are likely to avoid the "pure" cloud computing model because of their trade secrets, patented formulas and processes, as well as compliance concerns. Also, large companies usually run multiple datacenters, and it is natural for them to be integrated into a private "gated" cloud.  For them "hybrid clouds" are already a reality, and have been for a long time.  No rocket science here.

A complementary trend of creating small remote autonomous datacenters controlled and managed from a well-equipped central command-and-control center is now becoming more fashionable due to the DevOps hoopla. Also, server remote control tools have now reached the stage where you can manage servers hundreds or thousands of miles from your office as well as if you could walk into the server room and touch them. Some differences remain, but the power of remote control tools and configuration management for remote servers is amazing.  The cluster model of running services is also becoming more popular, and cluster management tools such as Bright Cluster Manager allow managing dozens of similar machines from a single image.

Having a central control center instead of a centralized server farm is a more flexible approach, as you can provide computing power directly where it is needed at a cost much lower than that of a typical large server hosting provider.  A distributed scheme also requires much less WAN bandwidth and is more reliable: in case of problems with WAN connectivity, or a local hurricane that cuts the electricity supply, local services remain functional. The level of automation available has recently increased further due to the proliferation of Unix configuration management tools of various complexity and quality.  Puppet is now a pretty well-known name, if not a widely used system.  Add to this advances in hardware (DRAC 7, blades) and software (virtual software appliances) which now make it possible to run "autonomous servers". By an autonomous server we mean a server installed in a location where there is no IT staff. There can be local staff knowledgeable at the level of a typical PC user, but that's it. Such servers can be managed centrally from a specialized location (aka control center) which retains highly qualified staff and 24x7 operators. Such a location can even be on a different continent.  This is a model used by some outsourcers, such as HPE.
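
As a toy illustration of the "central control center" idea (hostnames and the command are placeholders, passwordless ssh keys are assumed to be already distributed, and a real deployment would use Puppet, Chef or Ansible rather than a hand-rolled script), the sketch below runs the same health check on a whole fleet of remote autonomous servers in parallel:

```python
#!/usr/bin/env python3
# Toy sketch: run one check across a fleet of remote autonomous servers
# from a central control center over ssh. Hostnames are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

FLEET = ["edge-site1.example.com", "edge-site2.example.com", "edge-site3.example.com"]
COMMAND = "uptime && df -h /"   # placeholder health check

def run_remote(host):
    """Run COMMAND on one host; return (host, exit code, output)."""
    proc = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=10", host, COMMAND],
        capture_output=True, text=True)
    return host, proc.returncode, (proc.stdout or proc.stderr).strip()

with ThreadPoolExecutor(max_workers=len(FLEET)) as pool:
    for host, rc, output in pool.map(run_remote, FLEET):
        status = "OK" if rc == 0 else f"FAILED (rc={rc})"
        print(f"{host}: {status}\n{output}\n")
```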

Advantages of hybrid cloud model

This type of infrastructure is pretty common for large chemical companies, banks (with multiple branches) and many other companies with distributed infrastructure. Details of implementation of course differ. Here we will concentrate on common features rather than differences. The key advantage of local autonomous servers (and by extension small local autonomous datacenters) is that local services and local traffic remain local. The latter provides several important advantages. Among them:

The attraction of the hybrid cloud environment is directly connected with advances in modern hardware and software:

Problems with "pure" cloud computing

Cloud computing in its "pure" form (100% of functionality in a remote datacenter) is a centralized, mainframe-style solution and as such suffers from all the limitations inherent in centralization.  Costs are high, but it is attractive to higher IT management as a "stealth" outsourcing solution. Like any outsourced solution, it suffers from the loyalty problem.  Cloud computing as a technology is already more than ten years old, and by now we know quite a bit about its limitations:

It is important to understand that this technology can be implemented in many flavors: it can be a pure play (everything done on the servers of the cloud provider, for example Amazon or Google) or a mixed approach with a significant part of the processing done at the client (a more common and more practical approach). In no way does the "in the cloud" software services model presuppose the simplistic variant in which everything must be done on the server while, for some strange reason, a rich, powerful client is considered not kosher. That simplistic variant also ignores the low cost of modern servers and their excellent remote administration capabilities.

Software as a Service (SaaS) also helps to make this environment more flexible.  SaaS is a model of software deployment where an application is hosted on a virtual instance of an OS that can run on multiple servers and be migrated from one server to another, including, if necessary, to user laptops.  This way you essentially create virtual software appliances.  By eliminating the need to install and run the application on the customer's own computer, SaaS alleviates the customer's burden of software maintenance and support.  New "groupware" technologies (blogs, wikis, web conferencing, etc.) can be hosted on the local network cheaper and more reliably than on Google or Amazon, and they enjoy tremendously better bandwidth due to the use of the local network (which is now often 10Gbit/sec) instead of the much slower (and, in the case of private networks, more expensive) WAN.   Web hosting providers emerged as a strong, growing industry that essentially pioneered the commercialization of the SaaS model and convincingly proved its commercial viability. As Kishore Swaminathan aptly noted in his response to Alastair McAulay's article:

.... as presently constituted, cloud computing may not be a panacea for every organization. The hardware, software and desktop clouds are mature enough for early adopters. Amazon, for example, can already point to more than 10 billion objects on its storage cloud; Salesforce.com generated more than $740 million in revenues for its 2008 fiscal year; and Google includes General Electric and Procter & Gamble among the users of its desktop cloud.

However, several issues must still be addressed and these involve three critical matters: where data resides, the security of software and data against malicious attacks, and performance requirements.

The Hillary Clinton private email server scandal and the Podesta email hack provide an additional interesting view of the problems of cloud computing.

Actually, the fact that, for example, General Electric and Procter & Gamble use Google raises strong suspicion about the quality of top IT management in those organizations. IMHO this is too risky a gamble for any competent IT architect.  For a large company, IT costs are already reduced to around 1% or less, so there are no big savings in going further in the cost-cutting direction. But there are definitely huge risks, as at some point the quantity of cost cutting turns into a real quality-of-service issue.

I would argue that the "in the cloud" paradise looks more like the software demo from the popular anecdote about the difference between paradise and hell ;-). It turns out that for the last five years there have been several competing technologies, such as the use of virtual appliances and autonomous datacenters, or as they are sometimes called, "datacenter in a box".

Usage of a local network eliminates the main problem of keeping all your data "in the cloud": the possibility of network outages and slowdowns. In this case all local services continue to function, while in the "pure" cloud services are dead.  From the end user perspective, it doesn't make a difference whether a server glitch is caused by a web host or a software service provider. Your data is in the cloud, not on your local PC. If the cloud evaporates you can't get to your data. This is well known to Google users. If the service or site goes down, you can't get to your data, and unless you are a really big organization you have nobody to contact. And even if you have somebody to contact, as this is a different organization, they have their own priorities and challenges.

Software as a service also allows licensing savings, as in this model Microsoft, for example, charges for the actual number of users.  Microsoft Live Mesh might be a step in the right direction, as it provides a useful middle ground by synchronizing data across multiple computers belonging to a user (home/office or office1/office2, etc).

Only some services, for example Web sites and email, as well as others with limited I/O (both ways), are suitable for this deployment mode. Attempting to do company-wide videoconferencing via the cloud, or backups, is a very risky proposition.

That does not mean that the list of services that are OK for centralization and can benefit from it is short. Among "limited I/O" services we can mention payroll and enterprise benefits services -- another area where the pure "in the cloud" model definitely makes sense.  But most I/O-intensive enterprise processing, like file sharing, is more efficiently done at the local level. That includes most desktop Office-related tasks and ERP tasks, to name just a few.  Sometimes it is more efficient to implement the "in the cloud" approach at the local level over the LAN instead of the WAN ("mini cloud" or "cloud in a box").

Another problem is that cloud providers such as Amazon are mainly interesting if you experience substantial, unpredictable peaks in your workloads and/or bandwidth use. For stable, consistent workloads you usually end up paying too much. The spectrum of available services is limited, and outside of running your own virtual servers it is difficult to replace the services provided by a real datacenter.  The most commercially viable part is represented by Web hosting and rack hosting providers.  But for web hosting providers the advantages diminish quickly with the complexity of the website. Among Amazon Web Services, only S3 storage can currently be called a successful, viable service. Microsoft Live Mesh is mostly a data synchronization service. It presupposes the existence of local computers and initially provides syncing of files between multiple local computers belonging to the same user. This is an important service which represents a more realistic "mixed" approach.
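
A back-of-the-envelope model of that point, with every number an assumption for illustration only: for a steady 24x7 workload you pay the on-demand premium around the clock, while a bursty workload pays it only during the spikes (the $98.19 figure is the leased-server price mentioned earlier).

```python
#!/usr/bin/env python3
# Toy cost model: owned (leased) server vs on-demand cloud instance.
# All numbers are assumptions for illustration only.
HOURS_PER_MONTH = 730
OWNED_MONTHLY = 98.19       # flat monthly lease for a small server (figure from the text above)
ON_DEMAND_HOURLY = 0.40     # assumed price of a comparable on-demand instance per hour

def cloud_cost(busy_hours_per_month):
    """On-demand cost if the instance runs only while the workload is busy."""
    return busy_hours_per_month * ON_DEMAND_HOURLY

steady = cloud_cost(HOURS_PER_MONTH)   # runs 24x7
bursty = cloud_cost(80)                # runs ~80 hours a month, only during peaks

print(f"steady 24x7 workload: cloud ${steady:.0f}/mo vs owned ${OWNED_MONTHLY:.0f}/mo")
print(f"bursty workload (80 h/mo): cloud ${bursty:.0f}/mo vs owned ${OWNED_MONTHLY:.0f}/mo")
```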

Backup in the cloud controversies

Costs of cloud servers are high. Using a small one-socket "disposable" server with automatic provisioning and remote control tools (DRAC, ILO) you can achieve the same result in remote datacenters at a fraction of the cost. The idea of the disposable server (a small 1U server now costs $1000 or less) gives a second life to the idea of the "autonomous datacenter" (at one point promoted by IBM, but soon completely forgotten, as modern IBM is a fashion-driven company ;-) and is a part of the hybrid cloud model, in which some services, like email, are highly centralized, but others, like file services, are not.

The problem with the "pure cloud" is that bandwidth costs money.  That's why the idea of "backup to the cloud" (which in reality simply means backup over the WAN) is such a questionable idea. Unless you can do it at night without spilling over into working hours, you compete for bandwidth with other people and applications. It still can be used for private backups, if you want to say goodbye to your privacy.

But in an enterprise environment, if such a backup spills over into the morning hours you can make the life of your staff really miserable, because they depend on other applications which are also "in the cloud". This is a classic Catch-22 situation with such a backup strategy. It is safer to do it "on site", and if due to regulations you need to have off-site storage, it is probably cheaper to buy a private optical link (see, for example, AT&T Optical Private Link Service) to a suitable nearby building. BTW a large attached storage box from, say, Dell costs around $40K for 280TB, $80K for 540TB and so on, while doubling the bandwidth of your WAN connection can run you a million a year.

For this amount of money you can move such a unit to a remote site each week (after the full backup is done) using a mid-size SUV (the cost of the SUV is included ;-). For a fancier setup you can use ideas from container-based datacenters and use two cars instead of one (one at the remote site and the other at the main datacenter) for the weekly full backup (Modular data center - Wikipedia). Differential backups usually are not that large and can be handled via the wire. If not, then USPS or FedEx is your friend. You will still have money left for a couple of lavish parties in comparison with your "in the cloud" costs.

Yes, there are some crazy people who are trying WAN transfers of, say, 50-100TB of data, thinking that this is a new way to do backups or to sync two remote datacenters. They typically pay an arm and a leg for some overhyped and overpriced software from companies like IBM that uses UDP to speed up transfers over lines with high latency. But huge site-to-site transfers are still a challenging task even with the best UDP-based transfer software, no matter what presentation IBM and other vendors give to your top brass (which usually does not understand WAN networking, or networking at all; a deficiency on which IBM has relied for a long, long time ;-).

If you can shift the transfer to the night and not overflow into day hours you are fine, but if you overflow into the morning working hours you disrupt the work of many people.

Still, there is no question that the multicast feature of such software is a real breakthrough, and if you need to push the same file (for example a large video file) to multiple sites for all employees to watch, it is a really great way to accomplish the task.

But you can't change the basic limitations of the WAN. Yes, some gains can be achieved by using UDP instead of TCP, better compression, and an rsync-style protocol to avoid moving identical blocks of data. But all those methods are perfectly applicable to local data transfers as well. And on the WAN you are typically facing 30-50MB/sec links (less than a 10% fraction of a 1Gbit link in the best case; bandwidth typically doubles at night, so you get your 10%, but not more). Now please calculate how much time it will take to transfer 50-100TB of data over such a link.  On a local 10Gbit link you can probably get 500 MB/sec. So the difference in speed is 10 times, give or take.
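
To spell out the "please calculate" invitation above (using the sustained rates from the text; real links will vary):

```python
#!/usr/bin/env python3
# Rough transfer-time arithmetic for the figures quoted above.
TB = 10**12   # bytes per terabyte (decimal)

def days(size_tb, rate_mb_per_s):
    """Days needed to move size_tb terabytes at a sustained rate given in MB/s."""
    return size_tb * TB / (rate_mb_per_s * 10**6) / 86400

for size in (50, 100):
    print(f"{size} TB: ~{days(size, 50):.1f} days over WAN at 50 MB/s, "
          f"~{days(size, 500):.1f} days over a local 10Gbit link at 500 MB/s")

# Roughly: 50 TB is about 12 days of continuous transfer at 50 MB/s,
# and a bit over one day at 500 MB/s.
```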

Move to pure cloud exacerbates problems of bureaucratization and dilbertalization of IT

Often the move to the cloud is a stealth way to "neoliberalize" a company, to outsource major IT functions.  As always with neoliberalism, this is most often "fake gold". The real motivation, hidden under techno-speak, is short-term financial gains and bonuses for the top honchos running the company. Also, this is a fashionable thing to do, and you should not underestimate the role of fashion in IT.

If we view the move to the cloud as a form of outsourcing, it is clear that it increases, not decreases, the bureaucratization problem. And the latter should not be discounted, as it is intrinsically linked to the problem of alienation and loyalty.  See

Any large service provider is the same bureaucratic monster as a typical large datacenter and shares with the latter the whole spectrum of Dilbertalization problems. The "vendor lock-in" problem is real, because you lose critical IT skills and are no longer able to run your own infrastructure.  So after several years (typically three to five) walking out is a less viable option, especially when the company IT employees transferred to the outsourcing company have been laid off (typically after two years), and that gives the supplier of cloud services a substantial amount of leverage over the customer.  Still, as hidden outsourcing, the "move to the cloud" definitely remains a very popular model.

Reversal of the decentralization trend will in turn be reversed

It is essentially a reversal of the 30-year-old trend toward decentralization, which drove the rapid adoption of the PC and elevated laptops to the status of essential corporate tool -- the desire to send a particular "central service provider" to hell.  But laptops also need centralized services, and that created a drive toward centralization, which started with webmail providers such as Hotmail (later bought by Microsoft) and remote disk space providers (which allow you to access your files from multiple PCs, or a PC and a tablet, etc). With the rising power of smartphones there is a further drive to provide access to data from multiple points. But in no way are corporate users ready to abandon their laptops for dumb terminals of a new generation, such as Chromebooks.

Chromebooks failed miserably to penetrate the corporate environment. That failure raises a legitimate question about user resistance to the "pure cloud" model. In addition, security considerations, widespread NSA interception of traffic, and the access of multiple agencies to major cloud providers' webmail have also become hotly debated issues.

Only pretty reckless IT management would now argue that using Google mail for corporate mail is the way to go, no matter what the cost savings are. Even universities, for which it is a real cost-saving measure, have started to shun Gmail.

The return, on a new level, to the centralization of services, which is at the heart of the "cloud model", along with solving some old problems inherent in a decentralized environment, brings back old problems connected with centralization, which we hotly discussed in the era of mainframes -- first of all the centralization of failures. Proponents often exaggerate the positive features and underestimate possible problems and possible losses. The vision of an IT future based on a remote, centralized and outsourced datacenter that provides services via the "cloud" using high-speed fiber links is utopian.  It is like the neoliberal dream of "free markets", which never existed and will never exist.

Fiber optic lines made "in the cloud" computing more acceptable, including, for some brave companies, transatlantic traffic, and made some business models that were impossible in the past quite possible (Netflix).  But that does not mean that this technology is a "universal bottle opener".  Bottlenecks remain.  Replacement of the LAN with the WAN has its limits.  According to Wikipedia, "The Fallacies of Distributed Computing" are a set of common but flawed assumptions made by programmers in the development of distributed applications.



NEWS CONTENTS

Old News ;-)

[Jun 12, 2021] A Big Chunk of the Internet Goes Offline Because of a Faulty CDN Provider

Jun 10, 2021 | tech.slashdot.org

(techcrunch.com) Countless popular websites including Reddit, Spotify, Twitch, Stack Overflow, GitHub, gov.uk, Hulu, HBO Max, Quora, PayPal, Vimeo, Shopify, Stripe, and news outlets CNN, The Guardian, The New York Times, BBC and Financial Times are currently facing an outage. A glitch at Fastly, a popular CDN provider, is thought to be the reason, according to a product manager at Financial Times. Fastly has confirmed it's facing an outage on its status website.

[Jun 08, 2021] Technical Evaluations- 6 questions to ask yourself

An average but still useful enumeration of factors that should be considered. One question stands out: "Is that SaaS app really cheaper than more headcount?" :-)
Notable quotes:
"... You may decide that this is not a feasible project for the organization at this time due to a lack of organizational knowledge around containers, but conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next quarter. ..."
"... Bells and whistles can be nice, but the tool must resolve the core issues you identified in the first question. ..."
"... Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral if you save the dev team a couple of hours a day, but you're removing a huge blocker in their daily workflow, and they would be much happier for it. That happiness is likely worth the financial cost. Onboarding new developers is costly, so don't underestimate the value of increased retention when making these calculations. ..."
Apr 21, 2021 | www.redhat.com

When introducing a new tool, programming language, or dependency into your environment, what steps do you take to evaluate it? In this article, I will walk through a six-question framework I use to make these determinations.

What problem am I trying to solve?

We all get caught up in the minutiae of the immediate problem at hand. An honest, critical assessment helps divulge broader root causes and prevents micro-optimizations.

[ You might also like: Six deployment steps for Linux services and their related tools ]

Let's say you are experiencing issues with your configuration management system. Day-to-day operational tasks are taking longer than they should, and working with the language is difficult. A new configuration management system might alleviate these concerns, but make sure to take a broader look at this system's context. Maybe switching from virtual machines to immutable containers eases these issues and more across your environment while being an equivalent amount of work. At this point, you should explore the feasibility of more comprehensive solutions as well. You may decide that this is not a feasible project for the organization at this time due to a lack of organizational knowledge around containers, but conscientiously accepting this tradeoff allows you to put containers on a roadmap for the next quarter.

This intellectual exercise helps you drill down to the root causes and solve core issues, not the symptoms of larger problems. This is not always going to be possible, but be intentional about making this decision.

Does this tool solve that problem?

Now that we have identified the problem, it is time for critical evaluation of both ourselves and the selected tool.

A particular technology might seem appealing because it is new because you read a cool blog post about it or you want to be the one giving a conference talk. Bells and whistles can be nice, but the tool must resolve the core issues you identified in the first question.

What am I giving up?

The tool will, in fact, solve the problem, and we know we're solving the right problem, but what are the tradeoffs?

These considerations can be purely technical. Will the lack of observability tooling prevent efficient debugging in production? Does the closed-source nature of this tool make it more difficult to track down subtle bugs? Is managing yet another dependency worth the operational benefits of using this tool?

Additionally, include the larger organizational, business, and legal contexts that you operate under.

Are you giving up control of a critical business workflow to a third-party vendor? If that vendor doubles their API cost, is that something that your organization can afford and is willing to accept? Are you comfortable with closed-source tooling handling a sensitive bit of proprietary information? Does the software licensing make this difficult to use commercially?

While not simple questions to answer, taking the time to evaluate this upfront will save you a lot of pain later on.

Is the project or vendor healthy?

This question comes with the addendum "for the balance of your requirements." If you only need a tool to get your team over a four to six-month hump until Project X is complete, this question becomes less important. If this is a multi-year commitment and the tool drives a critical business workflow, this is a concern.

When going through this step, make use of all available resources. If the solution is open source, look through the commit history, mailing lists, and forum discussions about that software. Does the community seem to communicate effectively and work well together, or are there obvious rifts between community members? If part of what you are purchasing is a support contract, use that support during the proof-of-concept phase. Does it live up to your expectations? Is the quality of support worth the cost?

Make sure you take a step beyond GitHub stars and forks when evaluating open source tools as well. Something might hit the front page of a news aggregator and receive attention for a few days, but a deeper look might reveal that only a couple of core developers are actually working on a project, and they've had difficulty finding outside contributions. Maybe a tool is open source, but a corporate-funded team drives core development, and support will likely cease if that organization abandons the project. Perhaps the API has changed every six months, causing a lot of pain for folks who have adopted earlier versions.

What are the risks?

As a technologist, you understand that nothing ever goes as planned. Networks go down, drives fail, servers reboot, rows in the data center lose power, entire AWS regions become inaccessible, or BGP hijacks re-route hundreds of terabytes of Internet traffic.

Ask yourself how this tooling could fail and what the impact would be. If you are adding a security vendor product to your CI/CD pipeline, what happens if the vendor goes down?


This brings up both technical and business considerations. Do the CI/CD pipelines simply time out because they can't reach the vendor, or do you have it "fail open" and allow the pipeline to complete with a warning? This is a technical problem but ultimately a business decision. Are you willing to go to production with a change that has bypassed the security scanning in this scenario?

Obviously, this task becomes more difficult as we increase the complexity of the system. Thankfully, sites like k8s.af consolidate example outage scenarios. These public postmortems are very helpful for understanding how a piece of software can fail and how to plan for that scenario.

What are the costs?

The primary considerations here are employee time and, if applicable, vendor cost. Is that SaaS app cheaper than more headcount? If you save each developer on the team two hours a day with that new CI/CD tool, does it pay for itself over the next fiscal year?

Granted, not everything has to be a cost-saving proposition. Maybe it won't be cost-neutral if you save the dev team a couple of hours a day, but you're removing a huge blocker in their daily workflow, and they would be much happier for it. That happiness is likely worth the financial cost. Onboarding new developers is costly, so don't underestimate the value of increased retention when making these calculations.

[ A free guide from Red Hat: 5 steps to automate your business . ]

Wrap up

I hope you've found this framework insightful, and I encourage you to incorporate it into your own decision-making processes. There is no one-size-fits-all framework that works for every decision. Don't forget that, sometimes, you might need to go with your gut and make a judgment call. However, having a standardized process like this will help differentiate between those times when you can critically analyze a decision and when you need to make that leap.

[May 03, 2021] Do You Replace Your Server Or Go To The Cloud- The Answer May Surprise You

May 03, 2021 | www.forbes.com

Is your server or servers getting old? Have you pushed it to the end of its lifespan? Have you reached that stage where it's time to do something about it? Join the crowd. You're now at that decision point that so many other business people are finding themselves this year. And the decision is this: do you replace that old server with a new server or do you go to: the cloud.

Everyone's talking about the cloud nowadays so you've got to consider it, right? This could be a great new thing for your company! You've been told that the cloud enables companies like yours to be more flexible and save on their IT costs. It allows free and easy access to data for employees from wherever they are, using whatever devices they want to use. Maybe you've seen the recent survey by accounting software maker MYOB that found that small businesses that adopt cloud technologies enjoy higher revenues. Or perhaps you've stumbled on this analysis that said that small businesses are losing money as a result of ineffective IT management that could be much improved by the use of cloud based services. Or the poll of more than 1,200 small businesses by technology reseller CDW which discovered that "cloud users cite cost savings, increased efficiency and greater innovation as key benefits" and that "across all industries, storage and conferencing and collaboration are the top cloud services and applications."

So it's time to chuck that old piece of junk and take your company to the cloud, right? Well just hold on.

There's no question that if you're a startup or a very small company or a company that is virtual or whose employees are distributed around the world, a cloud based environment is the way to go. Or maybe you've got high internal IT costs or require more computing power. But maybe that's not you. Maybe your company sells pharmaceutical supplies, provides landscaping services, fixes roofs, ships industrial cleaning agents, manufactures packaging materials or distributes gaskets. You are not featured in Fast Company and you have not been invited to present at the next Disrupt conference. But you know you represent the very core of small business in America. I know this too. You are just like one of my company's 600 clients. And what are these companies doing this year when it comes time to replace their servers?

These very smart owners and managers of small and medium sized businesses who have existing applications running on old servers are not going to the cloud. Instead, they've been buying new servers.

Wait, buying new servers? What about the cloud?

At no less than six of my clients in the past 90 days it was time to replace servers. They had all waited as long as possible, conserving cash in a slow economy, hoping to get the most out of their existing machines. Sound familiar? But the servers were showing their age, applications were running slower and now as the companies found themselves growing their infrastructure their old machines were reaching their limit. Things were getting to a breaking point, and all six of my clients decided it was time for a change. So they all moved to cloud, right?


Nope. None of them did. None of them chose the cloud. Why? Because all six of these small business owners and managers came to the same conclusion: it was just too expensive. Sorry media. Sorry tech world. But this is the truth. This is what's happening in the world of established companies.

Consider the options. All of my clients evaluated cloud based hosting services from Amazon, Microsoft and Rackspace. They also interviewed a handful of cloud based IT management firms who promised to move their existing applications (Office, accounting, CRM, databases) to their servers and manage them offsite. All of these popular options are viable and make sense, as evidenced by their growth in recent years. But when all the smoke cleared, all of these services came in at about the same price: approximately $100 per month per user. This is what it costs for an existing company to move their existing infrastructure to a cloud based infrastructure in 2013. We've got the proposals and we've done the analysis.

You're going through the same thought process, so now put yourself in their shoes. Suppose you have maybe 20 people in your company who need computer access. Suppose you are satisfied with your existing applications and don't want to go through the agony and enormous expense of migrating to a new cloud based application. Suppose you don't employ a full time IT guy, but have a service contract with a reliable local IT firm.

Now do the numbers: $100 per month x 20 users is $2,000 per month or $24,000 PER YEAR for a cloud based service. How many servers can you buy for that amount? Imagine putting that proposal out to an experienced, battle-hardened, profit generating small business owner who, like all the smart business owners I know, look hard at the return on investment decision before parting with their cash.

For all six of these clients the decision was a no-brainer: they all bought new servers and had their IT guy install them. But can't the cloud bring down their IT costs? All six of these guys use their IT guy for maybe half a day a month to support their servers (sure he could be doing more, but small business owners always try to get away with the minimum). His rate is $150 per hour. That's still way below using a cloud service.
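
Spelling out the arithmetic behind that conclusion (the server price and the support hours are assumptions consistent with the figures quoted, not the author's exact numbers):

```python
#!/usr/bin/env python3
# The cloud-vs-server comparison from the article, spelled out.
# SERVER_PRICE and IT_HOURS_PER_MONTH are assumptions, not the author's figures.
USERS = 20
CLOUD_PER_USER_MONTHLY = 100   # quoted cloud price per user per month
IT_RATE = 150                  # local IT firm, dollars per hour
IT_HOURS_PER_MONTH = 4         # "half a day a month" taken as roughly 4 hours
SERVER_PRICE = 3000            # assumed small-server price, amortized over 3 years

cloud_yearly = USERS * CLOUD_PER_USER_MONTHLY * 12
onprem_yearly = IT_RATE * IT_HOURS_PER_MONTH * 12 + SERVER_PRICE / 3

print(f"cloud:   ${cloud_yearly:,.0f} per year")    # $24,000
print(f"on-prem: ${onprem_yearly:,.0f} per year")   # about $8,200 with these assumptions
```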

No one could make the numbers work. No one could justify the return on investment. The cloud, at least for established businesses who don't want to change their existing applications, is still just too expensive.

Please know that these companies are, in fact, using some cloud-based applications. They all have virtual private networks set up and their people access their systems over the cloud using remote desktop technologies. Like the respondents in the above surveys, they subscribe to online backup services, share files on DropBox and Microsoft's file storage, make their calls over Skype, take advantage of Gmail and use collaboration tools like Google Docs or Box. Many of their employees have iPhones and Droids and like to use mobile apps which rely on cloud data to make them more productive. These applications didn't exist a few years ago and their growth and benefits cannot be denied.

Paul-Henri Ferrand, President of Dell North America, doesn't see this trend continuing. "Many smaller but growing businesses are looking and/or moving to the cloud," he told me. "There will be some (small businesses) that will continue to buy hardware but I see the trend is clearly toward the cloud. As more business applications become more available for the cloud, the more likely the trend will continue."

He's right. Over the next few years the costs will come down. Your beloved internal application will become out of date and your only option will be to migrate to a cloud based application (hopefully provided by the same vendor to ease the transition). Your technology partners will help you and the process will be easier, and less expensive than today. But for now, you may find it makes more sense to just buy a new server. It's OK. You're not alone.

Besides Forbes, Gene Marks writes weekly for The New York Times and Inc.com.

Related on Forbes:

[Mar 05, 2021] Edge servers can be strategically placed within the topography of a network to reduce the latency of connecting with them and serve as a buffer to help mitigate overloading a data center

Mar 05, 2021 | opensource.com

... Edge computing is a model of infrastructure design that places many "compute nodes" (a fancy word for a server) geographically closer to people who use them most frequently. It can be part of the open hybrid-cloud model, in which a centralized data center exists to do all the heavy lifting but is bolstered by smaller regional servers to perform high frequency -- but usually less demanding -- tasks...

Historically, a computer was a room-sized device hidden away in the bowels of a university or corporate head office. Client terminals in labs would connect to the computer and make requests for processing. It was a centralized system with access points scattered around the premises. As modern networked computing has evolved, this model has been mirrored unexpectedly. There are centralized data centers to provide serious processing power, with client computers scattered around so that users can connect. However, the centralized model makes less and less sense as demands for processing power and speed are ramping up, so the data centers are being augmented with distributed servers placed on the "edge" of the network, closer to the users who need them.

The "edge" of a network is partly an imaginary place because network boundaries don't exactly map to physical space. However, servers can be strategically placed within the topography of a network to reduce the latency of connecting with them and serve as a buffer to help mitigate overloading a data center.

... ... ...

While it's not exclusive to Linux, container technology is an important part of cloud and edge computing. Getting to know Linux and Linux containers helps you learn to install, modify, and maintain "serverless" applications. As processing demands increase, it's more important to understand containers, Kubernetes and KubeEdge, pods, and other tools that are key to load balancing and reliability.

... ... ...

The cloud is largely a Linux platform. While there are great layers of abstraction, such as Kubernetes and OpenShift, when you need to understand the underlying technology, you benefit from a healthy dose of Linux knowledge. The best way to learn it is to use it, and Linux is remarkably easy to try. Get the edge on Linux so you can get Linux on the edge.

[Mar 29, 2020] Why Didn't We Test Our Trade's 'Antifragility' Before COVID-19 by Gene Callahan and Joe Norman

Highly recommended!
Mar 28, 2020 | www.theamericanconservative.com

On April 21, 2011, the region of Amazon Web Services covering eastern North America crashed. The crash brought down the sites of large customers such as Quora, Foursquare, and Reddit. It took Amazon over a week to bring its system fully back online, and some customer data was lost permanently.

But one company whose site did not crash was Netflix. It turns out that Netflix had made themselves "antifragile" by employing software they called "Chaos Monkey," which regularly and randomly brought down Netflix servers. By continually crashing their own servers, Netflix learned how to nevertheless keep other portions of their network running. And so when Amazon US-East crashed, Netflix ran on, unfazed.

This phenomenon is discussed by Nassim Taleb in his book Antifragile : a system that depends on the absence of change is fragile. The companies that focused on keeping all of their servers up and running all the time went completely offline when Amazon crashed from under them. But the company that had exposed itself to lots of little crashes could handle the big crash. That is because the minor, "undesirable" changes stress the system in a way that can make it stronger.
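
A minimal sketch of the "regularly and randomly break something" idea (this is not Netflix's actual Chaos Monkey; the instance names are placeholders and the kill action is stubbed out, since the only point here is the randomness):

```python
#!/usr/bin/env python3
# Toy chaos-monkey: at random intervals pick one instance from a fleet and
# "terminate" it, so the rest of the system must learn to cope with failure.
# Instance names are placeholders; the action below is a stub.
import random
import time

INSTANCES = ["app-1", "app-2", "app-3", "app-4"]

def kill_one():
    victim = random.choice(INSTANCES)
    # Stub: a real tool would call the cloud provider or orchestrator API here.
    print(f"chaos: terminating {victim}")

if __name__ == "__main__":
    while True:
        time.sleep(random.randint(600, 3600))   # wait 10-60 minutes between kills
        kill_one()
```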

The idea of antifragility does not apply only to computer networks. For instance, by trying to eliminate minor downturns in the economy, central bank policy can make that economy extremely vulnerable to a major recession. Running only on treadmills or tracks makes the joints extremely vulnerable when, say, one steps in a pothole in the sidewalk.

What does this have to do with trade policy? For many reasons, such as the recent coronavirus outbreak, flows of goods are subject to unexpected shocks.

Both a regime of "unfettered" free trade, and its opposite, that of complete autarchy, are fragile in the face of such shocks. A trade policy aimed not at complete free trade or protectionism, but at making an economy better at absorbing and adapting to rapid change, is more sane and salutary than either extreme. Furthermore, we suggest practicing for shocks can help make an economy antifragile.

Amongst academic economists, the pure free-trade position is more popular. The case for international trade, absent the artificial interference of government trade policy, is generally based upon the "principle of comparative advantage," first formulated by the English economist David Ricardo in the early 19th century. Ricardo pointed out, quite correctly, that even if, among two potential trading partners looking to trade a pair of goods, one of them is better at producing both of them, there still exist potential gains from trade -- so long as one of them is relatively better at producing one of the goods, and the other (as a consequence of this condition) relatively better at producing the other. For example, Lebron James may be better than his local house painter at playing basketball, and at painting houses, given his extreme athleticism and long reach. But he is so much more "better" at basketball that it can still make sense for him to concentrate on basketball and pay the painter to paint his house.

And so, per Ricardo, it is among nations: even if, say, Sweden can produce both cars and wool sweaters more efficiently than Scotland, if Scotland is relatively less bad at producing sweaters than cars, it still makes sense for Scotland to produce only wool sweaters, and trade with Sweden for the cars it needs.

When we take comparative advantage to its logical conclusion at the global scale, it suggests that each agent (say, nation) should focus on one major industry domestically and that no two agents should specialize in the same industry. To do so would be to sacrifice the supposed advantage of sourcing from the agent who is best positioned to produce a particular good, with no gain for anyone.

Good so far, but Ricardo's case contains two critical hidden assumptions: first, that the prices of the goods in question will remain more or less stable in the global marketplace, and second, that the availability of imported goods from specialized producers will remain uninterrupted, such that sacrificing local capabilities for cheaper foreign alternatives carries no long-term risk.

So what happens in Scotland if the Swedes suddenly go crazy for yak hair sweaters (produced in Tibet) and are no longer interested in Scottish sweaters at all? The price of those sweaters crashes, and Scotland now finds itself with most of its productive capacity specialized in making a product that can only be sold at a loss.

Or what transpires if Scotland is no longer able, for whatever reason, to produce sweaters, but the Swedes need sweaters to keep warm? Swedes were perhaps once able to make their own sweaters, but have since funneled all their resources into making cars, and have even lost the knowledge of sweater-making. Now to keep warm, the Swedes have to rapidly build the infrastructure and workforce needed to make sweaters, and regain the knowledge of how to do so, as the Scots had not only been their sweater supplier, but the only global sweater supplier.

So we see that the case for extreme specialization, based on a first-order understanding of comparative advantage, collapses when faced with a second-order effect of a dramatic change in relative prices or conditions of supply.

That all may sound very theoretical, but collapses due to over-specialization, prompted by international agencies advising developing economies based on naive comparative-advantage analysis, have happened all too often. For instance, a number of African economies, persuaded to base their entire economy on a single good in which they had a comparative advantage (e.g, gold, cocoa, oil, or bauxite), saw their economies crash when the price of that commodity fell. People who had formerly been largely self-sufficient found themselves wage laborers for multinationals in good times, and dependents on foreign charity during bad times.

While the case for extreme specialization in production collapses merely by letting prices vary, it gets even worse for the "just specialize in the single thing you do best" folks once we add in considerations of pandemics, wars, extreme climate change, and other such shocks. We have just witnessed how relying on China for such a high percentage of our medical supplies and manufacturing has proven unwise when faced with an epidemic originating in China.

On a smaller scale, the great urban theorist Jane Jacobs stressed the need for economic diversity in a city if it is to flourish. Detroit's over-reliance on the automobile industry, and its subsequent collapse when that industry largely deserted it, is a prominent example of Jacobs' point. And while Detroit is perhaps the most famous example of a city collapsing due to over-specialization, it is far from the only one .

All of this suggests that trade policy, at any level, should have, as its primary goal, the encouragement of diversity in that level's economic activity. To embrace the extremes of "pure free trade" or "total self-sufficiency" is to become more susceptible to catastrophe from changing conditions. A region that can produce only a few goods is fragile in the face of an event, like the coronavirus, that disrupts the flow of outside goods. On the other hand, turning completely inward, and cutting the region off from the outside, leaves it without outside help when confronting a local disaster, like an extreme drought.

To be resilient as a social entity, whether a nation, region, city, or family, will have a diverse mix of internal and external resources it can draw upon for sustenance. Even for an individual, total specialization and complete autarchy are both bad bets. If your only skill is repairing Sony Walkmen, you were probably pretty busy in 2000, but by today you likely don't have much work. Complete individual autarchy isn't ever really even attempted: if you watch YouTube videos of supposedly "self-reliant" people in the wilderness, you will find them using axes, radios, saws, solar panels, pots and pans, shirts, shoes, tents, and many more goods produced by others.

In the technical literature, having such diversity at multiple scales is referred to as "multiscale variety." In a system that displays multiscale variety, no single scale accounts for all of the diversity of behavior in the system. The practical importance of this is related to the fact that shocks themselves come at different scales. Some shocks might be limited to a town or a region, for instance local weather events, while others can be much more widespread, such as the coronavirus pandemic we are currently facing.

A system with multiscale variety is able to respond to shocks at the scale at which they occur: if one region experiences a drought while a neighboring region does not, agricultural supplementation from the currently abundant region can be leveraged. At a smaller scale, if one field of potatoes becomes infested with a pest, while the adjacent cows in pasture are spared, the family who owns the farm will still be able to feed themselves and supply products to the market.

Understanding this, the question becomes how can trade policy, conceived broadly, promote the necessary variety and resiliency to mitigate and thrive in the face of the unexpected? Crucially, we should learn from the tech companies: practice disconnecting, and do it randomly. In our view there are two important components to the intentional disruption: (1) it is regular enough to generate "muscle memory" type responses; and (2) it is random enough that responses are not "overfit" to particular scenarios.

For an individual or family, implementing such a policy might create some hardships, but there are few institutional barriers to doing so. One week, simply declare, "Let's pretend all of the grocery stores are empty, and try getting by only on what we can produce in the yard or have stockpiled in our house!" On another occasion, perhaps, see if you can keep your house warm for a few days without input from utility companies.

Businesses are also largely free of institutional barriers to practicing disconnecting. A company can simply say, "We are awfully dependent on supplier X: this week, we are not going to order from them, and let's see what we can do instead!" A business can also seek out external alternatives to over-reliance on crucial internal resources: for instance, if your top tech guy can hold your business hostage, it is a good idea to find an outside consulting firm that could potentially fill his role.

When we get up to the scale of the nation, things become (at least institutionally) trickier. If Freedonia suddenly bans the import of goods from Ruritania, even for a week, Ruritania is likely to regard this as a "trade war," and may very well go to the WTO and seek relief. However, the point of this reorientation of trade policy is not to promote hostility to other countries, but to make one's own country more resilient. A possible solution to this problem is that a national government could periodically, at random times, buy all of the imports of some good from some other country, and stockpile them. Then the foreign supplier would have no cause for complaint: its goods are still being purchased! But domestic manufacturers would have to learn to adjust to a disappearance of the supply of palm oil from Indonesia, or tin from China, or oil from Norway.

Critics will complain that such government management of trade flows, even with the noble aim of rendering an economy antifragile, will inevitably be turned to less pure purposes, like protecting politically powerful industrialists. But so what? It is not as though the pursuit of free trade hasn't itself yielded perverse outcomes, such as the NAFTA trade agreement that ran to over one thousand pages. Any good aim is likely to suffer diversion as it passes through the rough-and-tumble of political reality. Thus, we might as well set our sights on an ideal policy, even though it won't be perfectly realized.

We must learn to deal with disruptions when success is not critical to survival. The better we become at responding to unexpected shocks, the lower the cost will be each time we face an event beyond our control that demands an adaptive response. To wait until adaptation is necessary makes us fragile when a real crisis appears. We should begin to develop an antifragile economy today, by causing our own disruptions and learning to overcome them. Deliberately disrupting our own economy may sound crazy. But then, so did deliberately crashing one's own servers, until Chaos Monkey proved that it works.

Gene Callahan teaches at the Tandon School of Engineering at New York University. Joe Norman is a data scientist and researcher at the New England Complex Systems Institute.

My Gana 20 hours ago
The most disruptive force is our own demographic change, of which governments have known for decades. The coronavirus challenge is nothing compared to what will happen because the US education system discriminated against the poor, who will be the majority!
PierrePaul 12 hours ago
What Winston Churchill once said about the Americans is in fact true of all humans: "Americans always end up doing the right thing once they have exhausted all other options". That's just as true of the French (I write from France), since our government stopped stocking a strategic reserve of a billion breathing masks in 2013 because "we could buy them in China at a lower cost". Now we can't produce enough masks even for our hospitals.

[Mar 05, 2020] Micro data center

Mar 05, 2020 | en.wikipedia.org

A micro data center (MDC) is a smaller or containerized (modular) data center architecture that is designed for computer workloads not requiring traditional facilities. Whereas the size may vary from rack to container, a micro data center may include fewer than four servers in a single 19-inch rack. It may come with built-in security systems, cooling systems, and fire protection. Typically these are standalone rack-level systems containing all the components of a 'traditional' data center, [1] including in-rack cooling, power supply, power backup, physical security, and fire detection and suppression. Designs exist where energy is conserved by means of temperature chaining, in combination with liquid cooling. [2]

In mid-2017, technology introduced by the DOME project was demonstrated enabling 64 high-performance servers, storage, networking, power and cooling to be integrated in a 2U 19" rack-unit. This packaging, sometimes called 'datacenter-in-a-box' allows deployments in spaces where traditional data centers do not fit, such as factory floors ( IOT ) and dense city centers, especially for edge-computing and edge-analytics.

MDCs are typically portable and provide plug and play features. They can be rapidly deployed indoors or outdoors, in remote locations, for a branch office, or for temporary use in high-risk zones. [3] They enable distributed workloads , minimizing downtime and increasing speed of response.

[Mar 05, 2020] What's next for data centers Think micro data centers by Larry Dignan

Apr 14, 2019 | www.zdnet.com

A micro data center, a mini version of a data center rack, could work as edge computing takes hold in various industries. Here's a look at the moving parts behind the micro data center concept.

[Nov 08, 2019] Multiple Linux sysadmins working as root

No new interesting ideas for such an important topic whatsoever. One of the main problems here is documenting the actions of each administrator in such a way that the full set of actions is visible to everybody in a convenient and transparent manner. With multiple terminals open, the shell history is not a file from which you can deduce each sysadmin's actions, because the parts of the history written from the other terminals are missing. Actually, Solaris had some relevant ideas implemented in Solaris 10, but they never made it to Linux.
May 21, 2012 | serverfault.com

In our team we have three seasoned Linux sysadmins having to administer a few dozen Debian servers. Previously we have all worked as root using SSH public key authentication. But we had a discussion on what is the best practice for that scenario and couldn't agree on anything.

Everybody's SSH public key is put into ~root/.ssh/authorized_keys2

Using personalized accounts and sudo

That way we would login with personalized accounts using SSH public keys and use sudo to do single tasks with root permissions. In addition we could give ourselves the "adm" group that allows us to view log files.

Using multiple UID 0 users

This is a rather unusual proposal from one of the sysadmins. He suggests creating three users in /etc/passwd, all having UID 0 but different login names. He claims that this is not actually forbidden and allows everyone to be UID 0 while still leaving an audit trail tied to the login name.

Comments:

The second option is the best one IMHO. Personal accounts, sudo access. Disable root access via SSH completely. We have a few hundred servers and half a dozen system admins, this is how we do it.
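A minimal sketch of that setup; the user, group, and key file names are illustrative assumptions, and this is not a complete hardening guide:

 groupadd sysadmins
 useradd -m -G sysadmins,adm alice                  # personal account; 'adm' lets her read logs
 install -d -m 700 -o alice -g alice ~alice/.ssh
 cat alice.pub >> ~alice/.ssh/authorized_keys       # her personal public key, not root's
 chown alice:alice ~alice/.ssh/authorized_keys

 echo '%sysadmins ALL=(ALL) ALL' > /etc/sudoers.d/sysadmins
 chmod 440 /etc/sudoers.d/sysadmins
 visudo -cf /etc/sudoers.d/sysadmins                # sanity-check the sudoers syntax
 # and in /etc/ssh/sshd_config: PermitRootLogin no  (then reload sshd)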

How does agent forwarding break exactly?

Also, if it's such a hassle using sudo in front of every task you can invoke a sudo shell with sudo -s or switch to a root shell with sudo su -

thepearson

With regard to the 3rd suggested strategy, other than perusal of the useradd -o -u userXXX options as recommended by @jlliagre, I am not familiar with running multiple users as the same uid. (Hence, if you do go ahead with that, I would be interested if you could update the post with any issues (or successes) that arise...)
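Purely for illustration of what that third option looks like (not a recommendation), useradd's -o flag permits a non-unique UID, so a second UID 0 account can be created; the login name is an arbitrary example:

 useradd -o -u 0 -g 0 -M -d /root -s /bin/bash root_alice   # -o allows the duplicate UID
 passwd root_alice
 getent passwd root_alice    # root_alice:x:0:0::/root:/bin/bash

The caveat raised further down applies: most tools resolve UID 0 back to the first matching name ("root"), so the audit value is limited.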

I guess my first observation regarding the first option, "Everybody's SSH public key is put into ~root/.ssh/authorized_keys2", is that unless you are absolutely never going to work on any other systems:

  1. then at least some of the time, you are going to have to work with user accounts and sudo

The second observation would be that if you work on systems that aspire to HIPAA or PCI-DSS compliance, or to standards like CAPP and EAL, then you are going to have to work around the issues of sudo because:

  1. It is an industry standard to provide non-root individual user accounts that can be audited, disabled, expired, etc., typically using some centralized user database.

So: use personalized accounts and sudo.

It is unfortunate that, as a sysadmin, almost everything you will need to do on a remote machine is going to require some elevated permissions; it is also annoying that most of the SSH-based tools and utilities are broken while you are under sudo.

Hence I can pass on some tricks that I use to work around the annoyances of sudo that you mention. The first problem is that if root login is blocked using PermitRootLogin=no, or if you cannot log in as root with an ssh key, then SCPing files becomes something of a PITA.

Problem 1 : You want to scp files from the remote side, but they require root access, however you cannot login to the remote box as root directly.

Boring Solution : copy the files to home directory, chown, and scp down.

ssh userXXX@remotesystem, sudo su -, cp /etc/somefiles /home/userXXX/somefiles, chown -R userXXX /home/userXXX/somefiles, then use scp to retrieve the files from the remote side.

Less Boring Solution : sftp supports the -s sftp_server flag, hence you can do something like the following (if you have configured password-less sudo in /etc/sudoers );

sftp  -s '/usr/bin/sudo /usr/libexec/openssh/sftp-server' \
userXXX@remotehost:/etc/resolv.conf

(you can also use this hack-around with sshfs, but I am not sure it's recommended... ;-)

If you don't have password-less sudo rights, or for some configured reason that method above is broken, I can suggest one more less boring file transfer method, to access remote root files.

Port Forward Ninja Method :

Login to the remote host, but specify that the remote port 3022 (can be anything free, and non-reserved for admins, ie >1024) is to be forwarded back to port 22 on the local side.

 [localuser@localmachine ~]$ ssh userXXX@remotehost -R 3022:localhost:22
Last login: Mon May 21 05:46:07 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------

Get root in the normal fashion...

-bash-3.2$ sudo su -
[root@remotehost ~]#

Now you can scp the files in the other direction, avoiding the boring step of making an intermediate copy of the files:

[root@remotehost ~]#  scp -o NoHostAuthenticationForLocalhost=yes \
 -P3022 /etc/resolv.conf localuser@localhost:~
localuser@localhost's password: 
resolv.conf                                 100%  
[root@remotehost ~]#

Problem 2: SSH agent forwarding : If you load the root profile, e.g. by specifying a login shell, the necessary environment variables for SSH agent forwarding such as SSH_AUTH_SOCK are reset, hence SSH agent forwarding is "broken" under sudo su - .

Half baked answer :

Anything that properly loads a root shell is going to rightfully reset the environment; however, there is a slight work-around you can use when you need BOTH root permission AND the ability to use the SSH agent, AT THE SAME TIME.

This achieves a kind of chimera profile, that should really not be used, because it is a nasty hack , but is useful when you need to SCP files from the remote host as root, to some other remote host.

Anyway, you can enable that your user can preserve their ENV variables, by setting the following in sudoers;

 Defaults:userXXX    !env_reset

this allows you to create nasty hybrid login environments like so;

login as normal;

[localuser@localmachine ~]$ ssh userXXX@remotehost 
Last login: Mon May 21 12:33:12 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------
-bash-3.2$ env | grep SSH_AUTH
SSH_AUTH_SOCK=/tmp/ssh-qwO715/agent.1971

Create a bash shell that runs /root/.profile and /root/.bashrc, but preserves SSH_AUTH_SOCK:

-bash-3.2$ sudo -E bash -l

So this shell has root permissions, and root $PATH (but a borked home directory...)

bash-3.2# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel) context=user_u:system_r:unconfined_t
bash-3.2# echo $PATH
/usr/kerberos/sbin:/usr/local/sbin:/usr/sbin:/sbin:/home/xtrabm/xtrabackup-manager:/usr/kerberos/bin:/opt/admin/bin:/usr/local/bin:/bin:/usr/bin:/opt/mx/bin

But you can use that invocation to do things that require remote sudo root, but also the SSH agent access like so;

bash-3.2# scp /root/.ssh/authorized_keys ssh-agent-user@some-other-remote-host:~
/root/.ssh/authorized_keys              100%  126     0.1KB/s   00:00    
bash-3.2#

Tom H

The 3rd option looks ideal, but have you actually tried it out to see what's happening? While you might see the additional usernames in the authentication step, any reverse lookup is going to return the same value.

Allowing root direct ssh access is a bad idea, even if your machines are not connected to the internet / use strong passwords.

Usually I use 'su' rather than sudo for root access.

symcbean

I use (1), but I happened to type

rm -rf / tmp *

on one ill-fated day. I can see this being bad enough if you have more than a handful of admins.

(2) Is probably more engineered - and you can become full-fledged root through sudo su -. Accidents are still possible though.

(3) I would not touch with a barge pole. I used it on Suns, in order to have a non-barebone-sh root account (if I remember correctly) but it was never robust - plus I doubt it would be very auditable.

Definitely answer 2.
  1. Means that you're allowing SSH access as root . If this machine is in any way public facing, this is just a terrible idea; back when I ran SSH on port 22, my VPS got multiple attempts hourly to authenticate as root. I had a basic IDS set up to log and ban IPs that made multiple failed attempts, but they kept coming. Thankfully, I'd disabled SSH access as the root user as soon as I had my own account and sudo configured. Additionally, you have virtually no audit trail doing this.
  2. Provides root access as and when it is needed. Yes, you barely have any privileges as a standard user, but this is pretty much exactly what you want; if an account does get compromised, you want it to be limited in its abilities. You want any super user access to require a password re-entry. Additionally, sudo access can be controlled through user groups, and restricted to particular commands if you like, giving you more control over who has access to what. Additionally, commands run as sudo can be logged, so it provides a much better audit trail if things go wrong. Oh, and don't just run "sudo su -" as soon as you log in. That's terrible, terrible practice.
  3. Your sysadmin's idea is bad. And he should feel bad. No, *nix machines probably won't stop you from doing this, but both your file system, and virtually every application out there expects each user to have a unique UID. If you start going down this road, I can guarantee that you'll run into problems. Maybe not immediately, but eventually. For example, despite displaying nice friendly names, files and directories use UID numbers to designate their owners; if you run into a program that has a problem with duplicate UIDs down the line, you can't just change a UID in your passwd file later on without having to do some serious manual file system cleanup.

sudo is the way forward. It may cause additional hassle with running commands as root, but it provides you with a more secure box, both in terms of access and auditing.
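As a hedged illustration of that group-based, command-restricted, logged sudo access (the group name, command alias, and paths are assumptions, not a prescription), a sudoers fragment edited via visudo might look like this:

 Cmnd_Alias WEB_CMDS = /usr/sbin/apache2ctl, /usr/bin/systemctl restart apache2
 %webadmins ALL=(root) WEB_CMDS        # members of webadmins may run only these commands as root
 Defaults logfile=/var/log/sudo.log    # keep a local log of every sudo invocation
 Defaults log_input, log_output        # optionally record full session I/O for auditing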

Rohaq

Definitely option 2, but use groups to give each user as much control as possible without needing to use sudo. sudo in front of every command loses half the benefit, because you are always in the danger zone. If you make the relevant directories writable by the sysadmins without sudo, you return sudo to being the exception, which makes everyone feel safer.
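A sketch of that group-ownership idea, letting sysadmins edit one subsystem's configuration without sudo; the group name and path are illustrative:

 groupadd websysadm
 usermod -aG websysadm alice
 chgrp -R websysadm /etc/apache2
 chmod -R g+rwX /etc/apache2                        # group read/write; execute bit only on directories
 find /etc/apache2 -type d -exec chmod g+s {} +     # new files created there inherit the group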

Julian

In the old days, sudo did not exist. As a consequence, having multiple UID 0 users was the only available alternative. But it's still not that good, notably with logging based on the UID to obtain the username. Nowadays, sudo is the only appropriate solution. Forget anything else.

It is, in fact, documented as permissible: BSD unices have had their toor account for a long time, and bash-root users tend to be accepted practice on systems where csh is standard (accepted malpractice ;-)

Perhaps I'm weird, but method (3) is what popped into my mind first as well. Pros: you'd have every user's name in the logs and would know who did what as root. Cons: they'd each be root all the time, so mistakes can be catastrophic.

I'd like to question why you need all admins to have root access. All 3 methods you propose have one distinct disadvantage: once an admin runs a sudo bash -l or sudo su - or such, you lose your ability to track who does what, and after that a mistake can be catastrophic. Moreover, in case of possible misbehaviour, this might even end up a lot worse.

Instead you might want to consider going another way: give each subsystem its own administrative account, with ownership of that subsystem's files and narrowly scoped sudo rights (a sketch follows below).

This way, martin would be able to safely handle postfix, and in case of mistake or misbehaviour, you'd only lose your postfix system, not entire server.
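A minimal sketch of this per-subsystem approach, using the postfix example above; the sudo rule and paths are illustrative assumptions:

 useradd -m martin
 chown -R martin:martin /etc/postfix
 echo 'martin ALL=(root) /usr/bin/systemctl restart postfix, /usr/bin/systemctl reload postfix' \
   > /etc/sudoers.d/martin
 chmod 440 /etc/sudoers.d/martin

A mistake or compromise of martin's account is then contained to the mail subsystem rather than the whole server.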

Same logic can be applied to any other subsystem, such as apache, mysql, etc.

Of course, this is purely theoretical at this point, and might be hard to set up. It does look like a better way to go, though. At least to me. If anyone tries this, please let me know how it went.

Tuncay Göncüoğlu

[Oct 22, 2019] Bank of America Says It Saves $2 Billion Per Year By Ignoring Amazon and Microsoft and Building Its Own Cloud Instead

Oct 22, 2019 | slashdot.org

(businessinsider.com) Bank of America says it has saved $2 billion per year by building its own private cloud software rather than outsourcing to companies like Amazon, Microsoft, and Google. From a report: The investment, including a $350 million charge in 2017, hasn't been cheap, but it has had a striking payoff, CEO Brian Moynihan said during the company's third-quarter earnings call. He said the decision helped reduce the firm's servers to 70,000 from 200,000 and its data centers to 23 from 60, and it has resulted in $2 billion in annual infrastructure savings.

[Aug 22, 2019] Why Micro Data Centers Deliver Good Things in Small Packages by Calvin Hennick

Aug 22, 2019 | solutions.cdw.com

Enterprises are deploying self-contained micro data centers to power computing at the network edge.

Calvin Hennick is a freelance journalist who specializes in business and technology writing. He is a contributor to the CDW family of technology magazines.

The location for data processing has changed significantly throughout the history of computing. During the mainframe era, data was processed centrally, but client/server architectures later decentralized computing. In recent years, cloud computing centralized many processing workloads, but digital transformation and the Internet of Things are poised to move computing to new places, such as the network edge .

"There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture."

For example, some IoT systems require processing of data at remote locations rather than a centralized data center , such as at a retail store instead of a corporate headquarters.

To meet regulatory requirements and business needs, IoT solutions often need low latency, high bandwidth, robust security and superior reliability . To meet these demands, many organizations are deploying micro data centers: self-contained solutions that provide not only essential infrastructure, but also physical security, power and cooling and remote management capabilities.

"Digital transformation happens at the network edge, and edge computing will happen inside micro data centers ," says Bruce A. Taylor, executive vice president at Datacenter Dynamics . "This will probably be one of the fastest growing segments -- if not the fastest growing segment -- in data centers for the foreseeable future."

What Is a Micro Data Center?

Delivering the IT capabilities needed for edge computing represents a significant challenge for many organizations, which need manageable and secure solutions that can be deployed easily, consistently and close to the source of computing . Vendors such as APC have begun to create comprehensive solutions that provide these necessary capabilities in a single, standardized package.

"From our perspective at APC, the micro data center was a response to what was happening in the market," says Humphrey. "We were seeing that enterprises needed more robust solutions at the edge."

Most micro data center solutions rely on hyperconverged infrastructure to integrate computing, networking and storage technologies within a compact footprint . A typical micro data center also incorporates physical infrastructure (including racks), fire suppression, power, cooling and remote management capabilities. In effect, the micro data center represents a sweet spot between traditional IT closets and larger modular data centers -- giving organizations the ability to deploy professional, powerful IT resources practically anywhere .

Standardized Deployments Across the Country

Having robust IT resources at the network edge helps to improve reliability and reduce latency, both of which are becoming more and more important as analytics programs require that data from IoT deployments be processed in real time .

"There's always been edge computing," says Taylor. "What's new is the need to process hundreds of thousands of data points for analytics at once."

Standardization, redundant deployment and remote management are also attractive features, especially for large organizations that may need to deploy tens, hundreds or even thousands of micro data centers. "We spoke to customers who said, 'I've got to roll out and install 3,500 of these around the country,'" says Humphrey. "And many of these companies don't have IT staff at all of these sites." To address this scenario, APC designed standardized, plug-and-play micro data centers that can be rolled out seamlessly. Additionally, remote management capabilities allow central IT departments to monitor and troubleshoot the edge infrastructure without costly and time-intensive site visits.

In part because micro data centers operate in far-flung environments, security is of paramount concern. The self-contained nature of micro data centers ensures that only authorized personnel will have access to infrastructure equipment , and security tools such as video surveillance provide organizations with forensic evidence in the event that someone attempts to infiltrate the infrastructure.

How Micro Data Centers Can Help in Retail, Healthcare

Micro data centers make business sense for any organization that needs secure IT infrastructure at the network edge. But the solution is particularly appealing to organizations in fields such as retail, healthcare and finance , where IT environments are widely distributed and processing speeds are often a priority.

In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing .

"It will be leading-edge companies driving micro data center adoption, but that doesn't necessarily mean they'll be technology companies," says Taylor. "A micro data center can power real-time analytics for inventory control and dynamic pricing in a supermarket."

In healthcare, digital transformation is beginning to touch processes and systems ranging from medication carts to patient records, and data often needs to be available locally; for example, in case of a data center outage during surgery. In finance, the real-time transmission of data can have immediate and significant financial consequences. And in both of these fields, regulations governing data privacy make the monitoring and security features of micro data centers even more important.

Micro data centers also have enormous potential to power smart city initiatives and to give energy companies a cost-effective way of deploying resources in remote locations , among other use cases.

"The proliferation of edge computing will be greater than anything we've seen in the past," Taylor says. "I almost can't think of a field where this won't matter."

Learn more about how solutions and services from CDW and APC can help your organization overcome its data center challenges.

Micro Data Centers Versus IT Closets

Think the micro data center is just a glorified update on the traditional IT closet? Think again.

"There are demonstrable differences," says Bruce A. Taylor, executive vice president at Datacenter Dynamics. "With micro data centers, there's a tremendous amount of computing capacity in a very small, contained space, and we just didn't have that capability previously ."

APC identifies three key differences between IT closets and micro data centers:

Difference #1: Uptime Expectations. APC notes that, of the nearly 3 million IT closets in the U.S., over 70 percent report outages directly related to human error. In an unprotected IT closet, problems can result from something as preventable as cleaning staff unwittingly disconnecting a cable. Micro data centers, by contrast, utilize remote monitoring, video surveillance and sensors to reduce downtime related to human error.

Difference #2: Cooling Configurations. The cooling of IT wiring closets is often approached both reactively and haphazardly, resulting in premature equipment failure. Micro data centers are specifically designed to assure cooling compatibility with anticipated loads.

Difference #3: Power Infrastructure. Unlike many IT closets, micro data centers incorporate uninterruptible power supplies, ensuring that infrastructure equipment has the power it needs to help avoid downtime.

[Nov 13, 2018] GridFTP : User s Guide

Notable quotes:
"... file:///path/to/my/file ..."
"... gsiftp://hostname/path/to/remote/file ..."
"... third party transfer ..."
toolkit.globus.org

Table of Contents

1. Introduction
2. Usage scenarios
2.1. Basic procedure for using GridFTP (globus-url-copy)
2.2. Accessing data in...
3. Command line tools
4. Graphical user interfaces
4.1. Globus GridFTP GUI
4.2. UberFTP
5. Security Considerations
5.1. Two ways to configure your server
5.2. New authentication options
5.3. Firewall requirements
6. Troubleshooting
6.1. Establish control channel connection
6.2. Try running globus-url-copy
6.3. If your server starts...
7. Usage statistics collection by the Globus Alliance
1. Introduction

The GridFTP User's Guide provides general end user-oriented information.

2. Usage scenarios

2.1. Basic procedure for using GridFTP (globus-url-copy)

If you just want the "rules of thumb" on getting started (without all the details), the following options using globus-url-copy will normally give acceptable performance:
globus-url-copy -vb -tcp-bs 2097152 -p 4 source_url destination_url
The source/destination URLs will normally be one of the following:

 file:///path/to/my/file (a file on the local file system where the client runs)
 gsiftp://hostname/path/to/remote/file (a file on a remote GridFTP server)

2.1.1. Putting files

One of the most basic tasks in GridFTP is to "put" files, i.e., moving a file from your file system to the server. So for example, if you want to move the file /tmp/foo from a file system accessible to the host on which you are running your client to a file name /tmp/bar on a host named remote.machine.my.edu running a GridFTP server, you would use this command:
globus-url-copy -vb -tcp-bs 2097152 -p 4 file:///tmp/foo gsiftp://remote.machine.my.edu/tmp/bar
[Note] Note
In theory, remote.machine.my.edu could be the same host as the one on which you are running your client, but that is normally only done in testing situations.
2.1.2. Getting files

A get, i.e., moving a file from a server to your file system, would just reverse the source and destination URLs:
[Tip] Tip
Remember file: always refers to your file system.
globus-url-copy -vb -tcp-bs 2097152 -p 4 gsiftp://remote.machine.my.edu/tmp/bar file:///tmp/foo
2.1.3. Third party transfers

Finally, if you want to move a file between two GridFTP servers (a third party transfer), both URLs would use gsiftp: as the protocol:
globus-url-copy -vb -tcp-bs 2097152 -p 4 gsiftp://other.machine.my.edu/tmp/foo gsiftp://remote.machine.my.edu/tmp/bar
2.1.4. For more information

If you want more information and details on URLs and the command line options, the Key Concepts Guide gives basic definitions and an overview of the GridFTP protocol as well as our implementation of it.

2.2. Accessing data in...

2.2.1. Accessing data in a non-POSIX file data source that has a POSIX interface

If you want to access data in a non-POSIX file data source that has a POSIX interface, the standard server will do just fine. Just make sure it is really POSIX-like (out of order writes, contiguous byte writes, etc.).

2.2.2. Accessing data in HPSS

The following information is helpful if you want to use GridFTP to access data in HPSS. Architecturally, the Globus GridFTP server can be divided into 3 modules: the GridFTP protocol module, the data transform module, and the Data Storage Interface (DSI). In the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
[Note] Note
This architecture does NOT apply to the WU-FTPD implementation (GT3.2.1 and lower).
2.2.2.1. GridFTP Protocol Module
The GridFTP protocol module is the module that reads and writes to the network and implements the GridFTP protocol. This module should not need to be modified since to do so would make the server non-protocol compliant, and unable to communicate with other servers.
2.2.2.2. Data Transform Functionality
The data transform functionality is invoked by using the ERET (extended retrieve) and ESTO (extended store) commands. It is seldom used and bears careful consideration before it is implemented, but in the right circumstances can be very useful. In theory, any computation could be invoked this way, but it was primarily intended for cases where some simple pre-processing (such as a partial get or sub-sampling) can greatly reduce the network load. The disadvantage to this is that you remove any real option for planning, brokering, etc., and any significant computation could adversely affect the data transfer performance. Note that the client must also support the ESTO/ERET functionality as well.
2.2.2.3. Data Storage Interface (DSI) / Data Transform module
The Data Storage Interface (DSI) / Data Transform module knows how to read and write to the "local" storage system and can optionally transform the data. We put local in quotes because in a complicated storage system, the storage may not be directly attached, but for performance reasons, it should be relatively close (for instance on the same LAN). The interface consists of functions to be implemented such as send (get), receive (put), command (simple commands that simply succeed or fail like mkdir), etc.. Once these functions have been implemented for a specific storage system, a client should not need to know or care what is actually providing the data. The server can either be configured specifically with a specific DSI, i.e., it knows how to interact with a single class of storage system, or one particularly useful function for the ESTO/ERET functionality mentioned above is to load and configure a DSI on the fly.
2.2.2.4. HPSS info
Last Update: August 2005 Working with Los Alamos National Laboratory and the High Performance Storage System (HPSS) collaboration ( http://www.hpss-collaboration.org ), we have written a Data Storage Interface (DSI) for read/write access to HPSS. This DSI would allow an existing application that uses a GridFTP compliant client to utilize an HPSS data resources. This DSI is currently in testing. Due to changes in the HPSS security mechanisms, it requires HPSS 6.2 or later, which is due to be released in Q4 2005. Distribution for the DSI has not been worked out yet, but it will *probably* be available from both Globus and the HPSS collaboration. While this code will be open source, it requires underlying HPSS libraries which are NOT open source (proprietary).
[Note] Note
This is a purely server side change, the client does not know what DSI is running, so only a site that is already running HPSS and wants to allow GridFTP access needs to worry about access to these proprietary libraries.
2.2.3. Accessing data in SRB

The following information is helpful if you want to use GridFTP to access data in SRB. Architecturally, the Globus GridFTP server can be divided into 3 modules: the GridFTP protocol module, the data transform module, and the Data Storage Interface (DSI). In the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
[Note] Note
This architecture does NOT apply to the WU-FTPD implementation (GT3.2.1 and lower).
2.2.3.1. GridFTP Protocol Module
The GridFTP protocol module is the module that reads and writes to the network and implements the GridFTP protocol. This module should not need to be modified since to do so would make the server non-protocol compliant, and unable to communicate with other servers.
2.2.3.2. Data Transform Functionality
The data transform functionality is invoked by using the ERET (extended retrieve) and ESTO (extended store) commands. It is seldom used and bears careful consideration before it is implemented, but in the right circumstances can be very useful. In theory, any computation could be invoked this way, but it was primarily intended for cases where some simple pre-processing (such as a partial get or sub-sampling) can greatly reduce the network load. The disadvantage to this is that you remove any real option for planning, brokering, etc., and any significant computation could adversely affect the data transfer performance. Note that the client must also support the ESTO/ERET functionality as well.
2.2.3.3. Data Storage Interface (DSI) / Data Transform module
The Data Storage Interface (DSI) / Data Transform module knows how to read and write to the "local" storage system and can optionally transform the data. We put local in quotes because in a complicated storage system, the storage may not be directly attached, but for performance reasons, it should be relatively close (for instance on the same LAN). The interface consists of functions to be implemented such as send (get), receive (put), command (simple commands that simply succeed or fail like mkdir), etc.. Once these functions have been implemented for a specific storage system, a client should not need to know or care what is actually providing the data. The server can either be configured specifically with a specific DSI, i.e., it knows how to interact with a single class of storage system, or one particularly useful function for the ESTO/ERET functionality mentioned above is to load and configure a DSI on the fly.
2.2.3.4. SRB info
Last Update: August 2005 Working with the SRB team at the San Diego Supercomputing Center, we have written a Data Storage Interface (DSI) for read/write access to data in the Storage Resource Broker (SRB) (http://www.npaci.edu/DICE/SRB). This DSI will enable GridFTP compliant clients to read and write data to an SRB server, similar in functionality to the sput/sget commands. This DSI is currently in testing and is not yet publicly available, but will be available from both the SRB web site (here) and the Globus web site (here). It will also be included in the next stable release of the toolkit. We are working on performance tests, but early results indicate that for wide area network (WAN) transfers, the performance is comparable. When might you want to use this functionality: 2.2.4. Accessing data in some other non-POSIX data source The following information is helpful If you want to use GridFTP to access data in a non-POSIX data source. Architecturally, the Globus GridFTP server can be divided into 3 modules: In the GT4.0.x implementation, the data transform module and the DSI have been merged, although we plan to have separate, chainable, data transform modules in the future.
[Note] Note
This architecture does NOT apply to the WU-FTPD implementation (GT3.2.1 and lower).
2.2.4.1. GridFTP Protocol Module
The GridFTP protocol module is the module that reads and writes to the network and implements the GridFTP protocol. This module should not need to be modified since to do so would make the server non-protocol compliant, and unable to communicate with other servers.
2.2.4.2. Data Transform Functionality
The data transform functionality is invoked by using the ERET (extended retrieve) and ESTO (extended store) commands. It is seldom used and bears careful consideration before it is implemented, but in the right circumstances can be very useful. In theory, any computation could be invoked this way, but it was primarily intended for cases where some simple pre-processing (such as a partial get or sub-sampling) can greatly reduce the network load. The disadvantage to this is that you remove any real option for planning, brokering, etc., and any significant computation could adversely affect the data transfer performance. Note that the client must also support the ESTO/ERET functionality as well.
2.2.4.3. Data Storage Interface (DSI) / Data Transform module

The Data Storage Interface (DSI) / Data Transform module knows how to read and write to the "local" storage system and can optionally transform the data. We put local in quotes because in a complicated storage system, the storage may not be directly attached, but for performance reasons, it should be relatively close (for instance on the same LAN).

The interface consists of functions to be implemented such as send (get), receive (put), command (simple commands that simply succeed or fail like mkdir), etc.

Once these functions have been implemented for a specific storage system, a client should not need to know or care what is actually providing the data. The server can either be configured specifically with a specific DSI, i.e., it knows how to interact with a single class of storage system, or one particularly useful function for the ESTO/ERET functionality mentioned above is to load and configure a DSI on the fly.

3. Command line tools

Please see the GridFTP Command Reference .

[Nov 12, 2018] Edge Computing vs. Cloud Computing: What's the Difference? by Andy Patrizio

"... Download the authoritative guide: Cloud Computing 2018: Using the Cloud to Transform Your Business ..."
Notable quotes:
"... Download the authoritative guide: Cloud Computing 2018: Using the Cloud to Transform Your Business ..."
"... Edge computing is a term you are going to hear more of in the coming years because it precedes another term you will be hearing a lot, the Internet of Things (IoT). You see, the formally adopted definition of edge computing is a form of technology that is necessary to make the IoT work. ..."
"... Tech research firm IDC defines edge computing is a "mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet." ..."
Jan 23, 2018 | www.datamation.com
Download the authoritative guide: Cloud Computing 2018: Using the Cloud to Transform Your Business

The term cloud computing is now as firmly lodged in our technical lexicon as email and Internet, and the concept has taken firm hold in business as well. By 2020, Gartner estimates that a "no cloud" policy will be as rare in business as a "no Internet" policy. Which is to say no one who wants to stay in business will be without one.

You are likely hearing a new term now, edge computing . One of the problems with technology is terms tend to come before the definition. Technologists (and the press, let's be honest) tend to throw a word around before it is well-defined, and in that vacuum come a variety of guessed definitions, of varying accuracy.

Edge computing is a term you are going to hear more of in the coming years because it precedes another term you will be hearing a lot, the Internet of Things (IoT). You see, the formally adopted definition of edge computing is a form of technology that is necessary to make the IoT work.

Tech research firm IDC defines edge computing as a "mesh network of micro data centers that process or store critical data locally and push all received data to a central data center or cloud storage repository, in a footprint of less than 100 square feet."

It is typically used in IoT use cases, where edge devices collect data from IoT devices and do the processing there, or send it back to a data center or the cloud for processing. Edge computing takes some of the load off the central data center, reducing or even eliminating the processing work at the central location.

IoT Explosion in the Cloud Era

To understand the need for edge computing you must understand the explosive growth in IoT in the coming years, and it is coming on big. There have been a number of estimates of the growth in devices, and while they all vary, they are all in the billions of devices.

This is taking place in a number of areas, most notably cars and industrial equipment. Cars are becoming increasingly more computerized and more intelligent. Gone are the days when the "Check engine" warning light came on and you had to guess what was wrong. Now it tells you which component is failing.

The industrial sector is a broad one and includes sensors, RFID, industrial robotics, 3D printing, condition monitoring, smart meters, guidance, and more. This sector is sometimes called the Industrial Internet of Things (IIoT) and the overall market is expected to grow from $93.9 billion in 2014 to $151.01 billion by 2020.

All of these sensors are taking in data but they are not processing it. Your car does some of the processing of sensor data but much of it has to be sent in to a data center for computation, monitoring and logging.

The problem is that this would overload networks and data centers. Imagine the millions of cars on the road sending in data to data centers around the country. The 4G network would be overwhelmed, as would the data centers. And if you are in California and the car maker's data center is in Texas, that's a long round trip.

[Nov 09, 2018] Cloud-hosted data must be accessed by users over the existing WAN, which creates performance issues due to bandwidth and latency constraints

Notable quotes:
"... Congestion problems lead to miserable performance. We have one WAN pipe, typically 1.5 Mbps to 10 MBps ..."
Nov 09, 2018 | www.eiseverywhere.com

However, cloud-hosted information assets must still be accessed by users over existing WAN infrastructures, where there are performance issues due to bandwidth and latency constraints.

THE EXTREMELY UNFUNNY PART - UP TO 20x SLOWER

Public/private cloud: thousands of companies, millions of users, varied bandwidth.

♦ Per-unit provisioning costs do not decrease much with size after, say, 100 units.

> Cloud data centers are potentially "far away"

♦ Cloud infrastructure supports many enterprises

♦ Large scale drives lower per-unit cost for data center
services

> All employees will be "remote" from their data

♦ Even single-location companies will be remote from their data

♦ HQ employees previously local to servers, but not with Cloud model

> Lots of data needs to be sent over limited WAN bandwidth

Congestion problems lead to miserable performance. We have one WAN pipe, typically 1.5 Mbps to 10 Mbps.
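To make the constraint concrete, here is a rough back-of-the-envelope calculation (the file size and link speed are illustrative):

 # Time to push 100 GB over a fully saturated 10 Mbps WAN pipe, ignoring latency,
 # congestion, and protocol overhead (all of which make the real figure worse).
 size_gb=100; link_mbps=10
 seconds=$(( size_gb * 8 * 1000 / link_mbps ))
 echo "$seconds seconds, roughly $(( seconds / 3600 )) hours"   # 80000 seconds, roughly 22 hours

This is why shifting bulk synchronization to the night window, or deduplicating it as described below, matters so much.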

> Disk-based deduplication technology

♦ Identify redundant data at the byte level, not application (e.g., file) level

♦ Use disks to store vast dictionaries of byte sequences for long periods of time

♦ Use symbols to transfer repetitive sequences of byte-level raw data

♦ Only deduplicated data stored on disk
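A minimal sketch of the deduplication idea above, using fixed-size 64 KB chunks rather than the byte-level, variable-length sequences a real product would use; the file name and store path are illustrative:

 mkdir -p /var/dedup/store
 rm -f bigfile.manifest
 split -b 65536 -d bigfile chunk.                  # cut the file into 64 KB chunks
 for c in chunk.*; do
     h=$(sha256sum "$c" | cut -d' ' -f1)           # fingerprint each chunk
     [ -f "/var/dedup/store/$h" ] || cp "$c" "/var/dedup/store/$h"   # store only unseen chunks
     echo "$h" >> bigfile.manifest                 # the manifest lists chunks in order
 done
 rm -f chunk.*
 # Rebuild the file later: xargs -I{} cat "/var/dedup/store/{}" < bigfile.manifest > bigfile.restored

Only new chunks ever need to be written to the store (or sent across the WAN); repeated content costs just one hash entry in the manifest.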

[Nov 09, 2018] Troubleshoot WAN Performance Issues SD Wan Experts by Steve Garson

Feb 08, 2013 | www.sd-wan-experts.com

Troubleshooting MPLS Networks

How should you troubleshoot WAN performance issues? Your MPLS or VPLS network is slow, and your clients in field offices are complaining about WAN performance. Your network should be performing better and you can't figure out what the problem is. You can contact SD-WAN-Experts to have their engineers solve your problem, but you want to try to solve the problem yourself first.

  1. The first thing to check, seems trivial, but you need to confirm that the ports on your router and switch ports are configured for the same speed and duplex. Log into your switches and check the logs for mismatches of speed or duplex. Auto-negotiation sometimes does not work properly, so a 10M port connected to a 100M port is mismatched. Or you might have a half-duplex port connected to a full-duplex port. Don't assume that a 10/100/1000 port is auto-negotiating correctly!
  2. Is your WAN performance problem consistent? Does it occur at roughly the same time of day? Or is it completely random? If you don't have the monitoring tools to measure this, you are at a big disadvantage in resolving the issues on your own.
  3. Do you have Class of Service configured on your WAN? Do you have DSCP configured on your LAN? What is the mapping of your DSCP values to CoS?
  4. What kind of applications are traversing your WAN? Are there specific apps that work better than others?
  5. Have your reviewed bandwidth utilization on your carrier's web portal to determine if you are saturating the MPLS port of any locations? Even brief peaks will be enough to generate complaints. Large files, such as CAD drawings, can completely saturate a WAN link.
  6. Are you backing up or synchronizing data over the WAN? Have you confirmed 100% that this work completes before the work day begins? (If it does not, consider shifting it to a throttled night-time window; see the sketch after this list.)
  7. Might your routing be taking multiple paths and not the most direct path? Look at your routing tables.
  8. Next, you want to see long term trend statistics. This means monitoring the SNMP streams from all your routers, using tools such as MRTG, NTOP or Cacti. A two week sampling should provide a very good picture of what is happening on your network to help troubleshoot your WAN.
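If item 6 turns up backup or replication jobs that spill into business hours, one common fix is to shift them into a night window and cap their bandwidth. A minimal sketch, assuming a cron-based rsync job; the paths, target host, user, and bandwidth cap are illustrative, not a prescription:

 # /etc/cron.d/night-sync -- start replication at 01:00 and throttle it so that, even if it
 # overruns into the morning, it cannot saturate the WAN link.
 0 1 * * *  backup  rsync -a --partial --bwlimit=512 /srv/data/ replica.example.com:/srv/data/

The --bwlimit value is in KB/s, so 512 keeps the transfer at roughly 4 Mbps, leaving headroom on a 10 Mbps pipe.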

NTOP allows you to see, in real time, which hosts, protocols, and conversations are consuming your network bandwidth.

MRTG (Multi-Router Traffic Grapher) provides easy to understand graphs of your network bandwidth utilization.


Cacti requires a MySQL database. It is a complete network graphing solution designed to harness the power of RRDTool 's data storage and graphing functionality. Cacti provides a fast poller, advanced graph templating, multiple data acquisition methods, and user management features out of the box. All of this is wrapped in an intuitive, easy to use interface that makes sense for LAN-sized installations up to complex networks with hundreds of devices.

Both NTOP and MRTG are freeware applications that help troubleshoot your WAN and run on the freeware versions of Linux. As a result, they can be installed on almost any desktop computer that has out-lived its value as a Windows desktop machine. If you are skilled with Linux and networking, and you have the time, you can install this monitoring system on your own. You will need to get your carrier to provide read-only access to your router SNMP traffic.
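If you want a quick sanity check before standing up a full MRTG or Cacti installation, the net-snmp command line tools can sample an interface counter directly. A rough sketch; the router address, community string, and interface index are assumptions for illustration, and 32-bit counter wrap is ignored for brevity:

 ROUTER=192.0.2.1; COMMUNITY=public; IFINDEX=2
 a=$(snmpget -v2c -c "$COMMUNITY" -Oqv "$ROUTER" IF-MIB::ifInOctets.$IFINDEX)
 sleep 60
 b=$(snmpget -v2c -c "$COMMUNITY" -Oqv "$ROUTER" IF-MIB::ifInOctets.$IFINDEX)
 echo "inbound: $(( (b - a) * 8 / 60 )) bits/sec"   # counter delta over one minute

If the reported rate sits near the provisioned speed of the MPLS port at the times users complain, saturation rather than latency is the likely culprit.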

But you might find it more cost effective to have the engineers at SD-WAN-Experts do the work for you. All you need to do is provide an available machine with a Linux install (Ubuntu, CentOS, RedHat, etc) with remote access via a VPN. Our engineers will then download all the software remotely, install and configure the machine. When we are done with the monitoring, beside understanding how to solve your problem (and solving it!) you will have your own network monitoring system installed for your use on a daily basis. We'll teach you how to use it, which is quite simple using the web based tools, so you can view it from any machine on your network.

If you need assistance in troubleshooting your wide area network, contact SD-WAN-Experts today !

You might also find these troubleshooting tips of interest;

Troubleshooting MPLS Network Performance Issues

Packet Loss and How It Affects Performance

Troubleshooting VPLS and Ethernet Tunnels over MPLS

[Nov 09, 2018] Storage in private clouds

Nov 09, 2018 | www.redhat.com

Storage in private clouds

Storage is one of the most popular uses of cloud computing, particularly for consumers. The user-friendly design of service-based companies has helped make "cloud" a pretty normal term -- even reaching meme status in 2016.

However, cloud storage means something very different to businesses. Big data and the Internet of Things (IoT) have made it difficult to appraise the value of data until long after it's originally stored -- when finding that piece of data becomes the key to revealing valuable business insights or unlocking an application's new feature. Even after enterprises decide where to store their data in the cloud (on-premise, off-premise, public, or private), they still have to decide how they're going to store it. What good is data that can't be found?

It's common to store data in the cloud using software-defined storage . Software-defined storage decouples storage software from hardware so you can abstract and consolidate storage capacity in a cloud. It allows you to scale beyond whatever individual hardware components your cloud is built on.

Two of the more common software-defined storage solutions include Ceph for structured data and Gluster for unstructured data. Ceph is a massively scalable, programmable storage system that works well with clouds -- particularly those deployed using OpenStack ® -- because of its ability to unify object, block, and file storage into 1 pool of resources. Gluster is designed to handle the requirements of traditional file storage and is particularly adept at provisioning and managing elastic storage for container-based applications.
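As a small, hedged illustration of working with such a pool from the command line (the pool name, placement-group count, and object name are assumptions, and a real cluster must already have monitors and OSDs running):

 ceph -s                                          # check overall cluster health
 ceph osd pool create backups 64                  # create a pool named 'backups' with 64 placement groups
 rados -p backups put hosts-snapshot /etc/hosts   # store a file as an object in the pool
 rados -p backups ls                              # list the objects now held in the pool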

[Nov 09, 2018] Cloud Computing vs Edge Computing Which Will Prevail

Notable quotes:
"... The recent widespread of edge computing in some 5G showcases, like the major sports events, has generated the ongoing discussion about the possibility of edge computing to replace cloud computing. ..."
"... For instance, Satya Nadella, the CEO of Microsoft, announced in Microsoft Build 2017 that the company will focus its strategy on edge computing. Indeed, edge computing will be the key for the success of smart home and driverless vehicles ..."
"... the edge will be the first to process and store the data generated by user devices. This will reduce the latency for the data to travel to the cloud. In other words, the edge optimizes the efficiency for the cloud. ..."
Nov 09, 2018 | www.lannerinc.com

The recent widespread of edge computing in some 5G showcases, like the major sports events, has generated the ongoing discussion about the possibility of edge computing to replace cloud computing.

In fact, there have been announcements from global tech leaders like Nokia and Huawei demonstrating increased efforts and resources in developing edge computing.

For instance, Satya Nadella, the CEO of Microsoft, announced at Microsoft Build 2017 that the company will focus its strategy on edge computing. Indeed, edge computing will be key to the success of smart homes and driverless vehicles.

... ... ...

Cloud or edge, which will lead the future?

The answer to that question is "Cloud – Edge Mixing". The cloud and the edge will complement each other to offer the real IoT experience. For instance, while the cloud coordinates all the technology and offers SaaS to users, the edge will be the first to process and store the data generated by user devices. This will reduce the latency for the data to travel to the cloud. In other words, the edge optimizes the efficiency for the cloud.

It is strongly suggested to implement open architecture white-box servers for both cloud and edge, to minimize the latency for cloud-edge synchronization and optimize the compatibility between the two. For example, Lanner Electronics offers a wide range of Intel x86 white box appliances for data centers and edge uCPE/vCPE.

http://www.lannerinc.com/telecom-datacenter-appliances/vcpe/ucpe-platforms/

[Nov 09, 2018] What is Hybrid Cloud Computing

Nov 09, 2018 | www.dummies.com

The hybrid cloud

A hybrid cloud is a combination of a private cloud combined with the use of public cloud services where one or several touch points exist between the environments. The goal is to combine services and data from a variety of cloud models to create a unified, automated, and well-managed computing environment.

Combining public services with private clouds and the data center as a hybrid is the new definition of corporate computing. Not all companies that use some public and some private cloud services have a hybrid cloud. Rather, a hybrid cloud is an environment where the private and public services are used together to create value.

A cloud is hybrid

A cloud is not hybrid

[Nov 09, 2018] Why Micro Data Centers Deliver Good Things in Small Packages by Calvin Hennick

Notable quotes:
"... "There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture." ..."
"... In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing . ..."
Nov 09, 2018 | solutions.cdw.com

Enterprises are deploying self-contained micro data centers to power computing at the network edge.

The location for data processing has changed significantly throughout the history of computing. During the mainframe era, data was processed centrally, but client/server architectures later decentralized computing. In recent years, cloud computing centralized many processing workloads, but digital transformation and the Internet of Things are poised to move computing to new places, such as the network edge .

"There's a big transformation happening," says Thomas Humphrey, segment director for edge computing at APC . "Technologies like IoT have started to require that some local computing and storage happen out in that distributed IT architecture."

For example, some IoT systems require processing of data at remote locations rather than a centralized data center , such as at a retail store instead of a corporate headquarters.

To meet regulatory requirements and business needs, IoT solutions often need low latency, high bandwidth, robust security and superior reliability . To meet these demands, many organizations are deploying micro data centers: self-contained solutions that provide not only essential infrastructure, but also physical security, power and cooling and remote management capabilities.

"Digital transformation happens at the network edge, and edge computing will happen inside micro data centers ," says Bruce A. Taylor, executive vice president at Datacenter Dynamics . "This will probably be one of the fastest growing segments -- if not the fastest growing segment -- in data centers for the foreseeable future."

What Is a Micro Data Center?

Delivering the IT capabilities needed for edge computing represents a significant challenge for many organizations, which need manageable and secure solutions that can be deployed easily, consistently and close to the source of computing . Vendors such as APC have begun to create comprehensive solutions that provide these necessary capabilities in a single, standardized package.

"From our perspective at APC, the micro data center was a response to what was happening in the market," says Humphrey. "We were seeing that enterprises needed more robust solutions at the edge."

Most micro data center solutions rely on hyperconverged infrastructure to integrate computing, networking and storage technologies within a compact footprint . A typical micro data center also incorporates physical infrastructure (including racks), fire suppression, power, cooling and remote management capabilities. In effect, the micro data center represents a sweet spot between traditional IT closets and larger modular data centers -- giving organizations the ability to deploy professional, powerful IT resources practically anywhere .

Standardized Deployments Across the Country

Having robust IT resources at the network edge helps to improve reliability and reduce latency, both of which are becoming more and more important as analytics programs require that data from IoT deployments be processed in real time .

"There's always been edge computing," says Taylor. "What's new is the need to process hundreds of thousands of data points for analytics at once."

Standardization, redundant deployment and remote management are also attractive features, especially for large organizations that may need to deploy tens, hundreds or even thousands of micro data centers. "We spoke to customers who said, 'I've got to roll out and install 3,500 of these around the country,'" says Humphrey. "And many of these companies don't have IT staff at all of these sites." To address this scenario, APC designed standardized, plug-and-play micro data centers that can be rolled out seamlessly. Additionally, remote management capabilities allow central IT departments to monitor and troubleshoot the edge infrastructure without costly and time-intensive site visits.

In part because micro data centers operate in far-flung environments, security is of paramount concern. The self-contained nature of micro data centers ensures that only authorized personnel will have access to infrastructure equipment , and security tools such as video surveillance provide organizations with forensic evidence in the event that someone attempts to infiltrate the infrastructure.

How Micro Data Centers Can Help in Retail, Healthcare

Micro data centers make business sense for any organization that needs secure IT infrastructure at the network edge. But the solution is particularly appealing to organizations in fields such as retail, healthcare and finance, where IT environments are widely distributed and processing speeds are often a priority.

In retail, for example, edge computing will become more important as stores find success with IoT technologies such as mobile beacons, interactive mirrors and real-time tools for customer experience, behavior monitoring and marketing.

"It will be leading-edge companies driving micro data center adoption, but that doesn't necessarily mean they'll be technology companies," says Taylor. "A micro data center can power real-time analytics for inventory control and dynamic pricing in a supermarket."

In healthcare, digital transformation is beginning to touch processes and systems ranging from medication carts to patient records, and data often needs to be available locally; for example, in case of a data center outage during surgery. In finance, the real-time transmission of data can have immediate and significant financial consequences. And in both of these fields, regulations governing data privacy make the monitoring and security features of micro data centers even more important.

Micro data centers also have enormous potential to power smart city initiatives and to give energy companies a cost-effective way of deploying resources in remote locations, among other use cases.

"The proliferation of edge computing will be greater than anything we've seen in the past," Taylor says. "I almost can't think of a field where this won't matter."


Micro Data Centers Versus IT Closets

Think the micro data center is just a glorified update on the traditional IT closet? Think again.

"There are demonstrable differences," says Bruce A. Taylor, executive vice president at Datacenter Dynamics. "With micro data centers, there's a tremendous amount of computing capacity in a very small, contained space, and we just didn't have that capability previously ."

APC identifies three key differences between IT closets and micro data centers:

  1. Difference #1: Uptime Expectations. APC notes that, of the nearly 3 million IT closets in the U.S., over 70 percent report outages directly related to human error. In an unprotected IT closet, problems can result from something as preventable as cleaning staff unwittingly disconnecting a cable. Micro data centers, by contrast, utilize remote monitoring, video surveillance and sensors to reduce downtime related to human error.
  2. Difference #2: Cooling Configurations. The cooling of IT wiring closets is often approached both reactively and haphazardly, resulting in premature equipment failure. Micro data centers are specifically designed to assure cooling compatibility with anticipated loads.
  3. Difference #3: Power Infrastructure. Unlike many IT closets, micro data centers incorporate uninterruptible power supplies, ensuring that infrastructure equipment has the power it needs to help avoid downtime.

Calvin Hennick is a freelance journalist who specializes in business and technology writing. He is a contributor to the CDW family of technology magazines.

[Nov 09, 2018] Solving Office 365 and SaaS Performance Issues with SD-WAN

Notable quotes:
"... most of the Office365 deployments face network related problems - typically manifesting as screen freezes. Limited WAN optimization capability further complicates the problems for most SaaS applications. ..."
"... Why enterprises overlook the importance of strategically placing cloud gateways ..."
Nov 09, 2018 | www.brighttalk.com

About this webinar Major research highlights that most of the Office365 deployments face network related problems - typically manifesting as screen freezes. Limited WAN optimization capability further complicates the problems for most SaaS applications. To compound the issue, different SaaS applications issue different guidelines for solving performance issues. We will investigate the major reasons for these problems.

SD-WAN provides an essential set of features that solves these networking issues related to Office 365 and SaaS applications. This session will cover the following major topics:

[Nov 09, 2018] Make sense of edge computing vs. cloud computing

Notable quotes:
"... We already know that computing at the edge pushes most of the data processing out to the edge of the network, close to the source of the data. Then it's a matter of dividing the processing between the edge and the centralized system, meaning a public cloud such as Amazon Web Services, Google Cloud, or Microsoft Azure. ..."
"... The goal is to process near the device the data that it needs quickly, such as to act on. There are hundreds of use cases where reaction time is the key value of the IoT system, and consistently sending the data back to a centralized cloud prevents that value from happening. ..."
Nov 09, 2018 | www.infoworld.com

The internet of things is real, and it's a real part of the cloud. A key challenge is how you can get data processed from so many devices. Cisco Systems predicts that cloud traffic is likely to rise nearly fourfold by 2020, increasing from 3.9 zettabytes (ZB) per year in 2015 (the latest full year for which data is available) to 14.1ZB per year by 2020.

As a result, we could have the cloud computing perfect storm from the growth of IoT. After all, IoT is about processing device-generated data that is meaningful, and cloud computing is about using data from centralized computing and storage. Growth rates of both can easily become unmanageable.

So what do we do? The answer is something called "edge computing." We already know that computing at the edge pushes most of the data processing out to the edge of the network, close to the source of the data. Then it's a matter of dividing the processing between the edge and the centralized system, meaning a public cloud such as Amazon Web Services, Google Cloud, or Microsoft Azure.

That may sound like a client/server architecture, which also involved figuring out what to do at the client versus at the server. For IoT and any highly distributed applications, you've essentially got a client/network edge/server architecture going on, or -- if your devices can't do any processing themselves -- a network edge/server architecture.

The goal is to process near the device the data that it needs quickly, such as to act on. There are hundreds of use cases where reaction time is the key value of the IoT system, and consistently sending the data back to a centralized cloud prevents that value from happening.

You would still use the cloud for processing that is either not as time-sensitive or is not needed by the device, such as for big data analytics on data from all your devices.
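
To make the split concrete, here is a minimal, hypothetical sketch (not from the article): a small shell job on an edge server summarizes the day's raw sensor log locally and ships only the compact summary to the central cloud overnight, when the WAN link is idle. The host name, paths and log format are assumptions chosen for illustration.

    #!/bin/bash
    # Edge-side reduction: keep raw samples local, send only one average per
    # sensor per day to the central site, and do it in the off-peak window
    # (schedule from cron, e.g.:  30 2 * * *  /usr/local/bin/push-summary.sh).
    set -eu
    day=$(date -d yesterday +%F)
    raw=/var/edge/raw/${day}.csv          # lines: epoch,sensor_id,value
    summary=/var/edge/summary/${day}.csv

    # Compute one average per sensor for the day.
    awk -F, -v d="$day" '{ sum[$2] += $3; n[$2]++ }
        END { for (s in sum) printf "%s,%s,%.2f\n", d, s, sum[s]/n[s] }' \
        "$raw" > "$summary"

    # Upload only the small summary to the central collection point.
    rsync -a --partial "$summary" central.example.org:/ingest/site42/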

There's another dimension to this: edge computing and cloud computing are two very different things. One does not replace the other. But too many articles confuse IT pros by suggesting that edge computing will displace cloud computing. It's no more true than saying PCs would displace the datacenter.

It makes perfect sense to create purpose-built edge computing-based applications, such as an app that places data processing in a sensor to quickly process reactions to alarms. But you're not going to place your inventory-control data and applications at the edge -- moving all compute to the edge would result in a distributed, unsecured, and unmanageable mess.

All the public cloud providers have IoT strategies and technology stacks that include, or will include, edge computing. Edge and cloud computing can and do work well together, but edge computing is for purpose-built systems with special needs. Cloud computing is a more general-purpose platform that also can work with purpose-built systems in that old client/server model.

David S. Linthicum is a chief cloud strategy officer at Deloitte Consulting, and an internationally recognized industry expert and thought leader. His views are his own.

[Nov 08, 2018] GT 6.0 GridFTP

Notable quotes:
"... GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks ..."
Nov 08, 2018 | toolkit.globus.org

The open source Globus® Toolkit is a fundamental enabling technology for the "Grid," letting people share computing power, databases, and other tools securely online across corporate, institutional, and geographic boundaries without sacrificing local autonomy. The toolkit includes software services and libraries for resource monitoring, discovery, and management, plus security and file management. In addition to being a central part of science and engineering projects that total nearly a half-billion dollars internationally, the Globus Toolkit is a substrate on which leading IT companies are building significant commercial Grid products.

The toolkit includes software for security, information infrastructure, resource management, data management, communication, fault detection, and portability. It is packaged as a set of components that can be used either independently or together to develop applications. Every organization has unique modes of operation, and collaboration between multiple organizations is hindered by incompatibility of resources such as data archives, computers, and networks. The Globus Toolkit was conceived to remove obstacles that prevent seamless collaboration. Its core services, interfaces and protocols allow users to access remote resources as if they were located within their own machine room while simultaneously preserving local control over who can use resources and when.

The Globus Toolkit has grown through an open-source strategy similar to the Linux operating system's, and distinct from proprietary attempts at resource-sharing software. This encourages broader, more rapid adoption and leads to greater technical innovation, as the open-source community provides continual enhancements to the product.

Essential background is contained in the papers "Anatomy of the Grid" by Foster, Kesselman and Tuecke and "Physiology of the Grid" by Foster, Kesselman, Nick and Tuecke.

Acclaim for the Globus Toolkit

From version 1.0 in 1998 to the 2.0 release in 2002 and now the latest 4.0 version based on new open-standard Grid services, the Globus Toolkit has evolved rapidly into what The New York Times called "the de facto standard" for Grid computing. In 2002 the project earned a prestigious R&D 100 award, given by R&D Magazine in a ceremony where the Globus Toolkit was named "Most Promising New Technology" among the year's top 100 innovations. Other honors include project leaders Ian Foster of Argonne National Laboratory and the University of Chicago, Carl Kesselman of the University of Southern California's Information Sciences Institute (ISI), and Steve Tuecke of Argonne being named among 2003's top ten innovators by InfoWorld magazine, and a similar honor from MIT Technology Review, which named Globus Toolkit-based Grid computing one of "Ten Technologies That Will Change the World."

GridFTP is a high-performance, secure, reliable data transfer protocol optimized for high-bandwidth wide-area networks. The GridFTP protocol is based on FTP, the highly-popular Internet file transfer protocol. We have selected a set of protocol features and extensions defined already in IETF RFCs and added a few additional features to meet requirements from current data grid projects.
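
As a concrete illustration of those parallel-stream extensions (a sketch, not taken from the Globus documentation quoted above), the globus-url-copy client that ships with GridFTP can split a single transfer into several TCP streams and enlarge the TCP buffer, which is what makes it usable on high-latency WAN links. The host names and paths below are hypothetical.

    # Push a large archive over the WAN using 8 parallel TCP streams (-p 8),
    # a 16 MB TCP buffer in bytes (-tcp-bs) and verbose performance output (-vb).
    globus-url-copy -vb -p 8 -tcp-bs 16777216 \
        file:///data/nightly/archive.tar \
        gsiftp://backup.example.org/data/nightly/archive.tar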

The following guides are available for this component:

Data Management Key Concepts For important general concepts [ pdf ].
Admin Guide For system administrators and those installing, building and deploying GT. You should already have read the Installation Guide and Quickstart [ pdf ]
User's Guide Describes how end-users typically interact with this component. [ pdf ].
Developer's Guide Reference and usage scenarios for developers. [ pdf ].
Other information available for this component:
Release Notes What's new with the 6.0 release for this component. [ pdf ]
Public Interface Guide Information for all public interfaces (including APIs, commands, etc). Please note this is a subset of information in the Developer's Guide [ pdf ].
Quality Profile Information about test coverage reports, etc. [ pdf ].
Migrating Guide Information for migrating to this version if you were using a previous version of GT. [ pdf ]
All GridFTP Guides (PDF only) Includes all GridFTP guides except Public Interfaces (which is a subset of the Developer's Guide)

[Nov 08, 2018] globus-gridftp-server-control-6.2-1.el7.x86_64.rpm

Nov 08, 2018 | centos.pkgs.org
Package: globus-gridftp-server-control, version 6.2, architecture x86_64, repository EPEL Testing
Requires
Name Value
/sbin/ldconfig -
globus-xio-gsi-driver(x86-64) >= 2
globus-xio-pipe-driver(x86-64) >= 2
libc.so.6(GLIBC_2.14)(64bit) -
libglobus_common.so.0()(64bit) -
libglobus_common.so.0(GLOBUS_COMMON_14)(64bit) -
libglobus_gss_assist.so.3()(64bit) -
libglobus_gssapi_error.so.2()(64bit) -
libglobus_gssapi_gsi.so.4()(64bit) -
libglobus_gssapi_gsi.so.4(globus_gssapi_gsi)(64bit) -
libglobus_openssl_error.so.0()(64bit) -
libglobus_xio.so.0()(64bit) -
rtld(GNU_HASH) -
See Also
Package Description
globus-gridftp-server-control-devel-6.1-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Library Development Files
globus-gridftp-server-devel-12.5-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Development Files
globus-gridftp-server-progs-12.5-1.el7.x86_64.rpm Globus Toolkit - Globus GridFTP Server Programs
globus-gridmap-callout-error-2.5-1.el7.x86_64.rpm Globus Toolkit - Globus Gridmap Callout Errors
globus-gridmap-callout-error-devel-2.5-1.el7.x86_64.rpm Globus Toolkit - Globus Gridmap Callout Errors Development Files
globus-gridmap-callout-error-doc-2.5-1.el7.noarch.rpm Globus Toolkit - Globus Gridmap Callout Errors Documentation Files
globus-gridmap-eppn-callout-1.13-1.el7.x86_64.rpm Globus Toolkit - Globus gridmap ePPN callout
globus-gridmap-verify-myproxy-callout-2.9-1.el7.x86_64.rpm Globus Toolkit - Globus gridmap myproxy callout
globus-gsi-callback-5.13-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Callback Library
globus-gsi-callback-devel-5.13-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Callback Library Development Files
globus-gsi-callback-doc-5.13-1.el7.noarch.rpm Globus Toolkit - Globus GSI Callback Library Documentation Files
globus-gsi-cert-utils-9.16-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Cert Utils Library
globus-gsi-cert-utils-devel-9.16-1.el7.x86_64.rpm Globus Toolkit - Globus GSI Cert Utils Library Development Files
globus-gsi-cert-utils-doc-9.16-1.el7.noarch.rpm Globus Toolkit - Globus GSI Cert Utils Library Documentation Files
globus-gsi-cert-utils-progs-9.16-1.el7.noarch.rpm Globus Toolkit - Globus GSI Cert Utils Library Programs
Provides
Name Value
globus-gridftp-server-control = 6.1-1.el7
globus-gridftp-server-control(x86-64) = 6.1-1.el7
libglobus_gridftp_server_control.so.0()(64bit) -
Required By
Download
Type URL
Binary Package globus-gridftp-server-control-6.1-1.el7.x86_64.rpm
Source Package globus-gridftp-server-control-6.1-1.el7.src.rpm
Install Howto
  1. Download the latest epel-release rpm from
    http://dl.fedoraproject.org/pub/epel/7/x86_64/
    
  2. Install epel-release rpm:
    # rpm -Uvh epel-release*rpm
    
  3. Install globus-gridftp-server-control rpm package:
    # yum install globus-gridftp-server-control
    
Files
Path
/usr/lib64/libglobus_gridftp_server_control.so.0
/usr/lib64/libglobus_gridftp_server_control.so.0.6.1
/usr/share/doc/globus-gridftp-server-control-6.1/README
/usr/share/licenses/globus-gridftp-server-control-6.1/GLOBUS_LICENSE
Changelog
2018-04-07 - Mattias Ellert <[email protected]> - 6.1-1
- GT6 update: Don't error if acquire_cred fails when vhost env is set

[Nov 08, 2018] 9 Aspera Sync Alternatives Top Best Alternatives

Nov 08, 2018 | www.topbestalternatives.com

Aspera Sync is a high-performance, scalable, multi-directional asynchronous file replication and synchronization tool, purpose-built by Aspera to overcome the performance and scalability shortcomings of traditional synchronization tools such as Rsync. It can scale up and out for maximum-speed replication and synchronization over WANs, handling today's largest big-data file stores -- from millions of individual files to the largest file sizes. Notable capabilities include the FASP transport advantage, high performance, a smart replacement for Rsync, support for complex synchronization topologies, advanced file handling, and so on. Robust backup and recovery procedures protect business-critical data and systems so enterprises can quickly recover critical files, directories or an entire site in the event of a disaster. These procedures can be undermined, however, by slow transfer speeds between primary and backup sites, resulting in incomplete backups and extended recovery times. With FASP-powered transfers, replication fits within the small operational window, so you can meet your recovery point objective (RPO) and recovery time objective (RTO).

1. Syncthing -- Syncthing replaces proprietary sync and cloud services with something open, trustworthy and decentralized. Your data is your data alone, and you deserve to choose where it is stored, whether it is shared with some third party, and how it's transmitted over the Internet. Syncthing is a file-sharing application that lets you share documents between multiple devices in a convenient way. Its web-based Graphical User Interface (GUI) makes it possible to ...

[Nov 03, 2018] Technology is dominated by two types of people: those who understand what they don't manage, and those who manage what they don't understand – ARCHIBALD PUTT (PUTT'S LAW)

Notable quotes:
"... These C level guys see cloud services – applications, data, backup, service desk – as a great way to free up a blockage in how IT service is being delivered on premise. ..."
"... IMHO there is a big difference between management of IT and management of IT service. Rarely do you get people who can do both. ..."
Nov 03, 2018 | brummieruss.wordpress.com

...Cloud introduces a whole new ball game and will no doubt perpetuate Putt's Law forevermore. Why?

Well unless 100% of IT infrastructure goes up into the clouds (unlikely for any organization with a history; likely for a new organization (probably micro small) that starts up in the next few years) the 'art of IT management' will demand even more focus and understanding.

I always think a great acid test of Putt's Law is to look at one of these two aspects of IT management:

  1. Show me a simple process that you follow each day that delivers an aspect of IT service i.e. how to buy a piece of IT stuff, or a way to report a fault
  2. Show me how you manage a single entity on the network i.e. a file server, a PC, a network switch

Usually the answers (which will be different from people on the same team, in the same room, and from the same person on different days!) will give you an insight into Putt's Law.

Child's play for most of course, who are challenged with some real complex management situations such as data center virtualization projects, storage explosion control, edge device management, backend application upgrades, global messaging migrations and B2C identity integration. But of course if it's evident that they seem to be managing (simple things) without true understanding, one could argue 'how the hell can they be expected to manage what they understand with the complex things?' Fair point?

Of course many C-level people have an answer to Putt's Law. Move the problem to people who do understand what they manage: professionals who provide cloud versions of what the C-level person struggles to get delivered as a professional service in-house. These C-level guys see cloud services – applications, data, backup, service desk – as a great way to free up a blockage in how IT service is being delivered on premise. And they are right (and wrong).

... ... ...

(Quote attributed to Archibald Putt, author of Putt's Law and the Successful Technocrat: How to Win in the Information Age)

rowan says: March 9, 2012 at 9:03 am

IMHO there is a big difference between management of IT and management of IT service. Rarely do you get people who can do both. Understanding inventory, disk space, security etc. is one thing; but understanding the performance of apps and user impact is another ball game. Putt's Law is alive and well in my organisation. TGIF.

Rowan in Belfast.

stephen777 says: March 31, 2012 at 7:32 am

Rowan is right. I used to be an IT Manager but now my title is Service Delivery Manager. Why? Because we had a new CTO who changed how people saw what we did. I've been doing this new role for 5 years and I really do understand what I don't manage. LOL

Stephen777

[Oct 30, 2018] Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth.

Oct 30, 2018 | arstechnica.com

ControlledExperiments (Ars Centurion), replying to a comment by "I am not your friend", who wrote: "I just can't comprehend that price. Cloud has a rich future, but I didn't even know Red Hat had any presence there, let alone $35 billion worth."

Yeah the price is nuts, reminiscent of FB's WhatsApp purchase, and maybe their Instagram purchase.

If you actually look at the revenue Amazon gets from AWS or Microsoft from Azure, it's not that much, relatively speaking. For Microsoft, it's nothing compared to Windows and Office revenue, and I'm not sure where the growth is supposed to come from. It seems like most everyone who wants to be on the cloud is already there, and vulns like Spectre and Meltdown broke the sacred VM wall, so...

[Oct 27, 2018] We are effectively back to the centralized systems ("clouds" controlled by Amazon (CIA), Google (NSA)). The peripheral components of autonomous networked systems are akin to sensory devices with heavy computational lifting in the (local) regime controlled servers.

Oct 27, 2018 | www.moonofalabama.org

realist , Oct 27, 2018 10:27:04 AM | link

The first wave of the computer revolution created stand-alone systems. The current wave is their combination into new and much larger ones.
Posted by b on October 26, 2018 at 02:18 PM

Not strictly correct. It has come full circle in a sense. We started with standalone machines; then we had central mainframes with dumb terminals; then the first networks of these centralized servers (Xerox PARC's efforts being more forward-looking and an exception); then PCs; then the internet; and now (per design and not by merit, imo):

We are effectively back to the centralized systems ("clouds" controlled by Amazon (CIA), Google (NSA)). The peripheral components of autonomous networked systems are akin to sensory devices with heavy computational lifting in the (local) regime controlled servers.

---

By this, I mean moral codes no longer have any objective basis but are evermore easily re-wired by the .002 to suit their purposes
Posted by: donkeytale | Oct 27, 2018 6:32:11 AM | 69

I question the implied "helpless to resist reprogramming" notion of your statement. You mentioned God. Please note that God has endowed each and every one of us with a moral compass. You mentioned Jesus. Didn't he say something about "you generation of vipers"? I suggest you consider that ours is a morally degenerate generation and is getting precisely what it wants and deserves.

[Sep 16, 2017] Google Drive Faces Outage, Users Report

Sep 16, 2017 | tech.slashdot.org


Posted by msmash on Thursday September 07, 2017

Numerous Slashdot readers are reporting that they are facing issues accessing Google Drive, the productivity suite from the Mountain View-based company. Google's dashboard confirms that Drive is facing an outage.

Third-party web monitoring tool DownDetector also reports thousands of similar complaints from users. The company said, "Google Drive service has already been restored for some users, and we expect a resolution for all users in the near future. Please note this time frame is an estimate and may change. Google Drive is not loading files and results in failures for a subset of users."

[Jul 02, 2017] The Details About the CIA's Deal With Amazon by Frank Konkel

Jul 17, 2014 | www.theatlantic.com

The intelligence community is about to get the equivalent of an adrenaline shot to the chest. This summer, a $600 million computing cloud developed by Amazon Web Services for the Central Intelligence Agency over the past year will begin servicing all 17 agencies that make up the intelligence community. If the technology plays out as officials envision, it will usher in a new era of cooperation and coordination, allowing agencies to share information and services much more easily and avoid the kind of intelligence gaps that preceded the Sept. 11, 2001, terrorist attacks.

For the first time, agencies within the intelligence community will be able to order a variety of on-demand computing and analytic services from the CIA and National Security Agency. What's more, they'll only pay for what they use.

The vision was first outlined in the Intelligence Community Information Technology Enterprise plan championed by Director of National Intelligence James Clapper and IC Chief Information Officer Al Tarasiuk almost three years ago. Cloud computing is one of the core components of the strategy to help the IC discover, access and share critical information in an era of seemingly infinite data.

For the risk-averse intelligence community, the decision to go with a commercial cloud vendor is a radical departure from business as usual.

In 2011, while private companies were consolidating data centers in favor of the cloud and some civilian agencies began flirting with cloud variants like email as a service, a sometimes contentious debate among the intelligence community's leadership took place.

... ... ...

The government was spending more money on information technology within the IC than ever before. IT spending reached $8 billion in 2013, according to budget documents leaked by former NSA contractor Edward Snowden. The CIA and other agencies feasibly could have spent billions of dollars standing up their own cloud infrastructure without raising many eyebrows in Congress, but the decision to purchase a single commercial solution came down primarily to two factors.

"What we were really looking at was time to mission and innovation," the former intelligence official said. "The goal was, 'Can we act like a large enterprise in the corporate world and buy the thing that we don't have, can we catch up to the commercial cycle? Anybody can build a data center, but could we purchase something more?

"We decided we needed to buy innovation," the former intelligence official said.

A Groundbreaking Deal

... ... ...

The Amazon-built cloud will operate behind the IC's firewall, or more simply: It's a public cloud built on private premises.

Intelligence agencies will be able to host applications or order a variety of on-demand services like storage, computing and analytics. True to the National Institute of Standards and Technology definition of cloud computing, the IC cloud scales up or down to meet the need.

In that regard, customers will pay only for services they actually use, which is expected to generate massive savings for the IC.

"We see this as a tremendous opportunity to sharpen our focus and to be very efficient," Wolfe told an audience at AWS' annual nonprofit and government symposium in Washington. "We hope to get speed and scale out of the cloud, and a tremendous amount of efficiency in terms of folks traditionally using IT now using it in a cost-recovery way."

... ... ...

For several years there hasn't been even a close challenger to AWS. Gartner's 2014 quadrant shows that AWS captures 83 percent of the cloud computing infrastructure market.

In the combined cloud markets for infrastructure and platform services, hybrid and private clouds -- worth a collective $131 billion at the end of 2013 -- Amazon's revenue grew 67 percent in the first quarter of 2014, according to Gartner.

While the public sector hasn't been as quick to capitalize on cloud computing as the private sector, government spending on cloud technologies is beginning to jump.

Researchers at IDC estimate federal private cloud spending will reach $1.7 billion in 2014, and $7.7 billion by 2017. In other industries, software services are considered the leading cloud technology, but in the government that honor goes to infrastructure services, which IDC expects to reach $5.4 billion in 2017.

In addition to its $600 million deal with the CIA, Amazon Web Services also does business with NASA, the Food and Drug Administration and the Centers for Disease Control and Prevention. Most recently, the Obama Administration tapped AWS to host portions of HealthCare.gov.

[Jun 09, 2017] Amazon's S3 web-based storage service is experiencing widespread issues on Feb 28 2017

Jun 09, 2017 | techcrunch.com

Amazon's S3 web-based storage service is experiencing widespread issues, leading to service that's either partially or fully broken on websites, apps and devices upon which it relies. The AWS offering provides hosting for images for a lot of sites, and also hosts entire websites, and app backends including Nest.

The S3 outage is due to "high error rates with S3 in US-EAST-1," according to Amazon's AWS service health dashboard, which is where the company also says it's working on "remediating the issue," without initially revealing any further details.

Affected websites and services include Quora, newsletter provider Sailthru, Business Insider, Giphy, image hosting at a number of publisher websites, filesharing in Slack, and many more. Connected lightbulbs, thermostats and other IoT hardware is also being impacted, with many unable to control these devices as a result of the outage.

Amazon S3 is used by around 148,213 websites, and 121,761 unique domains, according to data tracked by SimilarTech, and its popularity as a content host concentrates specifically in the U.S. It's used by 0.8 percent of the top 1 million websites, which is actually quite a bit smaller than CloudFlare, which is used by 6.2 percent of the top 1 million websites globally – and yet it's still having this much of an effect.

Amazingly, even the status indicators on the AWS service status page rely on S3 for storage of its health marker graphics, hence why the site is still showing all services green despite obvious evidence to the contrary. Update (11:40 AM PT): AWS has fixed the issues with its own dashboard at least – it'll now accurately reflect service status as it continues to attempt to fix the problem.

[Apr 01, 2017] Amazon Web Services outage causes widespread internet problems

Apr 01, 2017 | www.cbsnews.com
Feb 28, 2017 6:03 PM EST NEW YORK -- Amazon's cloud-computing service, Amazon Web Services, experienced an outage in its eastern U.S. region Tuesday afternoon, causing unprecedented and widespread problems for thousands of websites and apps.

Amazon is the largest provider of cloud computing services in the U.S. Beginning around midday Tuesday on the East Coast, one region of its "S3" service based in Virginia began to experience what Amazon, on its service site, called "increased error rates."

In a statement, Amazon said as of 4 p.m. E.T. it was still experiencing "high error rates" that were "impacting various AWS services."

"We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue," the company said.

But less than an hour later, an update offered good news: "As of 1:49 PM PST, we are fully recovered for operations for adding new objects in S3, which was our last operation showing a high error rate. The Amazon S3 service is operating normally," the company said.

Amazon's Simple Storage Service, or S3, stores files and data for companies on remote servers. It's used for everything from building websites and apps to storing images, customer data and customer transactions.

"Anything you can think about storing in the most cost-effective way possible," is how Rich Mogull, CEO of data security firm Securosis, puts it.

Amazon has a strong track record of stability with its cloud computing service, CNET senior editor Dan Ackerman told CBS News.

"AWS... is known for having really good 'up time,'" he said, using industry language.

Over time, cloud computing has become a major part of Amazon's empire.

"Very few people host their own web servers anymore, it's all been outsourced to these big providers , and Amazon is one of the major ones," Ackerman said.

The problem Tuesday affected both "front-end" operations -- meaning the websites and apps that users see -- and back-end data processing that takes place out of sight. Some smaller online services, such as Trello, Scribd and IFTTT, appeared to be down for a while, although all have since recovered.

Some affected websites had fun with the crash, treating it like a snow day:

[Apr 01, 2017] After Amazon outage, HealthExpense worries about cloud lock-in by Maria Korolov

Notable quotes:
"... "From a sustainability and availability standpoint, we definitely need to look at our strategy to not be vendor specific, including with Amazon," said Lee. "That's something that we're aware of and are working towards." ..."
"... "Elastic load balances and other services make it easy to set up. However, it's a double-edged sword, because these types of services will also make it harder to be vendor-agnostic. When other cloud platform don't offer the same services, how do you wean yourself off of them?" ..."
"... Multi-year commitments are another trap, he said. And sometimes there's an extra unpleasant twist -- minimum usage requirements that go up in the later years, like balloon payments on a mortgage. ..."
Apr 01, 2017 | www.networkworld.com

The Amazon outage reminds companies that having all their eggs in one cloud basket might be a risky strategy

"That is the elephant in the room these days," said Lee. "More and more companies are starting to move their services to the cloud providers. I see attackers trying to compromise the cloud provider to get to the information."

If attackers can get into the cloud systems, that's a lot of data they could have access to. But attackers can also go after availability.

"The DDoS attacks are getting larger in scale, and with more IoT systems coming online and being very hackable, a lot of attackers can utilize that as a way to do additional attacks," he said.

And, of course, there's always the possibility of a cloud service outage for other reasons.

The 11-hour outage that Amazon suffered in late February was due to a typo, and affected Netflix, Reddit, Adobe and Imgur, among other sites.

"From a sustainability and availability standpoint, we definitely need to look at our strategy to not be vendor specific, including with Amazon," said Lee. "That's something that we're aware of and are working towards."

The problem is that Amazon offers some very appealing features.

"Amazon has been very good at providing a lot of services that reduce the investment that needs to be made to build the infrastructure," he said. "Elastic load balances and other services make it easy to set up. However, it's a double-edged sword, because these types of services will also make it harder to be vendor-agnostic. When other cloud platform don't offer the same services, how do you wean yourself off of them?"

... ... ...

"If you have a containerized approach, you can run in Amazon's container services, or on Azure," said Tim Beerman, CTO at Ensono , a managed services provider that runs its own cloud data center, manages on-premises environments for customers, and also helps clients run in the public cloud.

"That gives you more portability, you can pick something up and move it," he said.

But that, too, requires advance planning.

"It starts with the application," he said. "And you have to write it a certain way."

But the biggest contributing factor to cloud lock-in is data, he said.

"They make it really easy to put the data in, and they're not as friendly about taking that data out," he said.

The lack of friendliness often shows up in the pricing details.

"Usually the price is lower for data transfers coming into a cloud service provider versus the price to move data out," said Thales' Radford.

Multi-year commitments are another trap, he said. And sometimes there's an extra unpleasant twist -- minimum usage requirements that go up in the later years, like balloon payments on a mortgage.

[Mar 03, 2017] Do You Replace Your Server Or Go To The Cloud The Answer May Surprise You

Mar 03, 2017 | www.forbes.com
Is your server or servers getting old? Have you pushed it to the end of its lifespan? Have you reached that stage where it's time to do something about it? Join the crowd. You're now at that decision point that so many other business people are finding themselves this year. And the decision is this: do you replace that old server with a new server or do you go to: the cloud.

Everyone's talking about the cloud nowadays so you've got to consider it, right? This could be a great new thing for your company! You've been told that the cloud enables companies like yours to be more flexible and save on their IT costs. It allows free and easy access to data for employees from wherever they are, using whatever devices they want to use. Maybe you've seen the recent survey by accounting software maker MYOB that found that small businesses that adopt cloud technologies enjoy higher revenues. Or perhaps you've stumbled on this analysis that said that small businesses are losing money as a result of ineffective IT management that could be much improved by the use of cloud based services. Or the poll of more than 1,200 small businesses by technology reseller CDW which discovered that "cloud users cite cost savings, increased efficiency and greater innovation as key benefits" and that "across all industries, storage and conferencing and collaboration are the top cloud services and applications."

So it's time to chuck that old piece of junk and take your company to the cloud, right? Well just hold on.

There's no question that if you're a startup or a very small company or a company that is virtual or whose employees are distributed around the world, a cloud based environment is the way to go. Or maybe you've got high internal IT costs or require more computing power. But maybe that's not you. Maybe your company sells pharmaceutical supplies, provides landscaping services, fixes roofs, ships industrial cleaning agents, manufactures packaging materials or distributes gaskets. You are not featured in Fast Company and you have not been invited to present at the next Disrupt conference. But you know you represent the very core of small business in America. I know this too. You are just like one of my company's 600 clients. And what are these companies doing this year when it comes time to replace their servers?

These very smart owners and managers of small and medium sized businesses who have existing applications running on old servers are not going to the cloud. Instead, they've been buying new servers.

Wait, buying new servers? What about the cloud?

At no less than six of my clients in the past 90 days it was time to replace servers. They had all waited as long as possible, conserving cash in a slow economy, hoping to get the most out of their existing machines. Sound familiar? But the servers were showing their age, applications were running slower and now as the companies found themselves growing their infrastructure their old machines were reaching their limit. Things were getting to a breaking point, and all six of my clients decided it was time for a change. So they all moved to cloud, right?

Nope. None of them did. None of them chose the cloud. Why? Because all six of these small business owners and managers came to the same conclusion: it was just too expensive. Sorry media. Sorry tech world. But this is the truth. This is what's happening in the world of established companies.

Consider the options. All of my clients evaluated cloud based hosting services from Amazon, Microsoft and Rackspace. They also interviewed a handful of cloud based IT management firms who promised to move their existing applications (Office, accounting, CRM, databases) to their servers and manage them offsite. All of these popular options are viable and make sense, as evidenced by their growth in recent years. But when all the smoke cleared, all of these services came in at about the same price: approximately $100 per month per user. This is what it costs for an existing company to move their existing infrastructure to a cloud based infrastructure in 2013. We've got the proposals and we've done the analysis.

You're going through the same thought process, so now put yourself in their shoes. Suppose you have maybe 20 people in your company who need computer access. Suppose you are satisfied with your existing applications and don't want to go through the agony and enormous expense of migrating to a new cloud based application. Suppose you don't employ a full time IT guy, but have a service contract with a reliable local IT firm.

Now do the numbers: $100 per month x 20 users is $2,000 per month or $24,000 PER YEAR for a cloud based service. How many servers can you buy for that amount? Imagine putting that proposal out to an experienced, battle-hardened, profit generating small business owner who, like all the smart business owners I know, look hard at the return on investment decision before parting with their cash.

For all six of these clients the decision was a no-brainer: they all bought new servers and had their IT guy install them. But can't the cloud bring down their IT costs? All six of these guys use their IT guy for maybe half a day a month to support their servers (sure he could be doing more, but small business owners always try to get away with the minimum). His rate is $150 per hour. That's still way below using a cloud service.
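
The comparison is easy to reproduce. A rough back-of-the-envelope sketch using only the figures quoted above (the one-time server purchase is left out, since it varies) might look like this:

    # Cloud hosting vs. keeping the servers in-house, using the article's numbers.
    users=20
    cloud_per_user_month=100                              # $ per user per month
    cloud_yearly=$(( users * cloud_per_user_month * 12 )) # = 24000

    it_rate=150                                           # $ per hour, outside IT firm
    it_hours_month=4                                      # roughly half a day a month
    support_yearly=$(( it_rate * it_hours_month * 12 ))   # = 7200

    echo "Cloud service:       \$${cloud_yearly} per year"
    echo "On-premises support: \$${support_yearly} per year, plus the server purchase"

Even after adding a few thousand dollars for the new server itself, the on-premises total stays well under the yearly cloud bill, which is the return-on-investment gap these owners reacted to.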

No one could make the numbers work. No one could justify the return on investment. The cloud, at least for established businesses who don't want to change their existing applications, is still just too expensive.

Please know that these companies are, in fact, using some cloud-based applications. They all have virtual private networks setup and their people access their systems over the cloud using remote desktop technologies. Like the respondents in the above surveys, they subscribe to online backup services, share files on DropBox and Microsoft's file storage, make their calls over Skype, take advantage of Gmail and use collaboration tools like Google Docs or Box. Many of their employees have iPhones and Droids and like to use mobile apps which rely on cloud data to make them more productive. These applications didn't exist a few years ago and their growth and benefits cannot be denied.

Paul-Henri Ferrand, President of Dell North America, doesn't see this trend continuing. "Many smaller but growing businesses are looking and/or moving to the cloud," he told me. "There will be some (small businesses) that will continue to buy hardware but I see the trend is clearly toward the cloud. As more business applications become more available for the cloud, the more likely the trend will continue."

[Jan 12, 2017] Digitopoly Congestion on the Last Mile

Notable quotes:
"... By Shane Greenstein On Jan 11, 2017 · Add Comment · In Broadband , communication , Esssay , Net Neutrality ..."
"... The bottom line: evenings require far greater capacity than other times of the day. If capacity is not adequate, it can manifest as a bottleneck at many different points in a network-in its backbone, in its interconnection points, or in its last mile nodes. ..."
"... The use of tiers tends to grab attention in public discussion. ISPs segment their users. Higher tiers bring more bandwidth to a household. All else equal, households with higher tiers experience less congestion at peak moments. ..."
"... such firms (typically) find clever ways to pile on fees, and know how to stymie user complaints with a different type of phone tree that makes calls last 45 minutes. Even when users like the quality, the aggressive pricing practices tend to be quite irritating. ..."
"... Some observers have alleged that the biggest ISPs have created congestion issues at interconnection points for purposes of gaining negotiating leverage. These are serious charges, and a certain amount of skepticism is warranted for any broad charge that lacks specifics. ..."
"... Congestion is inevitable in a network with interlocking interests. When one part of the network has congestion, the rest of it catches a cold. ..."
"... More to the point, growth in demand for data should continue to stress network capacity into the foreseeable future. Since not all ISPs will invest aggressively in the presence of congestion, some amount of congestion is inevitable. So, too, is a certain amount of irritation. ..."
Jan 12, 2017 | www.digitopoly.org
Congestion on the Last Mile
By Shane Greenstein, Jan 11, 2017, in Broadband, Communication, Essay, Net Neutrality

It has long been recognized that networked services contain weak-link vulnerabilities. That is, the performance of any frontier device depends on the performance of every contributing component and service. This column focuses on one such phenomenon, which goes by the label "congestion." No, this is not a new type of allergy, but, as with a bacterium, many users want to avoid it, especially advanced users of frontier network services.

Congestion arises when network capacity does not provide adequate service during heavy use. Congestion slows down data delivery and erodes application performance, especially for time-sensitive apps such as movies, online videos, and interactive gaming.

Concerns about congestion are pervasive. Embarrassing reports about broadband networks with slow speeds highlight the role of congestion. Regulatory disputes about data caps and pricing tiers question whether these programs limit the use of data in a useful way. Investment analysts focus on the frequency of congestion as a measure of a broadband network's quality.

What economic factors produce congestion? Let's examine the root economic causes.

The Basics

Congestion arises when demand for data exceeds supply in a very specific sense.

Start with demand. To make this digestible, let's confine our attention to US households in urban or suburban areas, which produce the majority of data traffic.

No simple generalization can characterize all users and uses. The typical household today uses data for a wide variety of purposes-email, video, passive browsing, music videos, streaming of movies, and e-commerce. Networks also interact with a wide variety of end devices-PCs, tablets, smartphones on local Wi-Fi, streaming to television, home video alarm systems, remote temperature control systems, and plenty more.

It is complicated, but two facts should be foremost in this discussion. First, a high fraction of traffic is video-anywhere from 60 to 80 percent, depending on the estimate. Second, demand peaks at night. Most users want to do more things after dinner, far more than any other time during the day.

Every network operator knows that demand for data will peak (predictably) between approximately 7 p.m. and 11 p.m. Yes, it is predictable. Every day of the week looks like every other, albeit with steady growth over time and with some occasional fluctuations for holidays and weather. The weekends don't look any different, by the way, except that the daytime has a bit more demand than during the week.

The bottom line: evenings require far greater capacity than other times of the day. If capacity is not adequate, it can manifest as a bottleneck at many different points in a network-in its backbone, in its interconnection points, or in its last mile nodes.

This is where engineering and economics can become tricky to explain (and to manage). Consider this metaphor (with apologies to network engineers): Metaphorically speaking, network congestion can resemble a bathtub backed up with water. The water might fail to drain because something is interfering with the mouth of the drain or there is a clog far down the pipes. So, too, congestion in a data network can arise from inadequate capacity close to the household or inadequate capacity somewhere in the infrastructure supporting delivery of data.

Numerous features inside a network can be responsible for congestion, and that shapes which set of households experience congestion most severely. Accordingly, numerous different investments can alleviate the congestion in specific places. A network could require a "splitting of nodes" or a "larger pipe" to support a content delivery network (CDN) or could require "more ports at the point of interconnection" between a particular backbone provider and the network.

As it turns out, despite that complexity, we live in an era in which bottlenecks arise most often in the last mile, which ISPs build and operate. That simplifies the economics: Once an ISP builds and optimizes a network to meet maximum local demand at peak hours, then that same capacity will be able to meet lower demand the rest of the day. Similarly, high capacity can also address lower levels of peak demand on any other day.

Think of the economics this way. An awesome network, with extraordinary capacity optimized to its users, will alleviate congestion at most households on virtually every day of the week, except the most extraordinary. Accordingly, as the network becomes less than awesome with less capacity, it will generate a number of (predictable) days of peak demand with severe congestion throughout the entire peak time period at more households. The logic carries through: the less awesome the network, the greater the number of households who experience those moments of severe congestion, and the greater the frequency.

That provides a way to translate many network engineering benchmarks -- such as the percentage of packet loss. More packet loss correlates with more congestion, and that corresponds with a larger number of moments when some household experiences poor service.
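
A simple way to see the evening effect on a particular last-mile connection (a hypothetical probe, not something from the essay) is to sample loss and latency to a stable host once an hour and compare the 7-11 p.m. readings with the rest of the day:

    # Append hourly loss and round-trip-time summaries to a log; run from cron,
    # e.g.:  0 * * * *  /usr/local/bin/lastmile-probe.sh
    target=8.8.8.8
    ts=$(date +%F_%H:%M)
    ping -c 20 -q "$target" | awk -v ts="$ts" \
        '/packet loss/ { loss = $0 }
         /^rtt/        { rtt  = $0 }
         END           { print ts, loss; print ts, rtt }' >> /var/log/lastmile-probe.log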

Tradeoffs and Externalities

Not all market participants react to congestion in the same way. Let's first focus on the gazillion Web firms that supply the content. They watch this situation with a wary eye, and it's no wonder. Many third-party services, such as those streaming video, deliver a higher-quality experience to users whose network suffers less congestion.

Many content providers invest to alleviate congestion. Some invest in compression software and superior webpage design, which loads in ways that speeds up the user experience. Some buy CDN services to speed delivery of their data. Some of the largest content firms, such as YouTube, Google, Netflix, and Facebook, build their own CDN services to improve delivery.

Next, focus on ISPs. They react with various investment and pricing strategies. At one extreme, some ISPs have chosen to save money by investing conservatively, and they suffer the complaints of users. At the other extreme, some ISPs build a premium network, then charge premium prices for the best services.

There are two good reasons for that variety. First, ISPs differ in their rates of capital investment. Partly this is due to investment costs, which vary greatly with density, topography, and local government relations. Rates of investment tend to be inherited from long histories, sometimes as a product of decisions made many years ago, which accumulated over time. These commitments can change, but generally don't, because investors watch capital commitments and react strongly to any departure from history.

The second reason is more subtle. ISPs take different approaches to raising revenue per household, and this results in (effectively) different relationships with banks and stockholders, and, de facto, different budgets for investment. Where does the difference in revenue come from? For one, competitive conditions and market power differ across neighborhoods. In addition, ISPs use different pricing strategies, taking substantially different approaches to discounts, tiered pricing structures, data cap policies, bundled contract offerings, and nuisance fees.

The use of tiers tends to grab attention in public discussion. ISPs segment their users. Higher tiers bring more bandwidth to a household. All else equal, households with higher tiers experience less congestion at peak moments.

Investors like tiers because they don't obligate ISPs to offer unlimited service and, in the long run, raise revenue without additional costs. Users have a more mixed reaction. Light users like the lower prices of lower tiers, and appreciate saving money for doing little other than email and static browsing.

In contrast, heavy users perceive that they pay extra to receive the bandwidth that the ISP used to supply as a default.

ISPs cannot win for losing. The archetypical conservative ISP invests adequately to relieve congestion some of the time, but not all of the time. Its management then must face the occasional phone calls of its users, which they stymie with phone trees that make service calls last 45 minutes. Even if users like the low prices, they find the service and reliability quite irritating.

The archetypical aggressive ISP, in contrast, achieves a high-quality network, which relieves severe congestion much of the time. Yet, such firms (typically) find clever ways to pile on fees, and know how to stymie user complaints with a different type of phone tree that makes calls last 45 minutes. Even when users like the quality, the aggressive pricing practices tend to be quite irritating.

One last note: It is a complicated situation where ISPs interconnect with content providers. Multiple parties must invest, and the situations involve many supplier interests and strategic contingencies.

Some observers have alleged that the biggest ISPs have created congestion issues at interconnection points for purposes of gaining negotiating leverage. These are serious charges, and a certain amount of skepticism is warranted for any broad charge that lacks specifics.

Somebody ought to do a sober and detailed investigation to confront those theories with evidence. (I am just saying.)

What does basic economics tell us about congestion? Congestion is inevitable in a network with interlocking interests. When one part of the network has congestion, the rest of it catches a cold.

More to the point, growth in demand for data should continue to stress network capacity into the foreseeable future. Since not all ISPs will invest aggressively in the presence of congestion, some amount of congestion is inevitable. So, too, is a certain amount of irritation.

Copyright held by IEEE.


[Nov 09, 2015] Thoughts on the Amazon outage

Notable quotes:
"... The 'Cloud' isn't magic, the 'Cloud' isn't fail-proof, the 'Cloud' requires hardware, software, networking, security, support and execution – just like anything else. ..."
"... Putting all of your eggs in one cloud, so to speak, no matter how much redundancy they say they have seems to be short-sighted in my opinion. ..."
"... you need to assume that all vendors will eventually have an issue like this that affects your overall uptime, brand and churn rate. ..."
"... Amazon's downtime is stratospherically high, and their prices are spectacularly inflated. Their ping times are terrible and they offer little that anyone else doesn't offer. Anyone holding them up as a good solution without an explanation has no idea what they're talking about. ..."
"... Nobody who has even a rudimentary best-practice hosting setup has been affected by the Amazon outage in any way other than a speed hit as their resources shift to a secondary center. ..."
"... Stop following the new-media goons around. They don't know what they're doing. There's a reason they're down twice a month and making excuses. ..."
"... Personally, I do not use a server for "mission critical" applications that I cannot physically kick. Failing that, a knowledgeable SysAdmin that I can kick. ..."
nickgeoghegan.net
Disaster Recovery needs to be a primary objective when planning and implementing any IT project, outsourced or not. The 'Cloud' isn't magic, the 'Cloud' isn't fail-proof, the 'Cloud' requires hardware, software, networking, security, support and execution – just like anything else.

All the fancy marketing speak, recommendations and free trials, can't replace the need to do obsessive due diligence before trusting any provider no matter how big and awesome they may seem or what their marketing department promise.

Prepare for the worst, period.

Putting all of your eggs in one cloud, so to speak, no matter how much redundancy they say they have seems to be short-sighted in my opinion. If you are utilizing an MSP, HSP, CSP, IAAS, SAAS, PAAS, et al. to attract/increase/fulfill a large percentage of your revenue or all of your revenue like many companies are doing nowadays then you need to assume that all vendors will eventually have an issue like this that affects your overall uptime, brand and churn rate. A blip here and there is tolerable.

Amazon's downtime is stratospherically high, and their prices are spectacularly inflated. Their ping times are terrible and they offer little that anyone else doesn't offer. Anyone holding them up as a good solution without an explanation has no idea what they're talking about.

The same hosting platform, as always, is preferred: dedicated boxes at geographically disparate and redundant locations, managed by different companies. That way when host 1 shits the bed, hosts 2 and 3 keep churning.

Nobody who has even a rudimentary best-practice hosting setup has been affected by the Amazon outage in any way other than a speed hit as their resources shift to a secondary center.

Stop following the new-media goons around. They don't know what they're doing. There's a reason they're down twice a month and making excuses.

Personally, I do not use a server for "mission critical" applications that I cannot physically kick. Failing that, a knowledgeable SysAdmin that I can kick.

[Oct 23, 2015] The Downtime Dilemma: Reliability in the Cloud by Lauren Carlson

For a large corporation, going to the cloud is very risky, as neither reliability nor loyalty of personnel is assured. As with outsourcing of the helpdesk, bad things are swept under the carpet until they can't be.

When I'm not blogging for Software Advice, I like to do a little personal writing of my own. I use Google's Blogger as my platform for reflection. A couple of weeks ago, I tried to create a new post, but like thousands of other Blogger-ites, I was unable to do so. After a quick search on Twitter and various user boards, I realized Blogger was down.

The application was unavailable for about 20 hours. This outage is just one in what seems to be a string of recent cloud failures. Amazon's EC2 is probably the biggest fail story lately. But Microsoft's BPOS hosted bundle also experienced a significant amount of downtime recently. And, earlier this week, little monsters everywhere went gaga when Amazon released a digital copy of "Born This Way" for 99 cents, causing Amazon to experience another unfortunate crash.

These incidents have been covered extensively on the major tech news outlets, leading the technorati to once again question the reliability of cloud computing. One contributor wrote on the Microsoft Service Forum:

"Back to in-house servers we go, I suppose. This string of incidents will set the cloud/off-site model back months, if not years, I fear…"

When things go awry in the cloud, many companies are affected. Because these periods of downtime are public knowledge, it creates a misconception that cloud computing is unreliable and should be avoided. However, when things falter with on-premise systems, it is hidden behind the corporate curtain.

Despite cloud computing's proven track record of success and gaining popularity as a cost-effective solution, it's still managing to get a bad rap. Even with these highly visible incidents in the media recently, is this bashing of cloud computing really warranted?

Downtime in the cloud

Anyone who has ever purchased a cloud-based software system is familiar with the Service Level Agreement (SLA). In the SLA, the provider commits to a percentage of up-time, or amount of time the system can be expected to run without interruption. Ideally this would be 100%, but as with most technology, hiccups in service delivery are inevitable.

When creating the SLA, vendors take into account regularly scheduled maintenance, as well as unplanned outages or downtime. After making those considerations, most cloud companies can still quote about 99.9% up-time. That looks pretty impressive and seems to be in line with the kind of performance we have come to expect from SaaS vendors. Unfortunately, naysayers still like to harp on that .1%.
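
To make that last 0.1% concrete, here is a minimal sketch (plain Python, figures illustrative only) that converts an advertised SLA percentage into the yearly downtime budget it actually allows:

    HOURS_PER_YEAR = 365 * 24  # 8760 hours, ignoring leap years

    def downtime_hours_per_year(uptime_percent):
        """Hours of outage permitted per year by a given uptime percentage."""
        return HOURS_PER_YEAR * (1 - uptime_percent / 100.0)

    for sla in (99.0, 99.9, 99.99, 99.999):
        print("%.3f%% uptime allows %.2f hours of downtime per year"
              % (sla, downtime_hours_per_year(sla)))
    # 99.9% -- the typical cloud SLA figure -- still allows roughly 8.8 hours
    # of outage per year before the provider owes anyone a service credit.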

Even though cloud systems are recognized as the cash-flow-friendly alternative to on-premise systems, we still have the traditionalists that refuse to embrace the cloud. Many prefer to instead dwell on the "what ifs." What if the host's servers go down? What if mission-critical data is lost? While these are clearly valid questions, for many on-site purists, what it really comes down to is control. Users feel more secure when they are in control of the system. However, Walter Scott, CEO, GFI Software, offers a reminder:

"Cloud-based solution vendors not only have the latest technology, the latest firewalls, the best data centers and the highest levels of redundancy possible but they will apply multiple layers of [in-depth defense] that your average business (a Fortune 500 company may be an exception) can never have."

ENKI • 4 years ago

Lauren, as you pointed out, reliability isn't really the issue, since Amazon never violated its own statements about reliability, and despite the meltdown, their overall reliability is still better than most businesses receive from internal infrastructure. This is true across the board at SaaS vendors like NetSuite or SalesForce, Platform-as-a-Service vendors like Heroku, or Infrastructure-as-a-Service vendors like Amazon.

Instead, I see the reliability issue as composed of two parts. First, there is misalignment of *expectations* about reliability versus what cloud vendors actually offer. Second, this gap is the result of a customer base that is becoming increasingly unable (due to lack of keeping skilled staff on the payroll or because they buy remarketed cloud as SaaS or PaaS) to discern or solve potential reliability problems because they have outsourced the problem to cloud vendors - or at least they think they have!

For example, those customers of Amazon's who were down for 20 hours thought they were buying hosting, but instead all along, due to their use model of the cloud, they were paying for remote servers with a fancy provisioning interface and no real high availability features. Since you brought up politics, I'll use a political analogy.

Much like the problem with our federal deficit, this is the result of people wanting to have their cake (reliability) and eat it too (cost savings.) Unfortunately, the laws of physics and probability require more infrastructure for reliability, and that costs money - real customer money. For example, your article talks about getting to 100%, but yet I can show you that no system will ever reach 100%, and when you go past 4-nines, the costs climb exponentially with every additional 9 of uptime. Faced with such a sober accounting, cloud users can decide for themselves how much reliability is enough.
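
As a back-of-the-envelope illustration of why each extra nine costs so much, here is a minimal Python sketch; it assumes replica failures are independent, an assumption that real region-wide outages routinely violate:

    def combined_availability(per_replica, replicas):
        """Probability that at least one of N independent replicas is up."""
        return 1 - (1 - per_replica) ** replicas

    for n in (1, 2, 3):
        print("%d replica(s) at 99%% each -> %.4f%% availability"
              % (n, combined_availability(0.99, n) * 100))
    # 1 -> 99%, 2 -> 99.99%, 3 -> 99.9999% -- on paper. Each additional nine
    # roughly means buying and operating another full copy of the stack, and
    # correlated failures erode even that.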

Where things can be improved immediately is for infrastructure cloud vendors and their customers to talk about the cost/reliability tradeoff so the expectations are aligned. Yet this is difficult in a mass-market, vending-machine self service market, which is why I advocate that infrastructure cloud customers outsource cloud deployment/management if they don't want to develop it as an in-house skill (and they shouldn't have to if they don't want to!) In any case, infrastructure cloud customers cannot escape the reality at the moment that they need someone on their side to ensure that the way they use the cloud will result in the uptime they expect. This also goes for performance, by the way.

For SaaS cloud customers, the issue is more complex, since they have no control over how their SaaS vendor designs and buys their infrastructure. The best they can do is make sure that they're getting a reasonable SLA from their vendor, and that there are meaningful penalty clauses that will keep the vendor focused on reliability.

All things considered, cloud still can enable significant cost savings and thereby permit web-based businesses to exist which could never have been viable before, so it's here to stay. But I think the honeymoon period in which cloud seemed a magical panacea to all IT ills is over. There still has to be someone at the helm of any cloud deployment who knows how to ensure that it is reliable and performs as expected.

Eric Novikoff
ENKI
http://www.enki.co

robertcathey

Great post, Lauren. Public cloud availability and security failures are indeed much more visible than when similar issues strike the corporate data center.

Netflix provides an excellent template for how to improve availability and reliability while leveraging the cost and flexibility advantages of public commodity cloud. By spanning multiple data centers and thinking analytically about base/peak demand, they've successfully put a critical piece of their product strategy on the lowest cost infrastructure.

Which brings up an instructive point: All of this assumes that we're talking about public COMMODITY (or webscale) cloud. Public "clouds" that are essentially virtualization 2.0 strategies built on legacy client/server architectures are not competitive.

[Oct 23, 2015] AWS Outage Doesn't Change Anything

"... To think that putting applications 'in the cloud' magically makes everything better is naive at best. ..."
"... Writing an application that assumes all of the infrastructure it runs on is fragile and may fail at any moment is complex and difficult. So, for many years the dominant thinking in writing applications was to assume that the infrastructure was essentially perfect, which made writing the applications much simpler. This is the assume robust model. ..."
Forbes

If the latest AWS outage changes anything in your approach to cloud adoption, then you're doing it wrong. This is not the first AWS outage (I first wrote about one in 2011, back when I had hair), nor will it be the last. Nor will it be only AWS that suffers another outage at some point in the future. We've already seen outages from Office365, Azure, Softlayer, and Gmail.

Outages are a thing that happens, whether your computing is happening in your office, in co-location, or in 'the cloud', which is just a shorthand term for "someone else's computer".

To think that putting applications 'in the cloud' magically makes everything better is naive at best.

A Tradeoff

I've written about the resiliency trade-off before, so to summarize, there are only two ways to approach this: Assume robust, or assume fragile.

Writing an application that assumes all of the infrastructure it runs on is fragile and may fail at any moment is complex and difficult. So, for many years the dominant thinking in writing applications was to assume that the infrastructure was essentially perfect, which made writing the applications much simpler. This is the assume robust model.

The trade-off was that we had to spend a lot of time and effort and money on making the infrastructure robust. So we have RAID, and clustering, and Tandem/HP NonStop, and redundancy, and a host of other techniques that help the infrastructure to stay online even when bits of it break.
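
As an illustration of the "assume fragile" style, here is a minimal Python sketch in which the caller treats every endpoint as breakable and fails over across an ordered list; the URLs and helper names are hypothetical, not anything from the article:

    import time
    import urllib.error
    import urllib.request

    # Hypothetical endpoints in two different failure domains.
    ENDPOINTS = [
        "https://api.primary.example.com/health",
        "https://api.secondary.example.net/health",
    ]

    def fetch(url, timeout=3.0):
        """Fetch a URL, raising on any network or HTTP error."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()

    def fetch_with_failover(urls, attempts_per_url=2, backoff=1.0):
        """Treat failure as the normal case: retry, then move to the next endpoint."""
        last_error = None
        for url in urls:
            for attempt in range(attempts_per_url):
                try:
                    return fetch(url)
                except (urllib.error.URLError, OSError) as exc:
                    last_error = exc
                    time.sleep(backoff * (attempt + 1))  # crude linear backoff
        raise RuntimeError("all endpoints failed: %s" % last_error)

The "assume robust" alternative is the same call with no retry loop and no second endpoint; the resilience then has to live in RAID, clustering and the rest of the infrastructure instead.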

[May 03, 2014] Ask Slashdot What To Do With Misdirected Email

The fact that Gmail ignores dots in email addresses, treating [email protected] and [email protected] as identical, has interesting security implications. The same goes for its treatment of [email protected] and [email protected] as identical. As one commenter noted: "Oh and Google needs to admit they fucked up and fix it, I'm pretty sure that guys info I got could lead to some sort of lawsuit."
Jan 13, 2014 | Slashdot

An anonymous reader writes "My Gmail account is of the form (first initial).(middle initial).(common last name)@gmail.com. I routinely receive emails clearly intended for someone else. These range from newsletters to personal and business emails. I've received email with various people's addresses, phone numbers and even financial information.

A few years ago I started saving the more interesting ones, and now have an archive of hundreds of emails directed at no less than eight distinct individuals. I used to try replying to the personal ones with a form response, but it didn't seem to help.

To make matters worse, I frequently find I can't use my email to create a new account at various sites because it's already been registered. Does anyone else have this problem? Is there any good way to handle this?"

Animats

Get a real mail account (5, Insightful)

Get a real mail account and get off Gmail/Hotmail/other free service. You get what you pay for.

MarioMax

Re: Get a real mail account (4, Informative)

This. Domains are cheap, and hosting/forwarding is cheap. Plus you get some level of personalization.

Also easier to remember. [email protected] is catchy while [email protected] is generic and easily forgotten.

Nerdfest

Re: Get a real mail account (4, Insightful)

Exactly. This also covers the case where your ISP or Microsoft or Google does something that you can't abide by. It decouples you from your provider.

You can move to a different email hosting service or even run your own without much inconvenience. It also looks a little more professional than having a HotMail account.

Anonymous Coward

Re: Get a real mail account

Absolutely. I must have avoided the melee since I domained back in '95. Gmail was interesting for porn accounts and whatnot, but now mailinator is better.

Gmail isn't good for anything anymore except privacy violations.

MarioMax

Re: Get a real mail account (1)

I've used my own domain for 9 years with paid hosting thru a major host. Personally I can't stand webmail and stick to traditional POP3 email and for that purpose it suits me. But it is easy enough to set up domain forwarding to services like gmail if you choose (most likely for a fee).

The nice thing about buying a domain is you can pretty much set up unlimited email addresses under the domain for any purpose you choose, or use a single email address as a "catch-all" for said domain. Web services like Facebook won't know and won't care.

As for specific hosting recommendations, they are all about the same in terms of terrible service and support, but I encourage you to research and decide for yourself.

Anonymous Coward

Re: The only plausible solution... (0)

Is to change your name

You'd be surprised at the amount of misaddressed email I get at [email protected]. It's rather astonishing, I do say.

aardvarkjoe

Re:Abandon Your Real Name (1)

As for the rest of your problem, just set up a second Gmail address with a nonsensical middle name (first initial).turnip.(common last name)@gmail.com and have it forward to your "real" gmail address. Problem solved.

This is actually a good idea even if you don't have the problem that the original poster had. I created a new gmail account with that general idea a little while back which I use for things like online retailers. It makes it really easy to filter those emails out of my personal inbox, which can be a pain sometimes otherwise.

The [email protected] addresses would let you do something similar, but they've got a couple serious drawbacks -- many (in my experience, probably "most") websites will reject an email address with a + sign, and also it exposes your actual personal address. Using a separate gmail address solves those.

I do wish that Google would come up with a proper disposable email address solution.

mvar

Re:Name? (1)

This. As for misdirected email, i had a similar problem a couple of years back when someone decided to use my email (no real name) for his facebook account. As it seems email confirmation is optional and the guy made a full profile, added friends etc xD

watermark

gmail plus sign postfix

Well, I have a solution to your "email has already been registered" issue.

Gmail will treat [email protected] as the same address as [email protected], both will go into the [email protected] account.

Give the site an email address with a plus sign postfix like that and it should detect it as a new unique address.

Some sites don't allow the plus symbol in email addresses (even though it's a valid character), so mileage may vary.
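
For reference, here is a minimal Python sketch of the address folding Gmail applies (the sample addresses are made up); it is also the normalization a site would need if it wanted to detect that two registrations point at the same mailbox:

    def canonical_gmail(address):
        """Collapse Gmail address variants to a single canonical mailbox."""
        local, _, domain = address.lower().partition("@")
        if domain in ("gmail.com", "googlemail.com"):
            local = local.split("+", 1)[0]   # drop any "+tag" suffix
            local = local.replace(".", "")   # dots are not significant
            domain = "gmail.com"             # googlemail.com is an alias
        return local + "@" + domain

    print(canonical_gmail("Jane.Q.Doe+shopping@gmail.com"))  # janeqdoe@gmail.com
    print(canonical_gmail("janeqdoe@googlemail.com"))        # janeqdoe@gmail.com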

whoever57

Re:gmail plus sign postfix (2)

MANY sites don't allow the plus symbol in email addresses (even though it's a valid character), so mileage may vary.

FTFY.

Seriously, having used "plus-addressing" for many years, I can attest to the fact that many websites won't accept it.

I know of one site where I did register years ago, but their de-registration page won't accept the "plus-address" that I used to register (rakuten.com, I'm looking at you).

chill

Yes (4, Funny)

Yes, I have this exact same problem. However, I do not keep other people's e-mail.

I have been able to track down the correct people to whom the e-mails belong. In two cases, the people are lawyers and the e-mails contained either personal or confidential information.

Another case is a general contractor, and I've received quotes from subcontractors, blueprints and general correspondence.

In one case it was a confirmation of tickets for a theme park. (I debated showing up as soon as the park opened and claiming the tickets, but ethics got the better of me.)

These people now reside in my address book. I forward the e-mail in question over to them, and CC a copy to the sender.

Anonymous Coward

What is the problem?

To make matters worse, I frequently find I can't use my email to create a new account at various sites because it's already been registered.

Why not make a password reset for them (unless they have "security questions") and change the email? Then you can create your own account. It is not your problem that some hobo can't enter their own e-mail address when registering accounts.

As for the unwanted email, tell the sender politely that they have sent personal/confidential information to you, an unsuspecting third party with a similar address. Then throw any future mail from them away. I have gotten some mail like this, but they all rectified their mistake and stopped sending to me. If they wouldn't, it isn't my problem (apart from pressing the "junk email" button in my MUA).

Anonymous Coward

Even worse: Facebook does not validate e-mails (0)

So I got somebody else's Facebook notifications. From time to time, I get some e-mail from Facebook stating the e-mail address has not been verified (with no description on what to do if you are not the intended recipient). I hoped this situation would die with time, but it is already five months since I got the first e-mail.

At some stage in the past, I also got some e-mails from ebay about a seller and a buyer discussing transaction e-mails. These ones did actually die.

In both cases, the e-mail account the messages should have gone to was not the one I tend to give out. Google allows for different spellings of the same account. Your e-mail account may be reached by the following permutations:

And this is not a bug, it is a feature.

hawguy

I have the same problem (4, Funny)

I use my first initial+last name as my email address and get mail destined for a half dozen people. One person is an elderly gentleman in the midwest, I've given up any hope of getting him to stop giving out my email address. I only get a half dozen or so a month so it's not too bad.

I usually send a form letter to emails where it looks like a person might read the response (as opposed to newsletters, etc). For those emails where I don't think a human will read the response, I usually just hit the Spam button, unless there's a quick and easy to find unsubscribe link.

Sometimes when an email has a signature that says that if I receive a copy of the email in error I must delete all copies, in my reply, I ask whether they want to work on a time and materials basis or a fixed price $500 contract for me to track down and delete the email from all devices that it may have been delivered to (having emails go to a phone, tablet, several computers, imap download + backup means a fair amount of work to find and delete it everywhere). So far none have been willing to pay. I wonder if I could accept their demand to delete all copies of the email as implicit authorization to do the work and then bill them for the work.

Anonymous Coward

I like mail redirectors. Everyone but true spammers will respond to you redirecting all the mail from their domain back to the support address for that domain. Preface it with, "you must have lost this, I am helping. HERE" And resend the email. Maybe twice to make sure it isn't lost. Works every time.

Anonymous Coward

me too (0)

my gmail is [email protected] and i have this problem all the time. i have on occasion looked up the person using my email by searching the phone book for people with my name around the address of the local businesses and people that frequently email me... usually it appears the people are 60+ but when someone used my email to start a twitter account it was someone in his 30s based on the picture he used on the account. i did like someone above said and used email based password reset and posted on the account that the person was using the wrong email address and that the account should be removed from their friend list or whatever twitter does.

in general i am really annoyed by the email i constantly get, though the other week i did get some tickets to an indoor trampoline place that sounded fun... sadly the place was 2500 miles away. most the people using my account i think are leaving off the random number or swapping out a _ for an inconsequential . that leads me to getting their emails.

Anonymous Coward

I have the same issue (0)

I have had several emails from job applications to registrations on shopping sites to my gmail. I reply telling the person that they have contacted the wrong person, and advise them to contact the intended recipient by another means.

I once got a schedule for a church rota for somewhere in the states, and when I replied saying I wasn't the person in question they asked me to forward it to them! I'm not quite sure how they expected me to do this.

This misaddressing of emails is probably really confusing the NSA email contact database though.

Anonymous Coward

Had this issue (0)

Someone was registering for sites using my GMail address without the dot I use. They registered for a site and an email came through confirming their details, including phone number.

I phoned up and asked him politely to not use my email address.
He accused me of hacking his account he has used for 2 years.
I explained I have had the account since GMail was 'invite only'.

Got swore at loads, so hung up and set up a rule so that mail without the dot is ignored and trashed. Problem solved!

mdenham

Re: Had this issue

For what it's worth, GMail treats all e-mail addresses that are identical other than dots as the same e-mail address internally, so [email protected], [email protected], [email protected], and [email protected] are all going to be the same account.

I've noticed that forum spammers like to use that trick to get around "each account must have a unique e-mail" settings on certain types of forum software.

hism

Unsubscribe or filter (1)

I have the same problem. There are at least two dozen distinct individuals who have had emails erroneously addressed to my inbox.

For automated emails that offer an easy link to unsubscribe or dissociate my email address from that account, I use the provided link. Those are pretty easy.

Sometimes people register for paid services that send a monthly bill and it comes to my email address. They may or may not be of English origin. For these, I just add a filter or rule to my email provider or client to just delete them or move them. Communicating with someone, possibly in another language, possibly requiring lots of bureaucratic red tape, is not really worth it. If they care about it enough, it's their responsibility to fix it.

The most annoying case is when a large group of friends start an email thread with a whole bunch of different people in the "to" or "cc" field. Asking them to correct the email address is pretty much an exercise in futility, since all it takes is one person to hit 'reply to all' and your email address is back on the thread. For these, I just block every recipient on the thread.

I've never had the problem of someone already having registered my email. One way around it would be to set up another email address that just forwards to your actual email address.

Anonymous Coward

Yep, I have this issue

1) If I can track down the person, I try to contact them and let them know they're using the wrong email
2) If it's a real person sending the email (like when one person gave out my email for his house refinance stuff), I email the sender back asking them to contact the person via phone or whatever and tell them they have the wrong email address
3) If a person in #2 does so and I keep receiving new emails because the person doesn't learn, I ask again like in #2, though this time I recommend they stop doing business with them, or throw out the job application, or whatever, because the person is so stupid that they can't even figure out their own address
4) I've been known to find the person via their relatives and ask them to inform the person that they're using the wrong email
5) For sites where registrations were done, I simply go to the site, click Forgot Password, get a reset, go in, and change the information so it's no longer tied to my email address. Often I change the address to STOP+USING+[MY+ADDRESS]@gmail.com. Sometimes logging in to the account has the benefit of getting me their address and/or phone number to contact, which I've done.
6) In cases where I've changed the email address and they've had tech support change it back to mine, I go back in to the account and change ALL the info to mine, so now it becomes my account and they can no longer use it or get any access to it.

xrayspx

I've just been dealing with this (1)

I use a personal domain for my actual mail, but have accounts at all the major free mail sites too, just for spam or whatever.

I started getting mail to my Yahoo account which wasn't spam, but clearly not for me, as part of a group of people participating in a medical imaging conference. For a while I just blew it off, but eventually the organizer mailed my actual non-yahoo address by mistake as well. So I decided to be swell about it and let her know that I'm not the person she's trying to reach. She said "Oh, I'm sorry, I meant to do (yourname)@yahoo.com, thanks!", and so I told her "well no, that's also me, sorry". I proceeded to tell her an address which would work for her intended recipient (work email for the person she was trying to mail, who isn't me).

Basically she refused to believe she has been sending to the wrong address, and said "I had no idea two people could have the same email address, I guess Yahoo must allow it or something". At that point, I gave up and just let it go again. It's not high-volume enough to matter.

koan

Me too (1)

They can't reply or get your reply because they can't log in, I went so far as to track one person down via an ad sent to them, I have also received someone's complete information, SSN, etc. In the end I just drag them to the trash.

Oh and Google needs to admit they fucked up and fix it, I'm pretty sure that guys info I got could lead to some sort of lawsuit.

weave

Happens to me a lot with my own domain (4, Insightful)

I own a very short domain name where the first part of the name is the same as many organization's name.

e.g., if it was example.com then others have example.co.uk or exampleinc.com etc and I get a LOT of their email because I wildcard my domain for email and people just assume that example.com will work

As I get them, I add a postfix rule to reject that specific username but I still get stuff, including very confidential stuff.
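
For what it's worth, the kind of per-recipient reject the commenter describes looks roughly like this in a stock Postfix setup (the file path and the rejected address below are hypothetical):

    # /etc/postfix/recipient_access -- one rejected recipient per line
    misdirected.user@example.com    REJECT  No such user at this domain

    # main.cf excerpt
    smtpd_recipient_restrictions =
        check_recipient_access hash:/etc/postfix/recipient_access,
        permit_mynetworks,
        reject_unauth_destination

    # After editing, rebuild the lookup table and reload:
    #   postmap /etc/postfix/recipient_access
    #   postfix reload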

I haven't advised these organizations because I fear they'll just turn around and try to dispute to get my domain or accuse me of criminal interception or whatever. So I just delete them and they can wonder why they never got a reply.

Rule #1: "Email is not a guaranteed service."

Rule #2: "Email is not secure. Stop sending confidential stuff through it"

kiick

Get your own domain name (1)

I had various problems with email address collisions as well. Then when I had to change ISPs, I decided to get my own domain name. It's a little different when you own your own email address. If you register a domain, you can be [email protected] or such. Then you just forward from your actual email host to the registered email address. It's only a few dollars a year. Then YOU decide who gets an email address for your domain, and you can have whatever policy you want to avoid collisions.

Garin

bah, you guys are no fun (2)

Y'all are missing out on a good time.

I have a gmail account with the first name dot last name set up. As you can imagine I get quite a few messages for people who forget to tell their friends about their middle initial. However from context, I can often tell which of my name-sharing buddies the email was intended for. Over the years I have actually gotten to know a couple of them, which is fun.

I don't bother trying to tell the senders about the mistakes, they usually do nothing, oddly. The recipient, however, tends to get on it effectively.

It's quite interesting to talk to them. What's in a name?

Anonymous Coward

Worst is Barnes and noble, nook

They won't take your email address off if someone uses it by mistake; you are stuck getting perpetual updates

ShaunC

This happens to me a lot, too

A few months back, I received an email on my Gmail from the agent of an NFL player. The agent was apparently looking to help his client negotiate a contract, and conveniently attached a draft of said contract. I went and updated the NFL player's Wikipedia entry stating that he was going into free agency and looking for a gig. Hey, I could have done a lot worse, like placing bets using inside info or something.

Many, many years ago, I had the screen name "File" on AOL. There was some sort of ancient productivity suite (maybe Notes, or 123, or something) where you would cc a message to "file" in order to keep a local copy, and many AOL users presumed their email service worked the same way. Oh sweet Christ, the things that landed in my inbox there over the years..

lamber45

Haven't had this issue with GMail, but with other (2)

My GMail (and Yahoo! as well) username is (first name)(middle name)(last name), all fairly common [in fact at my current employer there are multiple matches of (first name)(last name), and my father has the same (first name)(last name) as well], and I have not had this problem with either service. Perhaps using initials instead of full names is part of it; or your last-name may have different demographic connotations.

I did, however, recently have that problem with a Comcast account. When the tech visited our home for installation, he created an account (first name)(last name) @comcast.net . I didn't actually give it out anywhere, yet within a few months it was filled with a hundred or so messages for someone in another state. I did try responding to one item that seemed moderately important, and whoever got the response [the help-desk of some organization] didn't seem to grasp that I had no connection with the intended recipient. Since I hadn't advertised it anywhere, it was easy to change the username, to (my first initial)(wife's first initial)(my last initial)(wife's last initial)(string of digits) @comcast.net. While this address appears to have been reused, apparently Comcast no longer allows address reuse; I tried using a previous ID that I had used a long time ago, and it was not available.

Since you ask for advice, I recommend two courses of action:

1. As long as you still have access to that address, when you receive anything that is clearly misdirected and potentially of high value, deal with it politely. Don't use a "form response", instead personalize the response to the content of the message. CC the intended recipient on the response, if you are able to divine who it is. Once you've dealt with the matter, delete the whole thread. For newsletters, try following an "unsubscribe" action, if that's not available mark as spam.

2. Consider an exit strategy from your current e-mail address, no matter how much is attached to it. See the Google help posting "Change your username". For the new address, try a long nickname or full first name instead of first initial; or maybe add a string of numbers, a city your contacts will recognize, or a title. Give your important contacts plenty of advance notice, post the new address with the reasons you're switching [perhaps with a list of the confusing other identities as well] on your "old" Google+ profile. After a reasonable time (say six months or a year), delete your old account. Make sure you change your address at all the "various sites" you've registered at before doing so, in case you need to use a password reset function.

Anonymous Coward

Periods don't count (0)

Also, note that the periods in your name don't make any difference. Email addressed to [email protected], [email protected] and [email protected] go to the same mailbox.

... If you are certain that everyone will use the periods just as you specified then it is pretty easy to add a filter which separates the mail into different folders based on the position of the periods. That can automatically filter email addresses that aren't formatted to your liking.
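
A minimal Python sketch of that dot-position filter, assuming you hand out exactly one dotted spelling (the address below is invented); in practice the same test would live in a Gmail filter or a mail-client rule rather than a script:

    MY_SPELLING = "j.q.public@gmail.com"  # the only variant actually given out

    def is_misdirected(to_header, cc_header=""):
        """True when no recipient header contains the exact dotted spelling."""
        recipients = (to_header + "," + cc_header).lower()
        return MY_SPELLING not in recipients

    print(is_misdirected("J.Q.Public <j.q.public@gmail.com>"))  # False: keep it
    print(is_misdirected("jq.public@gmail.com"))                # True: file as misdirected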

[Oct 30, 2013] Snowden leak NSA secretly accessed Yahoo, Google data centers to collect information

There are some additional dangers in over-reliance on cloud computing, especially when using major players... ;-)
RT USA

Those documents, supplied by former NSA contractor Edward Snowden and obtained by the Washington Post, suggest that the US intelligence agency and its British counterpart have compromised data passed through the computers of Google and Yahoo, the two biggest companies in the world with regard to overall Internet traffic, and in turn allowed those countries' governments and likely their allies access to hundreds of millions of user accounts from individuals around the world.

"From undisclosed interception points, the NSA and GCHQ are copying entire data flows across fiber-optic cables that carry information between the data centers of the Silicon Valley giants," the Post's Barton Gellman and Ashkan Soltani reported on Wednesday.

The document providing evidence of such was among the trove of files supplied by Mr. Snowden and is dated January 9, 2013, making it among the most recent top-secret files attributed to the 30-year-old whistleblower.

[Oct 02, 2013] Google Accused of Wiretapping in Gmail Scans

Federal wiretap law exempts interception of communication if it is necessary in a service provider's "ordinary course of business," which Google said included scanning e-mail. That argument did not fly with Judge Koh.

"In fact, Google's alleged interception of e-mail content is primarily used to create user profiles and to provide targeted advertising - neither of which is related to the transmission of e-mails," she wrote in last week's ruling.

... ... ...

Also last week, Google asked the Court of Appeals for the Ninth Circuit to reconsider a Sept. 10 ruling that a separate wiretapping lawsuit could proceed. That one involves Google Street View vehicles that secretly collected personal information from unencrypted home computer networks.

[Sep 15, 2013] Mark Zuckerberg Awarded CIA Surveillance Medal by Jim W. Dean, VT Editor

Facebook Contributed More to Monitoring Americans Than All Other Sources Combined, and Cheaper, too. Nuff said. Written before the Snowden revelations.
Jul 22, 2012 | Veterans Today

Who's behind those Foster Grants – The CIA, of course.

Well, now it is official. Mark Zuckerberg was not so smart after all, but just fronting for the CIA in one of the biggest Intelligence coups of all times.

But there remains one small problem, the CIA is not supposed to monitor Americans. I guess we will hear more on that soon from the lawyers once the litigation gets cranked up.

Personally I will be more interested in how this is going to affect the stock offering and shares, as all Americans should own the entity that has been spying on them.

And then there are the SEC full disclosure regulations and penalties. It's bonanza time for the lawyers.

Could the loophole the CIA used be that 'you aren't being spied on if you are willingly posting everything a repressive regime would love to have on your Facebook account, with no threats, no family hostages, no dirty movies or photos that could be released'?

But enough with the lead in. Let's take you directly to our source where you can get it straight from the source's mouth, including seeing Zuckerberg getting his award.

We really need your comments on this below so we can speak to power with one voice…something that can rarely be done around here.

I know what you're thinking, but no, I am not stupid…all of my Facebook material is all made up, including all of my friends. I am in the safe zone. My momma didn't raise no fool. But how about you?


YouTube - Veterans Today - CIA and Zuckerberg

Hope you enjoyed the spoof folks. I thought it was great. And congrats to the Onion News Network gang on getting those 3.7 million YouTube views !!!

  1. DaveE

    July 10, 2012 - 6:27 pm

    "The Onion" is great and they certainly have no shortage of material for their satirical wit. I guess you might as well laugh about it, there's no telling how much longer we'll be able to laugh about ANYTHING, if the Zuckerbergs have their way with us.

  2. PallMall

    July 10, 2012 - 6:39 pm

    Of course, everyone should realize this video is SPOOF News by The Onion.
    http://www.creditwritedowns.com/2011/03/the-onion-cia-says-facebook-is-a-dream-come-true.html

  3. PallMall

    July 10, 2012 - 6:46 pm

    Chris Sartinsky is a writer for The Onion News Network.

  4. The Rahnameh

    July 10, 2012 - 9:10 pm

    Google as well. Google suffers from a clever stock price inflation. It begs the question, "What has Google done to assure its investors that it is worth its price every quarter?" After you attempt the answer, then contrast that with a bonafide security like Apple (and what it had to do to maintain its price). Facebook was a ponzi scheme. The entire market is a pyramid scheme, in fact.

    The game is theirs and one can keep playing it or change the rules to win. The effect here is akin to the one that begets protestors who are ready to revolt against a government, but are still subconsciously observing basic pedestrian rules, keeping off property where it's obviously private, etc.

    Facebook and Google are a team. The cover for the collaboration was blown when Facebook became a Google searchable hit.

    Here is the top level synopsis in hindsight (I have left out many details/tangents):

    1. "America Online" (oy, the name's obvious!) care of Steve Case and many Zionists. AOL was arguably an even more robust online social community than Facebook, with customizable profiles, Keywords, status messages/tweets known as away messages, message boards, e-mail, instant messages, multiplayer games, and even viable chat rooms;
    2. DARPA released WWW and people escaped from a stale AOL;
    3. Friendster and Myspace emerge. Myspace's addresses replace AOL's keywords in an eerie redux;
    4. Myspace is bought by Rupert Murdoch and subsequently turns into a spam filled lot of junk from what was a robust community of customizable information; and then,
    5. Facebook emerges as the new bastion and a migration occurs to the "new scene". These migrations are little more than media encouraged penning of sheeple into various cages.

    This continues, but based on the linear history above alone, one can make many accurate inferences.

  5. JS

    July 11, 2012 - 7:28 am

    No, I did NOT enjoy the spoof. Of course I'm aware of The Onion and their spoof news, but billions worldwide are unaware of who they are, and many will take this "news" seriously. The Onion is a disinfo operative's "wet dream". I'm surprised you guys find it funny. One of these days, The Onion may do a spoof about you. Would you laugh then? Enough already.

    For the record, I have never had a personal MySpace, Facebook, Twitter, etc account. Would consider one only for business.

    • Jim W. Dean

      July 11, 2012 - 10:38 am

      JS, You are the second person in a year to not like a Spoof….that you should have picked up on. We are, among other things, an intel and analysis site, and we do things like this so readers have a chance to see what they missed if they don't get it till the end.

      We do this not only to give readers a feel for what it's like to be able to pick up on stuff like this, say in a situation where it was critical to do so. We will keep doing it as long as the huge majority enjoys…and more than a few of those even catch the between the lines message that was in here.

      Re-watch it and you will spot the clues…and you will spot them sooner the next time. It's called learning, and we are doing it every day…and teaching, too. Gordon's piece that follows is a bookend to this one…the PhD level…where the whole public got 'spoofed' on the DC Sniper case.

      So we all need to be smarter if we are going to be able to give the bad guys a run for their money. Right now, they are on the golf course…not too worried.

      Thanks for your efforts.

  6. judgment

    July 11, 2012 - 11:04 pm

    I know this is the land of freedom and one should not expect to worry about being spied on, but I never signed up on Facebook. When asked why, I could only say "just a feeling, because of the personal questions they asked to join." One thing people should know by now is that the government is surely not going to look out for what we get ourselves into, for any reason; as we used to say, "read the small print".

    Some years ago an orthopedic clinic asked me for a personal picture, which they were taking there to go on my record. What does my face picture have to do with my spinal condition??? The help said "the government requested we do so for all the records now". This was before Obama.

    Interesting, because some months ago I started using a local orthopedic emergency clinic. The paper they gave me to sign had nothing to do with a pinched nerve, so I asked and got a very rude answer. The people sitting there were poor, very likely Medical; they said they would absorb costs Medicare did not pay.

    I smelled some fraud and evidently they did not want curious people. Well, I never could get an appointment from them. Same with Wells Fargo asking me for personal financial information to open a checking account. They were so testy when I refused to tell them the amount of my Family Trust Estate that I told them they were too sophisticated for me and closed the checking account.

    So, it is going around; the lists of names they sell pay very well. Somewhere recently I read an offer for the names, phone numbers and addresses of all the Obama volunteers from that special Obama For America outfit; the price was in the thousands.

Snowden means the cloud is about to dissipate as a business model Once More, With Feeling

As an antidote, here are some of the things we should be thinking about as a result of what we have learned so far.

The first is that the days of the internet as a truly global network are numbered. It was always a possibility that the system would eventually be Balkanised, i.e. divided into a number of geographical or jurisdiction-determined subnets as societies such as China, Russia, Iran and other Islamic states decided that they needed to control how their citizens communicated. Now, Balkanisation is a certainty.

Second, the issue of internet governance is about to become very contentious. Given what we now know about how the US and its satraps have been abusing their privileged position in the global infrastructure, the idea that the western powers can be allowed to continue to control it has become untenable.

Third, as Evgeny Morozov has pointed out, the Obama administration's "internet freedom agenda" has been exposed as patronising cant. "Today," he writes, "the rhetoric of the 'internet freedom agenda' looks as trustworthy as George Bush's 'freedom agenda' after Abu Ghraib."

That's all at nation-state level. But the Snowden revelations also have implications for you and me.

They tell us, for example, that no US-based internet company can be trusted to protect our privacy or data. The fact is that Google, Facebook, Yahoo, Amazon, Apple and Microsoft are all integral components of the US cyber-surveillance system. Nothing, but nothing, that is stored in their "cloud" services can be guaranteed to be safe from surveillance or from illicit downloading by employees of the consultancies employed by the NSA. That means that if you're thinking of outsourcing your troublesome IT operations to, say, Google or Microsoft, then think again.

And if you think that that sounds like the paranoid fantasising of a newspaper columnist, then consider what Neelie Kroes, vice-president of the European Commission, had to say on the matter recently. "If businesses or governments think they might be spied on," she said, "they will have less reason to trust the cloud, and it will be cloud providers who ultimately miss out. Why would you pay someone else to hold your commercial or other secrets, if you suspect or know they are being shared against your wishes? Front or back door – it doesn't matter – any smart person doesn't want the information shared at all. Customers will act rationally and providers will miss out on a great opportunity."

[Aug 11, 2013] Tech giants meet with Obama to save cloud computing by Byron Acohido

August 9, 2013 | USA Today

Edward Snowden's whistleblowing escapades could seriously undermine the growth of cloud computing and thus stifle the growth models for America's biggest tech companies.

And that appears to be the reason why Apple CEO Tim Cook, AT&T CEO Randall Stephenson, Google computer scientist Vint Cerf and other tech executives met behind closed doors with President Obama Thursday.

"The meeting appears to be for a variety of reasons, but basically the companies want to understand exactly what the government is doing with their systems as they try to assuage a lot of concerns from a lot of different stakeholders," says Brian Henchey a privacy and information tech attorney at Baker Botts.

A group called the Information Technology and Innovation Foundation on Tuesday issued a report asserting that Google, Microsoft, Yahoo, Facebook and Apple stood to lose as much as $35 billion over the next three years as Europeans shy away from cloud services with suspect privacy safeguards.

[Aug 03, 2013] XKeyscore: NSA tool collects 'nearly everything a user does on the internet'

The interaction of cloud computing, especially email services such as Gmail, with three-letter agencies is an interesting topic in itself.
31 July 2013 | The Guardian

FerventPixel

What about HTTPS? Is it secure or not?

MarkLloydBaker -> FerventPixel

HTTPS is pretty secure. It certainly makes things harder for the NSA. We shouldn't get hung up on whether protection mechanisms are perfect. Every little bit helps.

But it's also important to remember that there are potential vulnerabilities all over the place in computer systems, and that spies and thieves spend their time trying to find new places to attack. On a simple level: Using HTTPS in your browser doesn't mean your email is encrypted. Another big one is that the NSA can, according to several reports, enter any Windows machine through its back door to steal data, plant spyware, etc. HTTPS is out of the picture in that case, and the NSA can easily break Microsoft's own encryption because Microsoft told them how.

timetorememberagain -> FerventPixel

Please take a look:

http://news.cnet.com/8301-13578_3-57595202-38/feds-put-heat-on-web-firms-for-master-encryption-keys/

ekOSullivan

There's no way the US surveillance state will ever back down. It's impossible now. This means that if you really care about your privacy, you have to learn how to protect it yourself. This takes effort:

  1. Linux - an open source operating system.
  2. Thunderbird + Enigmail with 4096 bit keys (meta data will still be available).
  3. Firefox - trustworthy, unlike IE and Google Chrome and Safari et al.
  4. Jitsi for video chats.
  5. Bitmessage (still in beta, looks promising though - P2P mail).
  6. Pidgin with GnuPG Plugin.

I mention Linux because if you use a proprietary operating system, you leave yourself open to side-attacks, so everything else you do can be compromised.

All these things take effort. It should be very obvious by now Western governments are not going to reverse the total surveillance agenda. If privacy is really that important to you, then you need to make an effort to protect it.
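
The same GnuPG machinery that Enigmail drives can also be scripted; here is a minimal sketch using the third-party python-gnupg wrapper (the home directory, address and passphrase are placeholders, and generating a 4096-bit key can take a while on a machine short of entropy):

    import os
    import gnupg

    home = os.path.expanduser("~/.gnupg-demo")  # throwaway keyring for the example
    os.makedirs(home, exist_ok=True)
    gpg = gnupg.GPG(gnupghome=home)

    key_input = gpg.gen_key_input(
        key_type="RSA",
        key_length=4096,
        name_email="alice@example.org",
        passphrase="use-a-real-passphrase",
    )
    key = gpg.gen_key(key_input)  # slow: gathers entropy for a 4096-bit key

    encrypted = gpg.encrypt("meet at noon", key.fingerprint)
    print(str(encrypted)[:60])  # ASCII-armored "-----BEGIN PGP MESSAGE-----" output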

Stephanie White

This is really critical, need-to-know information for all of us. Now that Clapper has said there have been transgressions, I think it only a matter of time before there are adjustments within the NSA (and with the law) to make everything (appear to be) copacetic.

However, I think the problem...the thing that really contributes to the problem's intractability, are the involved interests, i.e. the people who profit from such a system. I don't think we've really seen that, yet. Is there a way to get at that information? I imagine the complexity/intersection of the network of interests is mind-boggling.

Andras Donaszi-Ivanov

How come the international community (e.g. every F-ing country in the world) is NOT rioting over the US reading every single freaking email between foreigners (aka "the rest of the world")? How is this not an issue? It just boggles my mind.

andrewjs -> Andras Donaszi-Ivanov

Because they're doing it as well.

bivvyfox

This is scandalous. Anyone with basic security clearance can tap any computer. This could easily be misused by any one of many employees.

It's annoying that we CANNOT do anything about it, other than work on paper and give up technology! It's annoying that because this was not public it was not accountable. I doubt they have many systems in place to check for abuse by staff. They just want to nail terrorists. More safeguards need to go to the people! We need to be told more about this and the safeguards, and anyone lying about it should not be let off because they were trying to get terrorists. Fair enough going after terrorists, but in my opinion this is a mess they created by invading Iraq and by drone attacks in Pakistan.

They paid an informant to get info saying Iraq had weapons of mass destruction - when they didn't. The guy simply wanted payment and made something up! And drone attacks in civilian areas, often killing innocents, only help fuel more terrorists. They are going to all lengths to stop terrorists, including spying on all of us and invading our rights to privacy. It's about time a new tactic was started, a re-think of this whole mess they created.

BStroszek

Acknowledging what he called "a number of compliance problems", Clapper attributed them to "human error" or "highly sophisticated technology issues" rather than "bad faith".

Excuses, excuses. Clapper is simply incapable of telling the truth or coming clean, even when he knows everyone else knows that he is lying. Mythomaniac/compulsive liar/pathological liar/congenital liar - take your pick.. It must be force of habit.

( Interesting article : How America's Top Tech Companies Created the Surveillance State )

TPSpacedOut

Interesting to read the 2008 Feb copy of XKeyscore. Now what does the 2013 version do more? All https are belongs to NSA?

yermelai

So morally wrong! So ineffective against terrorism (it did nothing to spot the Tsarnaevs)! Such a waste of money at a time when people need jobs!

And a crime against the environment as server farms around the world now consume as much energy as if they were the 5th largest country in the world!

Anna Apanasewicz

How did Orwell know?

Bluestone -> Anna Apanasewicz

It's all part and parcel of the nature of human beings. Documentation of observed behaviour for thousands of years. This latest flavour is just a variation brought on by a change in social interaction brought about by a technological development.

Technology enables. Then it's a question of what we refrain from.

SaveRMiddle

It all screams of only one thing......A government with civil unrest concerns as the magnitude of America's inequality gap continues to grow rather silently like an unacknowledged/downplayed disease.

BuddyChrist

The NSA documents assert that by 2008, 300 terrorists had been captured using intelligence from XKeyscore.

1- Let's be seeing the evidence of that then. None you say?

2- That's particularly funny as due to laws such as the Patriot Act - as one can now be labeled 'a terrorist' for simply wearing a jaunty hat or going to knitting group on wednesdays.

Pathetic justification for what is in effect a New Nazi spy machine.

M Zaki

And Americans think they live in a democracy. The USA is a Police State, catering to the corporations and the wealthy. And just because you get to choose between two dictators (chosen by and from the wealthy) every four years, doesn't mean you have a democracy.

nationalbar

"Mike Rogers, the Republican chairman of the House intelligence committee, said of Snowden's assertion: "He's lying. It's impossible for him to do what he was saying he could do.""

Looks like it is Mike Rogers, the republican chairman of the house intelligence committee, who is doing the lying. Oh, and by the way, "His wife, Kristi Clemens Rogers, was previously President and CEO of Aegis LLC, a contractor to the United States Department of State for intelligence-based and physical security services." "Aegis LLC is a U.S. company and a member of the worldwide Aegis Group which is based in London with overseas offices in Afghanistan, Iraq and Bahrain. "

It is Snowden who will go down in history as the real hero here.

usawatching -> nationalbar

"It is Snowden who will go down in history as the real hero here."....I fear you are correct and that Snowden did us a favor and will be pilloried for it.

I understand the need to protect us from terrorists, but not if our own government becomes one of the terrorists. this is very much like the IRS scandal, showing government gone crazy with its own power and size.

InoWis1

Release List of Keywords Used to Monitor Social Networking Sites

Maybe we should ALL put these on the end of every email.... that would keep-em busy

http://politicalblindspot.org/dept-of-homeland-security-forced-to-release-list-of-keywords-used-to-monitor-social-networking-sites/

Lydmari

Witness in Senate J hearings: "The devil is in the details." Baloney. The devil is in collect all with no oversight.

Ousamequin

"I, sitting at my desk," said Snowden, could "wiretap anyone, from you or your accountant, to a federal judge or even the president, if I had a personal email".

While I am not sure what the significance of this is, this is very, very similar to at least one of the capabilities that a fictional NSA whistleblower revealed in Episode 8 of LAST YEAR'S first series of Aaron Sorkin's The Newsroom. That fictional three-minute exchange referenced other capabilities of Prism and Boundless Informant using the fictional program name of Global Clarity (a name rather eerily similar in its Orwellian doublespeak to the names of the actual programs themselves).

Here's the video link:

http://www.youtube.com/watch?feature=player_embedded&v=Ke4IVa4TbXc

wordsdontmatter -> Ousamequin

What about that silly show 'Person of Interest', where it pretty much says 'you are being watched', etc., and tries to spin a framing that even though 'I' created this, they were only using it to stop terrorists but did not care to stop 'crime'.. so the author of the system and his 'muscle' for hire work together and tap into the system through backdoors..

The show got boring after a few episodes, as it is the same thing each time. I imagine this could just have been a way of softening or justifying this in the back of people's minds, as if it were a good thing. Hell, the military has a Hollywood branch, and they push for control of scripts and give free hardware or assistance for movies that push their agenda.

I still think there are way too many police and lawyer type shows since returning to the States in 2008. I left in 2000 and came back, and it seems like there are 5 to 10 of these security shows on all day (CSI, Law and Order, etc.). CSI even has different cities, haha! I am sure this is just another security meme to mass-convince people that these are servants of justice, rather than the reality of what it really is and what you see each day.

fnchips

I want my privacy back!!

So I canceled Facebook - never use Google anymore - never Twitter - hardly e-mail - all that's left is to stop posting on the blogs of the Surveillance State (the Internet!)

But thank you Glenn - at least I know where Mona gets all her information from! (just joking - buddy!)

hh999922 -> fnchips

do you still use the internet at all?

if so, they're monitoring you every time you type an address in.

freedaa

As an avid Internet user since the mid 90s, I am beginning to fear the future. Will we all need to use VPNs, personalised HTTPS or full encryption to maintain some semblance of privacy? Or do we need to use the following statement on the top of our browsers -

"Abandon all hope, ye who enter here."

Maybe Dante had some serious vision.

zangdook

Sorry if I'm being slow, but where are they sucking up this data from? Are they tapping into undersea cables, or is it just things which pass through the USA and close allies? I'm guessing it's absolutely everything they can possibly get their hands on.

Davey01 -> zangdook

Look at PRISM and Boundless Informant documents and slides. If you have your own email server they would not be able to read those emails. If you have your own web server they will not be able to log in as described above. Hence they push your reliance on cloud and 3rd party services.

boilingriver -> zangdook

Look at Glenn's article "How the NSA is still harvesting your online data". (On December 31, 2012, an SSO official wrote that ShellTrumpet had just "processed its One Trillionth metadata record".)

wordsdontmatter -> Davey01

Actually, I felt it revealed the dumbed-down nature of their audience, the trained lap dogs in the info session. I mean, these are the people actually doing these things, just as others actually carry out the drone strikes or the torture. They do not want critical thinkers, or anyone who has been in jail, has bad credit, or has a streak of resistance to authority, because those people are hard to mentally dominate and control. I am sure this is why they put so many people in jail: they are threats to this kind of social dominance.

MrSammler

A top secret National Security Agency program allows analysts to search with no prior authorization through vast databases containing emails, online chats and the browsing histories of millions of individuals, according to documents provided by whistleblower Edward Snowden.

Google Search will also do the job.

Davey01 -> MrSammler

Google stores all your search history. https://www.ixquick.com

Michael Westgate

Maybe I'll buy a typewriter also. Back to good old snail mail....

I'm just thankful they cannot see me!

evenharpier -> evenharpier

Spencer Ackerman ‏@attackerman 4m Deputy Attorney General Cole referred to needing "all" Americans phone data for investigations which IIRC think is 1st explicit reference. Retweeted by Julian Sanchez

toombsie

So according to this document, people overseas can never discuss Osama bin Laden (the most famous terrorist in history) because to even do so may draw the attention of the NSA. Normal people bring up Osama bin Laden all the time -- my dad makes jokes about him.

Just crazy they think they can victimize everyone in the world with this technology and pry into everyone's private conversations as they continually compile larger and larger Kill Lists rather than addressing the root problem of terrorism -- which is people hate us because of our foreign policy. Change our foreign policy, stop being an empire projecting power all over the world, murdering people as we please with no repercussions, and maybe the victims of US aggression will change their feelings about America.

Tiger184

Yet over 1 million people here illegally in the USA have been conveniently "lost" by Homeland Security. They can't find them they claim. Guess the time that was supposed to be spent tracking these illegals was spent by snooping on innocent Americans. How illegal is that?!

LostintheUS

Excellent. Thank you, Glenn and thank you Edward Snowden.

The curtain is pulled back.

Wendell Berry wrote: "The more tightly you try to control the center, the more chaos rages at the periphery". Time for the periphery to rage.

And just think, schools are being closed, Americans are going hungry and cold and our tax dollars are paying for this.

CharlesSedley

Missing in this entire brouhaha is that our privacy is being violated not only by the government (NSA) but by corporations outside of government control.

Snowden was an employee of a corporation, Booz Allen, not the NSA; Booz Allen is 100% owned by the Carlyle Group.

I still wonder if Americans would be on board with the NSA if one asked them the simple question.

"Are you comfortable with the fact that your national secrets are in the hands of a company* that was recently owned by the bin Laden Group?"

*The Carlyle Group

bushwhacked -> CharlesSedley

It didn't seem to bother Americans that one of George W. Bush's first business ventures was financed by the bin Ladens.

They elected the cretin to the Presidency twice -- once before 9/11, once after.

ID614495

Your average citizen of any country will probably not have any dangerous data worth searching. The issue is that if people are looking at extremist sites, whether sexual or terrorist, it is they who should be worried. The media, the "Fourth Estate" of western democracies, does behave at times like a twin-headed monster: challenging when it suits them, but alarming their readership at other times. Spying has existed for centuries; after all, Sir Francis Walsingham, Elizabeth I's spymaster, set the template for spies, yet he also secured England's stability.

Espionage, electronic or otherwise, is a necessity of a stable & secure democracy.

JimTheFish ID614495

That's naive in the extreme. To put it kindly.

The issue is if people are looking at extremist sites whether sexual or terrorist, it is they who should be worried

Not in the slightest. There are also 'enemies of the state', as well as those who are just plain considered slightly dubious by those in power. 'Wrongdoers' at various times in history have included gays, blacks, homosexuals, Jews, communists. You'd be stupid in the extreme to think that spy networks throughout history haven't also been used as an instrument of subjugation by states against their own people, or against those considered 'the enemy within'.

Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety (Ben Franklin)

RicardoFloresMagon

From the presentation:

Show me all the VPN startups in Country X, and give me the data so I can decrypt and discover the users

* These events are easily browsable in XKEYSCORE

Holy. Shit.

RicardoFloresMagon -> Rastafori

you didn't really believe in Tor did you? They'd never build a system that they couldn't themselves hack.

You are mistaken.

1. This is not about Tor, this is about VPN providers. Different thing. Think StrongVPN, proXPN, Ipredator, WiTopia, PureVPN, VyprVPN, etc.

2. Tor is open source software. While the Navy had a hand in its development, it is maintained by people like Jake Applebaum and a whole bunch of volunteers, who are on the frontline in the fight against surveillance. Any dodgy stuff the navy would have put in would have been found.

Tor's vulnerability comes from the limited number of exit nodes, and the likelihood that the govt. owns a good portion of them, which would allow them to see decrypted traffic. But Tor's architecture makes it impossible (OK, let's say, really really hard) to trace back where it originally came from.

diddoit

'A Surveillance Society?' (HC 58-I) Home Affairs Committee's conclusions and recommendations include:

Many politicians are sensible; the problem is, they're ignored by the executive.

ChicagoDaveM

NSA's XKeyscore can just as easily p0wn any browser they want. If they haven't built that, they easily could.

How the NSA Could Hack (Almost) Any Browser

A little trick called 'packet injection'

The feds can theoretically use your computer against you to mount an almost untraceable attack - by butting in on your electronic conversation.

This technique, known as "packet injection," works because, absent cryptographic protection, a software client can not distinguish an attacker's reply from a legitimate reply. So all an electronic wiretapper needs to do is examine the traffic, determine that it meets some criteria and inject his own response timed to arrive first.

Most famously, the "Great Firewall of China" uses this technique. It simply watches all requests and, when it discovers that a client desires banned content, the Great Firewall injects a reply which the client interprets as ending the connection.

So, speculatively, what could an agency like the National Security Agency, with an avowed interest in offensive tools, an arsenal of exploits, the budget to simply buy exploits from willing sellers and subject to allegations of widespread hacking do with a global network of wiretaps? Why, attack practically any Web browser on the planet, whenever they want.

All the NSA needs to do is provide its analyst with a point-and-click tool and modify their wiretaps appropriately. After identifying the computer of a target, the global wiretaps could simply watch for any Web traffic from that computer. When the victim's browser requests a script from somewhere on the Web, the odds are good it will pass by the wiretaps. When a wiretap sees such a request, it injects a malicious reply, using a zero-day attack to ensure that the victim gets compromised.

If the attack itself only resides in memory, it would hardly leave a trace on the victim's computer, as memory resident attacks disappear when the computer is reset. Normally, this would represent a significant limitation, but with the ability to so easily infect browsers, a hypothetical attacker could easily reinfect their victims.

A sophisticated network monitor might detect injected packets based on race-conditions (after all, the real reply still arrives, it simply arrives late). But since the Internet is messy, such race conditions might not always occur and, even if they do occur, may simply indicate a bug rather than an attack. Even more sophisticated taps could also block the legitimate reply, eliminating this anomaly.
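The race-condition detection idea described in the paragraph above can be sketched in a few lines. This is a minimal illustration rather than a production monitor; it assumes the third-party scapy library, root privileges for live capture, and a hypothetical interface name.

# Minimal sketch of the race-condition detector described above: if two TCP
# segments carry the same sequence number but different payloads, one of them
# may have been injected. Assumes the scapy library and root privileges for
# live capture; the interface name "eth0" is an assumption.
from scapy.all import sniff, IP, TCP, Raw

seen = {}  # (src, dst, sport, dport, seq) -> first payload observed

def check(pkt):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        return
    ip, tcp = pkt[IP], pkt[TCP]
    key = (ip.src, ip.dst, tcp.sport, tcp.dport, tcp.seq)
    payload = bytes(pkt[Raw].load)
    if key in seen and seen[key] != payload:
        print("possible injected segment:", key)
    else:
        seen[key] = payload

sniff(iface="eth0", filter="tcp", prn=check, store=False)

As the article notes, a duplicate with a differing payload may just be Internet messiness, so a real tool would need whitelisting and rate limiting on top of this.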

Detecting the attack payload itself is also a very hard problem. There are a couple of companies developing products which attempt to detect zero-day attacks, but overall this represents areas of active research and development.

Finally, even if a victim detects an attack, attributing such an attack to a particular intelligence agency is also difficult. The NSA and its U.K. friends in the GCHQ can build this. And they aren't the only ones: any country with sufficient Internet transit passing through or near their borders might deploy such a system. Germany and France probably have enough network visibility to build something like this on their own soil.

Other countries would need to deploy out-of-country wiretaps, as Russia and particularly China are less used for transit, while Israel's native reach is probably limited to Middle Eastern targets. Of course, any country that wants to attack their own citizens this way can simply buy an off-the-shelf tool for a few million dollars (Google translate).

Again, I know of no evidence that the NSA or any other intelligence agency has built or is using such universal attack tools. But as we are now all bystanders in what appears to be an escalating espionage conflict, we may need to consider the Internet itself hostile to our traffic. Universal encryption of our messages does more than protect us from spies, it protects us from attack.

Finally, the electronic spooks need to understand that difficult to detect and attribute does not mean impossible. With public revelations of both NSA and Chinese hacking on the global radar, as well as commercial malware, private companies and researchers are focusing considerable talent on detecting nation-state hacking.

Nicholas Weaver (@ncweaver) is a researcher at the International Computer Science Institute in Berkeley and a visiting researcher at the University of California, San Diego. His opinions and speculations are his own.

hh999922 -> ChicagoDaveM

this is all vastly too complex.

why would they bother hacking your computer, when they can simply read off all the http and smtp requests you send from it, which go through their servers?

they'd know what you're reading, who you're emailing, etc.. and where you are from your IP address. all the intelligence they'll need.

if they need to read something on your computer, they'd not hack. they'd simply stove your door down at 6am and take it.

MarkLloydBaker

If NSA officials continue to claim that they're only storing metadata then, in addition to pointing to Glenn's article, someone needs to publicly ask them:

Then what is Bluffdale for?

Twenty trillion phone call records (metadata) could literally be crammed into a single PC with a big RAID (array of hard disks). Ten such PCs in a rack would occupy 4-6 square ft. of floor space. The million square ft. Bluffdale facility sure as hell ain't for storing metadata!
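As a rough cross-check of the numbers above, here is the arithmetic spelled out; every figure below is an assumption picked for illustration, not a measured value. The total swings with the assumed record size, but even the generous end is a rack or two of dense storage, nowhere near a million square feet.

# Rough back-of-envelope arithmetic for the metadata-storage claim above.
# Record size and per-disk capacity are assumptions chosen for illustration.
records = 20e12            # twenty trillion call-detail records
bytes_per_record = 100     # assumed size of one metadata record
disk_tb = 4                # assumed capacity of one 2013-era hard disk, in TB

total_tb = records * bytes_per_record / 1e12
disks_needed = total_tb / disk_tb
print(f"{total_tb:,.0f} TB total, about {disks_needed:,.0f} disks of {disk_tb} TB")
# At 100 bytes/record this is ~2,000 TB (2 PB), i.e. on the order of 500 disks;
# at 10 bytes/record it shrinks to ~200 TB. Either way, metadata alone does not
# come close to needing a million-square-foot facility.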

JCDavis -> MarkLloydBaker

Exactly. The present system in the US and UK once had a 3 day buffer for technical reasons, but with the new storage facilities this can be increased to years, so that with a search warrant (or not), they can search back in time to see everything we ever did online and everything we backed up to the cloud, including revisions and deletions.

paulzak

Completely unacceptable. Makes me want to begin searching on all manner of subjects I'm not actually all that interested in to gum up the works. That, no doubt would earn me a visit from a pair of nice agents as happened with the guy in Germany who invited Facebook friends to walk the fence line of an NSA facility.

pontpromenade

Question for the technically savvy and/or more careful readers of the article:

Doesn't the NSA system require them to hack into (or get permission to use) ISP and/or website/social network server logs?

BobJanova -> pontpromenade

I think it is intercepting packets in transit so it doesn't need to go digging in logs. It needs permission to place surveillance on major Internet routers but I think it's well established that they do that already.

toffer9 -> pontpromenade

Additionally, if you mentioned the word 'permission' to the NSA, they would laugh in your face (before having you put in an underground cell).

ALostIguana

OK. That's creepy. Metadata to build associations is one thing. Actually data-mining the activities and content of messages is quite another.

itsmerob

If I click recommend on an anti-NSA comment, will the NSA log this, and will the U.S./U.K. governments deem me to be a terrorist, potential terrorist, terrorist sympathizer, someone who is a potential threat, politically unreliable or a potential troublemaker? Perhaps this whole spying story serves governments because people will be very careful what they say. In effect, silencing dissent.

GM Potts -> itsmerob

Exactly, self-censorship from fear.

I am so afraid that I listen to you,
Your sun glassed protectors they do that to you.
It's their ways to detain, their ways to disgrace,
Their knee in your balls and their fist in your face.
Yes and long live the state by whoever it's made,
Sir, I didn't see nothing, I was just getting home late.

- L. Cohen

Birbir

The biggest hypocrites on the planet. The American Dream where your every keystroke,email and phone and life in general is being monitored everyday.

The despicable drone meisters who will wage war on the whole world wide web and world.

Just plain evil.

The American Dream they say.

McStep

the nsa is collecting MY internet data? what an absolute waste of the US taxpayer's money...

stevecube

And with this power will come the corruption... We are entering an age when citizens might be targeted for 'thought crimes'; posting to sites like 'The Guardian' could be interpreted as 'aiding the enemy'. This nebulous grey zone is a scary new world, and as the US spirals into further decay one ponders which citizens it will be rounding up first -- and for what. The captains of America's industry and halls of power have turned their backs on the fundamentals of their constitution and have convinced themselves that the fascist state that now prevails is all about protecting their security. Sad, sad, sad...

Even sadder, the apathy of the average American at the wholesale removal of their fundamental rights. Things will only get worse....

Budanevey

Presumably this software can report the details of everyone who's downloaded a pirated copy of 'Dexter' and invoice or fine them?

So why the need for SOPA, PIPA, COICA, and ACTA? Or were these legislative proposals intended to monetise full-take activities by the partner-states and provide political and legal cover for their secret surveillance of the Internet?

It seems to me that there has been years of elaborate lying and deception going on by politicians and parliaments, including use of taxpayers money and the involvement of businesses and probably banks, whilst Big Brother has been built in the background.

FrewdenBisholme

Sorry, but I'm embarrassed by how technically illiterate the drawing of conclusions from the evidence is in this article.

I don't want to be surveyed any more than you do. But to fight this sort of intrusion you really have to be careful and precise.

The presentation shown explains some (very unclear) capability or other to do with HTTP. It then says they're interested in HTTP because that's the protocol most typical Web activity uses. It does not say they can search all HTTP activity for any typical user!

Without knowing what the "sessions" shown in the first slide are, the scope of this capability is totally unknown.

This is so overblown. It's really no better than a press release story where the journo laps up all the claims some company makes about their products!

RealEscapist -> FrewdenBisholme

I can clear that up for you very easily.

Back in the late '90s and early 2000s, there was a similar program (which is actually probably what we know as PRISM now) which was set up in AT&T labs in Atlanta. It searched for keywords. Well, people were so offended that there was a mass movement (thank you 4Chan) to spam the internet with uses of the keywords in chats and websites so as to render tons of garbage data to that system. The govmt (supposedly) had to shut that system down because it was no longer useful. That's probably how PRISM came about.

The way they did it then and do it now is that when you fire off data to a website, a DNS server has to translate your command to an IP address and any other commands that are in that URL are then transferred to the webserver at the target. He who controls the DNS server controls unfettered access to all of your browser activities, and the DNS logs show who connected from where at what time. The relationship is easy to assemble from there.
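A minimal sketch of the point above: plain DNS query logs already yield a who-looked-up-what-and-when timeline. The log format and file path below are made-up simplifications for illustration; real BIND or dnsmasq logs differ in detail, but the idea is the same.

import re
from collections import defaultdict

# Assumed, simplified log format: "<timestamp> query: <name> from <client-ip>"
LINE = re.compile(r"^(?P<ts>\S+)\s+query:\s+(?P<name>\S+)\s+from\s+(?P<client>\S+)")

timeline = defaultdict(list)  # client IP -> [(timestamp, queried name), ...]
with open("/var/log/dns/query.log") as log:   # hypothetical path
    for line in log:
        m = LINE.match(line)
        if m:
            timeline[m["client"]].append((m["ts"], m["name"]))

# Print the first few lookups per client: who asked for what, and when.
for client, queries in sorted(timeline.items()):
    print(client, queries[:5])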

Atlanta continues to be one of the largest DNS server farms in the USA. And as you know, most DNS servers are in the demesne of the US in general. Add to it that most are owned by AT&T, Verizon, etc....and there you have it.

rustyschwinnToo
In the PPT (other than the use of Linux which doesn't surprise me), there are three slides that I find interesting.

Slide 6: "Where is".

This slide boasts of server locations across the world.

Assuming each dot represents a server location (it's unclear whether these are access locations or snarfing points), the UK and (oddly) Central America are well covered.

The few in Africa interest me because a few weeks ago I was talking to a friend who is in the data centre chiller business. He had seen a summary request for chiller services for a 5,000 rack data centre build in an African country and did I know who it was for? Of course, I didn't. But I joked that perhaps the NSA was doing extraordinary rendition of data to different jurisdictions with lax legal constraints just like outsourcing torture.

Now I'm not sure it was a joke.

Slide 17: "Show all the VPN startups in Country X"

The slide also boasts that the system can decrypt the data and that no other system "performs this on raw unselected bulk traffic".

We have got used to believing that VPN is the ubiquitous secure way to do business to business transactions and connectivity.

This slide is proof that the NSA is capturing much more commercially damaging data than tracking somebody using an anonymous proxy.

This is very dangerous: one leak of the data and global commerce could be disrupted in a tidal wave of confidentiality breaches.

It also implies that they are sniffing the data at a very low level (I think). To catch a VPN session as it fires up requires capturing the initial, open, connection. To decrypt implies they have a way to capture the required key and cert exchanges.

Security absolutely depends on absolute trust of the layers further down the ISO stack and of the hardware. This one slide implies that everyone's cable modem could, actually, be a spy in the room.
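To make the "capturing the initial, open, connection" point concrete, here is a minimal sketch, assuming the scapy library, that merely flags IKE and TLS session starts on the wire. It decrypts nothing; it just shows that the handshakes themselves travel in the clear and are trivially visible to anyone sitting on the path.

# Flags VPN/TLS session starts; does not decrypt anything. Assumes scapy.
from scapy.all import sniff, IP, TCP, UDP, Raw

def flag_handshake(pkt):
    if not pkt.haslayer(IP):
        return
    if pkt.haslayer(UDP) and pkt[UDP].dport in (500, 4500):
        print("IKE (IPsec VPN) negotiation from", pkt[IP].src)
    elif pkt.haslayer(TCP) and pkt[TCP].dport == 443 and pkt.haslayer(Raw):
        data = bytes(pkt[Raw].load)
        # TLS record type 0x16 = handshake, handshake type 0x01 = ClientHello
        if data[:1] == b"\x16" and data[5:6] == b"\x01":
            print("TLS ClientHello from", pkt[IP].src)

sniff(filter="udp port 500 or udp port 4500 or tcp port 443",
      prn=flag_handshake, store=False)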

Slide 24 "Show me all the exploitable machines in country X"

This smells of bot network stuff. It would also imply that cooperation with Microsoft may extend beyond Outlook encryption to the operating system itself.

There is precedent for this. One of my mentors in my computer youth (in the early '80s) was a defector from what was then an "iron curtain" country. Before he got out, his job, as a computer scientist, was testing hardware from the West. He discovered changed microcode on a mainframe on at least one occasion that did some subtly naughty stuff.

This change can only have been made by the manufacturer with the US government's connivance -- there were very strict export controls on computers at the time, and the reason the receiver was suspicious in the first place was that this computer was a recent model exported with less than the usual massive paperwork delay.

And no, it wasn't a bug.

This leads me to ponder how many counterfeit copies of Windows the US government distributes abroad :)

RicardoFloresMagon -> rustyschwinnToo

I agree with your points, especially the last two, about slides 17 and 24.

We kind-a knew already that the NSA would and could exploit, being the biggest buyer on the zero-day market, Stuxnet, etc., and if the Chinese can industrialize their hacking, so certainly can the NSA. The Shodan-like search engine the presentation talks about just makes this really easy, and I reckon they'll have a Metasploit-like tool as well, botnets, as you say, and command & control interfaces that are user-friendly and likely don't require a lot of technical skill. (Even if the interfaces in the PPT themselves look straight out of the '90s.)

The VPN bit has shaken me to the core.

JoGrimond

The tracking that the Guardian chooses to permit (and for the most part pays for) on this page includes:
Google+1
Linked In
Twitter Badge
Criteo
Google Adsense
Foresee Results
Optimisely
Real Media
Netratings Site Census
Comscore Beacon
Chartbeat
Revenue Science

I do not know what 'DoNotTrackMe' fails to block over and above this. I don't much care.

Davey01 -> JoGrimond

They also use jQuery and API libraries from Google, which are easily hosted on individual servers. Webmasters know this. ;)

eNgett -> JoGrimond

I see all these too, but DoNotTrackMe is working for me on this page.

JoGrimond -> eNgett

I was perhaps unclear. DoNotTrackMe says it is blocking all these, and I have no reason to doubt it. However, we cannot know what DoNotTrackMe is allowing through without telling us. God help the poor NSA staffer tasked with monitoring CiF - if there is one.

I am white, male, middle aged, and suburban. Perhaps if I were younger, black, and urban I might get hassled by the police on a regular basis. For this, they would need no intelligence tools at all, it would be enough for them to see me on the street.

If that happened, just maybe I would be tempted to ask them why they did not collate information on people who really might be up to no good.

marbleflat

A top secret National Security Agency program allows analysts to search with no prior authorization through vast databases containing emails, online chats and the browsing histories of millions of individuals, according to documents provided by whistleblower Edward Snowden.

So they could nail all the spammers, phishing scammers and 419 artists in no time if they wanted to, yes?

6chars

It's the top story on Fox News:

http://www.foxnews.com/

_"Who's Shopping at NSA Data 'Store'?"_

[Jul 12, 2013] IT Analyst Dan Kusnetzky Talks about Cloud Computing and Cloud Hype (Video)

"cloud = buzzword marketing" vs. "cloud what's different", and who in reality is actually doing cloud things.
July 12, 2013 | Slashdot

Dan Kusnetzky and I started out talking about cloud computing; what it is and isn't, how "cloud" is often more of a marketing term than a technical one, and then gradually drifted to the topic of how IT managers, CIOs, and their various bosses make decisions and how those decisions are not necessarily rational.

What you have here is an 18-minute seminar about IT decision-making featuring one of the world's most experienced IT industry analysts, who also writes a blog, Virtually Speaking, for ZDnet.

KingofGnG

What is cloud computing? Simple: lies, lies and bullshit. That's all. http://kingofgng.com/eng/2013/07/10/the-many-lies-of-cloud-computing/ [kingofgng.com]

bigtech

Re:Don't reference Dilbert. If you must, get it right.

Nailed it with the Dilbert comic. I mean *nobody* jumped on the SQL database bandwagon fad.

Anonymous Coward

Re:Don't reference Dilbert. If you must, get it right

http://dilbert.com/strips/comic/1995-11-17/

Anonymous Coward

Cloud is nonsense (Score:0)

See Criticisms of Cloud Computing

Anonymous Coward

worthwhile 18 minutes (Score:0)

if you want to sell or market cloud services in any way, this is a good interview. thanks!

Fubari

a bit harsh. (Score:2)

So far, posts are being a bit harsh.

The only serious criticism I would offer is that they chose the wrong audience for an interview like this: I suspect Slashdot is more about practicing technologists and hobbyists, while Kusnetzky's observations seemed like they'd be more useful to CIO-level people, enterprise architects, or perhaps non-techies who need to cope with clouds.

I thought Kusnetzky made some useful distinctions: "cloud = buzzword marketing" vs. "cloud, what's different", and who in reality is actually doing cloud things. Kusnetzky was also concerned about how to measure what people are actually doing at an industry level; I thought that was interesting, since (as a practitioner) I tend to focus on specific projects, one implementation at a time. *shrug* They're looking at the "cloud thing" from a different perspective than I do, which I thought was interesting.

Re: the Dilbert example (as remembered: "Do you want a red or blue database?" vs. the actual strip: Dilbert: "What color do you want that database?" Boss: "I think mauve has the most RAM."), it was a fine point - it got the idea across (e.g. sometimes management runs with trade-magazine fads without understanding them), even if he didn't correctly quote canonical Scott Adams.

Execution: the sound was rough, especially bad on the interviewer's side (it was very hard to understand what Roblimo was saying). That made it harder to watch than it needed to be. I probably won't recommend this video to anybody because the audio was rough enough that it wasn't worth trying to sift through for the gems of ideas... maybe the transcript will be better.

Demonoid-Penguin

XKCD reference?

Funniest cloud joke I could find :D

Man reveals the simple truth about Cloud Computing! [scottferguson.com.au]
Marketing and Sales hate him.

I wonder if one of his "major" clients enjoys his sense of humor...

Dareth

My Favorite Dilbert Cloud Strip

Dilbert Cloud Strip [dilbert.com]

[Jul 09, 2013] Cloud computing is a trap, warns GNU founder Richard Stallman

September 29, 2008 | The Guardian

Richard Stallman on cloud computing: "It's stupidity. It's worse than stupidity: it's a marketing hype campaign." Photograph: www.stallman.org

The concept of using web-based programs like Google's Gmail is "worse than stupidity", according to a leading advocate of free software.

Cloud computing – where IT power is delivered over the internet as you need it, rather than drawn from a desktop computer – has gained currency in recent years. Large internet and technology companies including Google, Microsoft and Amazon are pushing forward their plans to deliver information and software over the net.

But Richard Stallman, founder of the Free Software Foundation and creator of the computer operating system GNU, said that cloud computing was simply a trap aimed at forcing more people to buy into locked, proprietary systems that would cost them more and more over time.

"It's stupidity. It's worse than stupidity: it's a marketing hype campaign," he told The Guardian.

"Somebody is saying this is inevitable – and whenever you hear somebody saying that, it's very likely to be a set of businesses campaigning to make it true."

The 55-year-old New Yorker said that computer users should be keen to keep their information in their own hands, rather than hand it over to a third party.

His comments echo those made last week by Larry Ellison, the founder of Oracle, who criticised the rash of cloud computing announcements as "fashion-driven" and "complete gibberish".

"The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do," he said. "The computer industry is the only industry that is more fashion-driven than women's fashion. Maybe I'm an idiot, but I have no idea what anyone is talking about. What is it? It's complete gibberish. It's insane. When is this idiocy going to stop?"

The growing number of people storing information on internet-accessible servers rather than on their own machines has become a core part of the rise of Web 2.0 applications. Millions of people now upload personal data such as emails, photographs and, increasingly, their work, to sites owned by companies such as Google.

Computer manufacturer Dell recently even tried to trademark the term "cloud computing", although its application was refused.

But there has been growing concern that mainstream adoption of cloud computing could present a mixture of privacy and ownership issues, with users potentially being locked out of their own files.

Stallman, who is a staunch privacy advocate, advised users to stay local and stick with their own computers.

"One reason you should not use web applications to do your computing is that you lose control," he said. "It's just as bad as using a proprietary program. Do your own computing on your own computer with your copy of a freedom-respecting program. If you use a proprietary program or somebody else's web server, you're defenceless. You're putty in the hands of whoever developed that software."

[Jul 05, 2013] European firms 'could quit US internet providers over NSA scandal'

The Guardian

European commission vice-president says American cloud services providers could suffer loss of business

Pointing to the potential fallout from the disclosures about the scale of NSA operations in Europe, Kroes, the European commissioner for digital matters, predicted that US internet providers of cloud services could suffer major business losses.

"If businesses or governments think they might be spied on, they will have less reason to trust cloud, and it will be cloud providers who ultimately miss out. Why would you pay someone else to hold your commercial or other secrets if you suspect or know they are being shared against your wishes?" she said.

"It is often American providers that will miss out, because they are often the leaders in cloud services. If European cloud customers cannot trust the United States government, then maybe they won't trust US cloud providers either. If I am right, there are multibillion-euro consequences for American companies. If I were an American cloud provider, I would be quite frustrated with my government right now."

... ... ...

Kroes warned that US firms could be the biggest losers from the US government's voracious appetite for information.

"Concerns about cloud security can easily push European policy-makers into putting security guarantees ahead of open markets, with consequences for American companies. Cloud has a lot of potential. But potential doesn't count for much in an atmosphere of distrust."

[Jul 04, 2013] EU To Vote On Suspension of Data Sharing With US

July 04, 2013 | Slashdot

eulernet

Side effects

There is an interesting side effect about this data problem: the cloud.

Currently, the biggest cloud providers are based in the US. But due to the NSA disclosures, most companies cannot afford to give their data to outside countries, especially since it's now clear that the NSA spied on European companies for economic purposes.

So local cloud providers will quickly emerge, and this will directly impact Google and Amazon's services. US clouds cannot be trusted anymore.

wvmarle

Re: Side effects

Agreed, fully.

Recently I had the need of a virtual server - just to run my web site, host my documents, and various other tasks. So searching for this I specifically searched for local Hong Kong companies (which is where I live), to host such a server. And a short search later I found one that offers cloud servers, just what I needed.

A few months ago I was thinking about the same issue, and then I was considering Amazon. I am a customer of Amazon already, for their Glacier cold storage service, where I keep back-ups (all encrypted before they leave my systems). They have a good reputation and overall very good prices; however, it being a US company made me not even consider them now.
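The "encrypted before they leave my systems" approach is easy to sketch. This assumes the boto3 and cryptography libraries; the key file name and vault name are hypothetical placeholders, and a real setup would also handle key storage and archive inventories.

# Encrypt locally, then upload the ciphertext to Glacier. Assumes boto3 and
# cryptography; file names and the vault name are hypothetical placeholders.
import boto3
from cryptography.fernet import Fernet

key = open("backup.key", "rb").read()      # generated once with Fernet.generate_key()
ciphertext = Fernet(key).encrypt(open("documents.tar", "rb").read())

glacier = boto3.client("glacier")
resp = glacier.upload_archive(vaultName="my-backups", body=ciphertext)
print("archive id:", resp["archiveId"])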

And that's a direct result of Snowden's revelations.

TheP4st

Re:Side effects

US clouds cannot be trusted anymore.

They never could; the only difference is that now it is confirmed and I can enjoy saying "I told you so!" However, I would not trust any cloud service with important data, regardless of its country of origin.

[Jun 16, 2013] Video: Woz explains how cloud computing is turning us into Soviet Russia

Apple cofounder Steve Wozniak had a quick chat with FayerWayer earlier this week, and the site asked him about a wide range of topics, including the new look of iOS 7 and the recent revelations about the NSA's PRISM surveillance program. Wozniak's most interesting comments, though, were about how cloud computing is slowly eroding the concept of owning content that we pay for, which in turn leaves us with less freedom than we used to have.

"Nowadays in the digital world you can hardly own anything anymore," he said. "It's all these subscriptions… and you've already agreed that every right in the world belongs to them and you've got no rights. And if you've put it on the cloud, you don't own it. You've signed away all the rights to it. If it disappears, if they decide deliberately that they don't like you and they cut that off, you've lost all the photographs of your life… When we grew up ownership was what made America different than Russia."

The full video is posted below.

http://www.youtube.com/watch?v=xOWDwKLJAfo&feature=player_embedded

[Jun 14, 2013] U.S. Agencies Said to Swap Data With Thousands of Firms

Corporatism is on the march...
Bloomberg

Microsoft Bugs

Microsoft Corp. (MSFT), the world's largest software company, provides intelligence agencies with information about bugs in its popular software before it publicly releases a fix, according to two people familiar with the process. That information can be used to protect government computers and to access the computers of terrorists or military foes.

Redmond, Washington-based Microsoft (MSFT) and other software or Internet security companies have been aware that this type of early alert allowed the U.S. to exploit vulnerabilities in software sold to foreign governments, according to two U.S. officials. Microsoft doesn't ask and can't be told how the government uses such tip-offs, said the officials, who asked not to be identified because the matter is confidential.

Frank Shaw, a spokesman for Microsoft, said those releases occur in cooperation with multiple agencies and are designed to give government "an early start" on risk assessment and mitigation.

In an e-mailed statement, Shaw said there are "several programs" through which such information is passed to the government, and named two which are public, run by Microsoft and for defensive purposes.

Willing Cooperation

Some U.S. telecommunications companies willingly provide intelligence agencies with access to facilities and data offshore that would require a judge's order if it were done in the U.S., one of the four people said.

In these cases, no oversight is necessary under the Foreign Intelligence Surveillance Act, and companies are providing the information voluntarily.

The extensive cooperation between commercial companies and intelligence agencies is legal and reaches deeply into many aspects of everyday life, though little of it is scrutinized by more than a small number of lawyers, company leaders and spies. Company executives are motivated by a desire to help the national defense as well as to help their own companies, said the people, who are familiar with the agreements.

Most of the arrangements are so sensitive that only a handful of people in a company know of them, and they are sometimes brokered directly between chief executive officers and the heads of the U.S.'s major spy agencies, the people familiar with those programs said.

... ... ...

Committing Officer

If necessary, a company executive, known as a "committing officer," is given documents that guarantee immunity from civil actions resulting from the transfer of data. The companies are provided with regular updates, which may include the broad parameters of how that information is used.

Intel Corp. (INTC)'s McAfee unit, which makes Internet security software, regularly cooperates with the NSA, FBI and the CIA, for example, and is a valuable partner because of its broad view of malicious Internet traffic, including espionage operations by foreign powers, according to one of the four people, who is familiar with the arrangement.

Such a relationship would start with an approach to McAfee's chief executive, who would then clear specific individuals to work with investigators or provide the requested data, the person said. The public would be surprised at how much help the government seeks, the person said.

McAfee firewalls collect information on hackers who use legitimate servers to do their work, and the company data can be used to pinpoint where attacks begin. The company also has knowledge of the architecture of information networks worldwide, which may be useful to spy agencies who tap into them, the person said.

McAfee's Data

McAfee (MFE)'s data and analysis doesn't include information on individuals, said Michael Fey, the company's worldwide chief technology officer.

"We do not share any type of personal information with our government agency partners," Fey said in an e-mailed statement. "McAfee's function is to provide security technology, education, and threat intelligence to governments. This threat intelligence includes trending data on emerging new threats, cyber-attack patterns and vector activity, as well as analysis on the integrity of software, system vulnerabilities, and hacker group activity."

In exchange, leaders of companies are showered with attention and information by the agencies to help maintain the relationship, the person said.

In other cases, companies are given quick warnings about threats that could affect their bottom line, including serious Internet attacks and who is behind them.

... ... ...

The information provided by Snowden also exposed a secret NSA program known as Blarney. As the program was described in the Washington Post (WPO), the agency gathers metadata on computers and devices that are used to send e-mails or browse the Internet through principal data routes, known as a backbone.

... ... ...

Metadata

That metadata includes which version of the operating system, browser and Java software are being used on millions of devices around the world, information that U.S. spy agencies could use to infiltrate those computers or phones and spy on their users.
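For a concrete sense of what such version metadata looks like: the OS and browser versions ride along in the User-Agent header of every ordinary web request. The sample string below is made up for illustration and the parsing is deliberately crude.

# Crude illustration of extracting OS/browser versions from a User-Agent
# header. The sample string is invented for illustration.
import re

user_agent = ("Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 "
              "(KHTML, like Gecko) Chrome/27.0.1453.110 Safari/537.36")

os_match = re.search(r"\(([^)]+)\)", user_agent)
browser_match = re.search(r"(Chrome|Firefox|MSIE|Safari)/([\d.]+)", user_agent)
print("OS string :", os_match.group(1) if os_match else "unknown")
print("Browser   :", browser_match.groups() if browser_match else "unknown")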

"It's highly offensive information," said Glenn Chisholm, the former chief information officer for Telstra Corp (TLS)., one of Australia's largest telecommunications companies, contrasting it to defensive information used to protect computers rather than infiltrate them.

According to Snowden's information, Blarney's purpose is "to gain access and exploit foreign intelligence," the Post said.

It's unclear whether U.S. Internet service providers gave information to the NSA as part of Blarney, and if so, whether the transfer of that data required a judge's order.

... ... ...

Einstein 3

U.S telecommunications, Internet, power companies and others provide U.S. intelligence agencies with details of their systems' architecture or equipment schematics so the agencies can analyze potential vulnerabilities.

"It's natural behavior for governments to want to know about the country's critical infrastructure," said Chisholm, chief security officer at Irvine, California-based Cylance Inc.

Even strictly defensive systems can have unintended consequences for privacy. Einstein 3, a costly program originally developed by the NSA, is meant to protect government systems from hackers. The program, which has been made public and is being installed, will closely analyze the billions of e-mails sent to government computers every year to see if they contain spy tools or malicious software.

Einstein 3 could also expose the private content of the e-mails under certain circumstances, according to a person familiar with the system, who asked not to be named because he wasn't authorized to discuss the matter.

AT&T, Verizon

Before they agreed to install the system on their networks, some of the five major Internet companies -- AT&T Inc. (T), Verizon Communications Inc (VZ)., Sprint Nextel Corp. (S), Level 3 Communications Inc (LVLT). and CenturyLink Inc (CTL). -- asked for guarantees that they wouldn't be held liable under U.S. wiretap laws. Those companies that asked received a letter signed by the U.S. attorney general indicating such exposure didn't meet the legal definition of a wiretap and granting them immunity from civil lawsuits, the person said.

[Jun 14, 2013] PRISM 2.0: From 9 to 'thousands' of technology and finance companies

June 14, 2013 | VentureBeat
When Edward Snowden leaked the news about PRISM, we thought it was just 9 U.S. companies that were sharing customers' data with the National Security Agency (NSA). Now it looks like literally thousands of technology, finance, and manufacturing firms are working with the NSA, CIA, FBI, and branches of the U.S. military.

According to a new report by Bloomberg, these thousands of companies are granting sensitive data on equipment, specifications, zero-day bugs, and yes, private customer information to U.S. national security agencies and are in return receiving benefits like early access to classified information.

Those companies reportedly include Microsoft, Intel, McAfee, AT&T, Verizon, Level 3 Communications, and more.

... ... ...

There have long been rumors of a Windows backdoor allowing government agents access to computers running Windows, which Microsoft has always denied. But those backdoors might not even be necessary if companies like Microsoft and McAfee provide government agencies early access to zero-day exploits that allow official hackers to infiltrate other nations' computer systems … and American ones.

And it's becoming increasingly clear that the U.S. calling out China for hacking overseas is the pot calling the kettle black. Or the dirty cop calling the thief a criminal.

Wouldn't it be ironic if the largest shadowy international hacking organization were right here at home?

Snowden NSA Spying On EU Diplomats and Administrators

Slashdot

Re:The US is nobody's friend (Score:5, Interesting)

by Anonymous Coward on Sunday June 30, 2013 @02:09AM (#44145869)

Except for the fact that, *by treaty*, the US, UK, NZ, Canada and Australia are allegedly sharing all intelligence each of their respective agencies gathers. Originally, the intent was to let each nation focus its spending and efforts on just one region in which it already had a substantial interest while still benefitting from a diligent approach in all the other regions. Explicit in this was reciprocity. The American NSA, with all its well known and not so well known programs, harvests vast amounts of data on, say, UK citizens, perfectly within its purview of external intelligence, while MI6 shares all the data it has collected on US citizens.

A lot of people, including myself, have been very vocal about their concerns at the scope of data being collected by the various three-letter agencies of the US government. Many people in power are reassured by statements along the lines of "we never keep any data on our own citizens unless there is a link to a person of interest". What gets overlooked is that the US doesn't *have* to keep data on all its citizens: all they have to do is pass along all the raw data they collect, in keeping with the treaty, and then just ask the partner nations for the digested and analyzed results (and the partners, of course, do the same in return).

It is the top secret version of the "business in the Cloud" problem. The organization WILL collect everything it possibly can, data mine and analyze as they see fit, they will just keep the actual data stores in servers located and operated offshore by "affiliates". Some court rules the organization cannot collect or keep such data? No problem, our affiliate will do that for us offshore and dodge those pesky laws.

The difference here is, the organizations are not in it for profit (though funding is always a motive); they are in it because they genuinely believe it is their duty. Think of it this way: you are a bodyguard, your livelihood depends on the client staying healthy, and you love the client and want them to stay healthy as well. Yet the client has made a bunch of rules to his/her own taste. The upshot is that you can only stand on the left side and can only be within arm's reach during daylight. If you take your job seriously, you would be very motivated to team up with another client's bodyguard so as to cover those gaps in the protection you provide. Your client never said anything about having the _other_ bodyguard in the bedroom at night, after all, just you.

All intelligence agencies have that problem. Being a good weasel makes you good at your job of collecting intel, but the better weasel you are, the easier and more likely it is that you end up no longer truly serving the people you are trying to protect.

If there is one thing history AND/OR current events can teach us, it's that it is a HELL of a lot easier and safer to do one's job well rather than one's duty well.

Re:The US is nobody's friend (Score:5, Insightful)

by Zontar The Mindless (9002)

"Be careful how you choose your enemy, for you will come to resemble him."

[May 11, 2013] Boston Replacing Microsoft Exchange With Google Apps

I doubt that Google is competitive with Optonline mail, which is free for users of Optonline Internet. Thunderbird is a reasonably good and free mail client for SMTP mail.
Slashdot

Google Docs and Drive are down... (Score:4, Insightful)

by mystikkman (1487801) writes: on Friday May 10, 2013 @04:59PM (#43689063)

Meanwhile, in delicious irony, Google Docs and Drive are down and inaccessible.

"Google Drive documents list goes empty for users "
http://news.cnet.com/8301-1023_3-57583952-93/google-drive-documents-list-goes-empty-for-users/?part=rss&subj=news&tag=title&utm_source=dlvr.it&utm_medium=statusnet [cnet.com]

https://twitter.com/search/realtime?q=google%20drive&src=typd [twitter.com]

Re:Only $280k? (Score:4, Insightful)

by dgatwood (11270) writes: on Friday May 10, 2013 @07:29PM (#43690481) Journal

TFA says it will still cost the city ~$800k to make the move... the $280k is reported to be the savings from dropping what they are currently doing.

The only problem is that Google Docs are not guaranteed. You don't have a contract with Google that says, "We agree to provide this forever." With Office, assuming you don't choose to go with their rental model, you have a copy of a piece of software that you can just keep using.

So in five years, when Google realizes that even though Docs is popular, it isn't making them any money, they'll decide to yank it with six months notice. When Boston gets to spend way more than that $280k to move back to an actual purchased office suite on an emergency basis, we'll all say, "So much for big savings."

Software as a service is fine for things that aren't mission-critical. As soon as your workflow starts to depend on it, it's a fool's bargain.

Re:Only $280k? (Score:4, Informative)

by gbjbaanb (229885) on Friday May 10, 2013 @07:50PM (#43690689)

I don't know - Thunderbird and the Lightning calendar plugin serve me just as well as Outlook and its inbuilt calendar do (better, actually, since Outlook decided you didn't need to know what appointments you had coming up tomorrow [microsoft.com], something I found useful for early meetings).

Link the calendar with gmail calendar, and the email with gmail emails... you've got pretty much 100% of the functionality Outlook gives you. (without the flipping Facebook integration Outlook 2013 now shoves at you, or the integration with skydrive). I use it (when I can't be bothered to read my mail using my phone, which seems to be my default view of Gmail nowadays) and it just works.

If you need centralised user accounts, OpenLDAP does that, though it's tricky to make it work with a bunch of Windows clients; it does work [erikberg.com], though not out-of-the-box. This is how it should be; after all, AD is just a fancy LDAP server anyway, but with a special Windows-only protocol that Microsoft had to hand over as part of its agreement with the EU (IIRC). Good to see the Samba team has finally waded through the walls MS must have put up and got Samba 4 working as a full AD server.

Re:Only $280k? (Score:4, Interesting)

by batkiwi (137781) on Saturday May 11, 2013 @12:06AM (#43692431)

When people say AD they don't mean the LDAP part with centralized user accounts. That's been doable for ages.

When Windows admins talk about AD, they are talking about all of the things that you can do with group policy and how those policies apply to different containers in a hierarchical or cross-cutting way, depending on configuration.

With AD and GPO you can control, per user or per OU, things such as access levels, which software gets installed, mail access, and quotas.

So if Jim moves from finance to web development, you drag and drop his user into another OU and add him to 5-10 groups on the AD server. Next time he logs on, his access levels, what software is installed, what mail he has access to, his quotas, etc. all change instantly.

This CAN be hacked together with a bunch of scripts, a custom repository, NIS/OpenLDAP, and some other stuff in Linux, but it's not well documented or well supported, and it's not something you can ask ANY Linux admin to do and expect them to do in the same way.
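For comparison, here is roughly what the "drag and drop into another OU, add to groups" step looks like when scripted against OpenLDAP. This is a sketch only; it assumes the ldap3 library, and all hostnames, DNs, group names and credentials are hypothetical placeholders.

# Move a user to a new OU and adjust group membership on OpenLDAP.
# Assumes the ldap3 library; all names below are hypothetical placeholders.
from ldap3 import Server, Connection, MODIFY_ADD, MODIFY_DELETE

conn = Connection(Server("ldap://ldap.example.com"),
                  user="cn=admin,dc=example,dc=com",
                  password="secret", auto_bind=True)

# Move Jim's entry from the finance OU to the webdev OU.
conn.modify_dn("uid=jim,ou=finance,dc=example,dc=com", "uid=jim",
               new_superior="ou=webdev,dc=example,dc=com")

# Add him to the groups that grant the new role's access.
for group in ("cn=webdev,ou=groups,dc=example,dc=com",
              "cn=gitlab-users,ou=groups,dc=example,dc=com"):
    conn.modify(group, {"memberUid": [(MODIFY_ADD, ["jim"])]})

# Remove him from the finance group.
conn.modify("cn=finance,ou=groups,dc=example,dc=com",
            {"memberUid": [(MODIFY_DELETE, ["jim"])]})
conn.unbind()

The point of the comment above stands: the LDAP part is easy, but software deployment, mail access and quotas each need their own scripts on the Linux side, whereas GPO bundles them together.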

I'm not surprised (Score:4, Informative)

by prelelat (201821) on Friday May 10, 2013 @04:46PM (#43688947)

I do think that Office 365 is a very nice response to cloud office suites, but unless there is still a problem dating from that 2011 letter about the LA contract, I don't know how they will break into that market. Google is a name that most IT people think of when they think of cloud processing suites. We started using 365 about 6-8 months ago and it works fantastically, in my opinion. I also know that other people have gone with Google, though, because it's a big name and it does what it says it does. As far as I know there haven't been any complaints about Google.

Does anyone know what happened between google and the city of L.A. after this was released? I hadn't heard about it. I would be interested to know what the security issues they had were and if they were able to be resolved. This letter is considerably old in terms of technology advancements.

Re:A better idea... (Score:5, Insightful)

by foniksonik (573572) on Friday May 10, 2013 @06:12PM (#43689783) Homepage Journal

Uh, booking meetings in a calendar is ~50% of the average corporate manager's daily activity. The other 50% is attending said meetings.

Re:First Boston, next the others (Score:2)

by smash (1351) <[email protected]@com> on Saturday May 11, 2013 @12:45PM (#43695793) Homepage Journal

In my experience, having seen a business unit within our group attempt to use Google Apps, fail miserably, and spend the money again on re-buying their own infrastructure, I suspect they will be back within 6-12 months.

I know this isn't what the Slashdot crowd want to hear, and I'm sorry... but Google Apps is crap. The functionality just is not there. If you want to get off Microsoft there are plenty of options, but for a business that does anything more than the most basic email or spreadsheets, Google Apps simply isn't an alternative.

Google Apps Suffering Partial Outage

Slashdot

First time accepted submitter Landy DeField writes

"Tried accessing your Gmail today? You may be faced with 'Temporary Error (500)' error message. Tried to get more detailed information by clicking on the 'Show Detailed Technical Info' link which loads a single line... 'Numeric Code: 5.' Clicked on the App status dashboard link. All were green except for the Admin Control Panel / API. Took a glance 2 minutes ago and now, Google mail and Google Drive are orange and Admin Control Panel / API is red. Look forward to the actual ...'Detailed Technical Info' on what is going on."

The apps dashboard confirms that there is a partial outage of many Google Apps. The Next Web ran a quick article about this, and in the process discovered there was an outage on the same date last year.

ArcadeMan

Can't you guys read?

They sent an email explaining the cause of the... oh wait.

sandytaru

Hit the paid accounts

This took down one of our clients who pay for Google apps. So it's not just the freebie users who got affected on this, hence Google's rapid response.

RatherBeAnonymous

Re: Oh well

We switched to Google Apps a few years ago. In that time I've seen maybe a dozen full or partial outages. Some were not Google's fault. Internet routing or DNS problems were responsible some of the time. One instance was when a drunk driver hit a telephone pole about a quarter of a mile away and severed our fiber connection. When it is down, I still end up spending half the day dealing with the outage.

But in a decade of running our email in house, I had just one outage. We did have a few instances where our Internet connection was down, so outside email did not flow, but at least internal communications worked.

[Apr 18, 2013] Businesses Moving From Amazon's Cloud To Build Their Own

"There are rumblings around this week's OpenStack conference that companies are moving away from AWS, ready to ditch their training wheels and build their own private clouds."
Slashdot

itwbennett

"There are rumblings around this week's OpenStack conference that companies are moving away from AWS, ready to ditch their training wheels and build their own private clouds. Inbound marketing services company HubSpot is the latest to announce that it's shifting workloads off AWS, citing problems with 'zombie servers,' unused servers that the company was paying for. Others that are leaving point to 'business issues,' like tightening the reins on developers who turned to the cloud without permission."

serviscope_minor

Not surprising and won't matter.

It doesn't surprise me and I don't think it will matter much.

Amazon is not particularly cheap. If you host your own, even after accounting for power, cooling and hardware, the payback time is about 4 to 6 months.

If you have a lot of load then it is going to be cheaper to host it yourself, so it's worth doing for big companies.

With Amazon of course you can start as a one man band and still have potential to grow without it getting painful from an administrative point of view.
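
As a rough back-of-the-envelope illustration of the payback arithmetic in the comment above, here is a minimal Python sketch; every figure in it is a hypothetical placeholder, not a quote from any vendor's price list:

    # Rough payback-time sketch for "rent from Amazon" vs. "buy your own box".
    # All numbers are hypothetical placeholders; substitute your own quotes.
    cloud_monthly = 600.0       # assumed monthly bill for a comparable cloud setup, USD
    server_capex = 2500.0       # assumed one-time cost of a self-hosted server, USD
    self_host_monthly = 120.0   # assumed power, cooling and rack space per month, USD

    # Months until the cumulative cloud bill exceeds the purchase price
    # plus the running costs of hosting it yourself.
    payback_months = server_capex / (cloud_monthly - self_host_monthly)
    print(f"Payback in about {payback_months:.1f} months")  # ~5.2 months with these numbers

With these made-up numbers the break-even point indeed lands in the 4-6 month range the commenter mentions; with heavy discounts or low utilization the answer can easily flip the other way.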

CastrTroy

Re: Not surprising and won't matter.

The only case where it really made sense was when you had extremely variable load. It's nice for scientists that need to rent 100 computers for use with one project, but if you're going to be using the same resources on a day-to-day basis, then it makes much more financial sense to just own your own hardware, and rent space in an existing data center. It also makes sense if you use less than a whole server in resources, but VPS was already filling that need quite well before Amazon came along.

thereitis:

Re: Not surprising and won't matter.

If you're just using Amazon for compute power then perhaps, but then you've got no geographic redundancy with that single data center. Whether it's worth rolling your own solution really depends on your needs (lead time, uptime requirements, budget, IT skill/availability, etc).

neither does amazon unless you pay them a lot more $$$

Re: Not surprising and won't matter.

Depending on your needs, setting up geographical redundancy with Amazon can be extremely cheap -- if you just want a cold or warm site to fail over to, you don't need to keep your entire infrastructure running at the secondary site, just replicate the data, and then spin up the servers over there when you need to fail over.

That's what my company does - we have about a dozen servers to run our website, but the secondary site has only a couple micro instances to receive data. When we need to failover, we ju

hawguy

Re:Not surprising and won't matter.

Depending on your needs, setting up geographical redundancy with Amazon can be extremely cheap

And history has shown that you pay for what you get.

Right, if you cheap out and pay for a single availability zone in a single region, when that AZ or Region goes down, your site is down.

If you pay for multi-AZ and Multi-region deployments you get much better availability.

Just like Amazon says.

Over the past 2 years, Amazon has been more reliable than the coloc we moved away from, mostly due to the triple (!) disk failure that took out our SAN RAID array: one disk failed, and while we were waiting for the replacement, another disk went down; after we replaced those two, a third disk failed while the RAID-6 array was rebuilding.

With AWS, an entire region can go offline and we can bring up the backup site on the other side of the country (or, starting next month, we could bring up our Ireland region).

All this for less than half the cost we were paying for the coloc + equipment maintenance.
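
The cold-standby pattern described in this thread can be scripted. Below is a minimal sketch, assuming the boto3 library, a hypothetical secondary region and made-up instance IDs; a real failover runbook would also repoint DNS and promote the replicated databases:

    # Minimal cold-standby failover sketch: start pre-built, stopped instances
    # in a secondary AWS region. Region and instance IDs are hypothetical.
    import boto3

    STANDBY_REGION = "us-west-2"                      # assumed secondary region
    STANDBY_INSTANCES = ["i-0aaa1111bbb2222cc",       # hypothetical web tier
                         "i-0ddd3333eee4444ff"]       # hypothetical app tier

    def fail_over_to_standby():
        ec2 = boto3.client("ec2", region_name=STANDBY_REGION)
        # Boot the standby servers that normally sit stopped (and therefore cheap).
        ec2.start_instances(InstanceIds=STANDBY_INSTANCES)
        # Wait until they are running before flipping DNS or load balancers over.
        ec2.get_waiter("instance_running").wait(InstanceIds=STANDBY_INSTANCES)
        print("Standby site is up; now repoint DNS to it.")

    if __name__ == "__main__":
        fail_over_to_standby()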

Dancindan84

Re: Nor surprising and won't matter.

The thing is, when a company reaches a certain size it likely has enough computer infrastructure to have an IT department anyway, even if it isn't an IT company. With your example of Ford, they have offices for managers, sales, etc. All of those people likely have desktop computers, so they likely have dedicated desktop support. Additionally, they probably have some kind of centralized authentication like Active Directory, which means they'll need a server and some sort of sysadmin/IT infrastructure already. They likely wouldn't be adding an IT division in order to host their own email; they'd be adding an email server and its management to the load of the existing IT department, which is obviously not as big an upfront overhead cost, making it more attractive.

lxs

The obvious next step...

...will be to give every user their own personal cloud housed in a box under their desk. At which point the cycle will begin again.

benf_2004

Re:The obvious next step... (Score:5, Funny)

...will be to give every user their own personal cloud housed in a box under their desk. At which point the cycle will begin again.

That sounds like a great idea! We can call it a Personal Cloud, or PC for short.

cryfreedomlove

Tightening reins on developers?

From this article: "like tightening the reins on developers who turned to the cloud without permission"

Let me state this in other words: "Insecure IT guys are afraid for their own jobs if they can't lord it over developers". Seriously, developers working in an API driven cloud just don't need a classic IT organization around to manage servers for them. Cloud is a disruptive threat to classic IT orgs.

EmperorOfCanada

Random pricing

One thing that has kept me away from Amazon's cloud is the unknowns with its pricing. I have visions of a DDOS either clearing out my bank account or using up my monthly budget in the first 2 days of the month. Plus if I mis-click on something I might get an awesome setup that cleans me out. I am not a large corporation so one good bill and I am out of business. But even larger companies don't like surprises. So regardless of the potential savings I am willing to spend more if the price is fixed in stone instead of chancing being wiped out. I like sleeping through the night.

Plus, as a human, I really like being able to reach out and touch my machines, even if I have to fly 5 hours to do it. So the flexibility of the cloud sounds really cool, even where the pricing is not so flexible. It would be nice to spool up an instance of a machine that isn't going to do much most of the time and doesn't actually use up a whole machine, and then, when one machine starts to get pounded, give it some more juice. Plus upgrading your hardware would be much more of a dream: you move your most demanding servers to your hottest hardware and slide the idle servers over to the older crap. Plus restores and redundancy are a dream.

Then you still have the option to fully dedicate a machine in "realspace" to a demanding process. While a VM does not have much overhead, it does have some, so taking a server that is being pushed to the maximum and sliding it onto bare metal will allow your hardware to be used with maximum efficiency.

Then, since there is no real cost overhead to having more near-idle machines spooled up, your developers can play interesting games. Maybe they want to see what your software will do with 20 MongoDB servers running instead of the current 3; or 200.

This all said, I am a fan of Linode, where I can predict my pricing very well.
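
For what it is worth, the "surprise bill" fear above can at least be blunted with a billing alarm. A sketch, assuming boto3, that billing metrics are enabled for the account, and a purely hypothetical SNS topic for notifications; it caps nothing by itself, it only warns you early:

    # Sketch: raise an alarm when AWS estimated charges exceed a monthly budget.
    # Assumes billing alerts are enabled; the SNS topic ARN is hypothetical.
    import boto3

    BUDGET_USD = 200.0
    SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:billing-alerts"  # hypothetical

    # AWS publishes billing metrics only in the us-east-1 region.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_alarm(
        AlarmName="monthly-bill-over-budget",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                   # evaluate every six hours
        EvaluationPeriods=1,
        Threshold=BUDGET_USD,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[SNS_TOPIC_ARN],
    )
    print(f"Will notify {SNS_TOPIC_ARN} once estimated charges pass ${BUDGET_USD:.0f}")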

stefancaunter

Amazon has convinced many people they are cheap

I'd love to see more people taking on scale themselves, but unless the perception that Amazon is a good deal changes, this won't change much in the way of their dominance. Unless you've actually been taken to the cleaners by them on a project, and can convince your boss that owning/renting gear is a better plan, they will still be a first choice vendor.

Decision makers read magazine articles (when they aren't playing games on their phone) that tell them Amazon saves them money. Everyone sits around in a meeting and nods their head.

[Jan 08, 2013] Ad Blocking Raises Alarm Among Firms Like Google

"But he has often complained that Google's content, which includes the ever expanding YouTube video library, occupies too much of his network's bandwidth, or carrying capacity."
January 7, 2013 | NYTimes.com

Xavier Niel, the French technology entrepreneur, has made a career of disrupting the status quo.

Now, he has dared to take on Google and other online advertisers in a battle that puts the Web companies under pressure to use the wealth generated by the ads to help pay for the network pipelines that deliver the content.

Mr. Niel's telecommunications company, Free, which has an estimated 5.2 million Internet-access users in France, began last week to enable its customers to block Web advertising. The company is updating users' software with an ad-blocking feature as the default setting.

That move has raised alarm among companies that, like Google, have based their entire business models on providing free content to consumers by festooning Web pages with paid advertisements. Although Google so far has kept largely silent about Free's challenge, the reaction from the small Web operators who live and die by online ads has been vociferous.

No Internet access provider "has the right to decide in place of its citizens what they access or not on the Internet," Spiil, an association of French online news publishers, said in a statement Friday.

The French government has stepped into the fray. On Monday Fleur Pellerin, the French minister for the digital economy, plans to convene a meeting of the feuding parties to seek a resolution.

Free's shock to advertisers was widely seen as an attack on Google, and is part of the larger, global battle over the question of who should pay to deliver information on the Web - content providers or Internet service providers. An attempt to rewrite the rules failed at the December talks of the International Telecommunication Union in Dubai, after the United States and other nations objected to a proposal that, among other measures, would have required content providers to pay.

Mr. Niel declined to comment on Sunday, through a spokeswoman, Isabelle Audap.

But he has often complained that Google's content, which includes the ever expanding YouTube video library, occupies too much of his network's bandwidth, or carrying capacity. "The pipelines between Google and us are full at certain hours, and no one wants to take responsibility for adding capacity," he said during an interview last year with the newsmagazine Nouvel Observateur. "It's a classic problem that happens everywhere, but especially with Google."

Analysts said that French regulators would probably not oppose an agreement between Free and Google aimed at smoothing traffic flows and improving the quality of the service, as long as competitors were not disadvantaged. But they said regulators would probably not allow an Internet access provider to unilaterally block content.

When it comes to blocking ads, though, disgruntled consumers do not have to rely on their Internet service providers. Consumers already have the option of downloading software like Adblock Plus to do the job for them.

Free is the second-largest Internet access provider in France, behind Orange, which is operated by France Telecom and has 9.8 million Internet customers. Because Free seeks to be a low-cost competitor, the company may feel itself particularly vulnerable to the expense of providing capacity to meet Internet users' ever-growing demand for streaming and downloading videos, music and the like.

Ms. Pellerin, the digital economy minister, expressed sympathy for Free's position in an interview with Le Figaro, published Saturday. "There are today real questions about the sharing of value between the content providers - notably in video, which uses a lot of bandwidth - and the operators," she said.

"In France, and in Europe," Ms. Pellerin added, "we have to find more consensual ways of integrating the giants of the Internet into national ecosystems." And in a subsequent Twitter message, she said she was "no fan of intrusive advertising, but favorable to a solution of no opt-out by default."

[Jan 08, 2013] France Rejects Plan by Internet Provider to Block Online Ads by Charles Platiau

Typical tragedy of the commons: "The company, which has long balked at carrying the huge volume of traffic from sites owned by Google without compensation, had moved last week to block online ads when it introduced a new version of its Internet access software."
NYTimes.com

In a potential test case for Europe, the French government on Monday ordered a big Internet service provider to stop blocking online advertisements, saying the company had no right to edit the contents of the Web for users.

Fleur Pellerin, left, France's minister for the digital economy, with Maxime Lombardini, chief executive of Iliad, the parent company of the French Internet service provider Free.

The dispute has turned into a gauge of how France, and perhaps the rest of Europe, will mediate a struggle between telecommunications providers against Internet companies like Google, which generate billions of dollars in revenue from traffic that travels freely on their networks.

European telecommunications companies want a share of that money, saying they need it to finance investments in faster broadband networks - and, as the latest incident shows, they are willing to flex their muscles to get it.

Until now, European regulators have taken a laissez-faire approach, in contrast to the U.S. Federal Communications Commission, which has imposed guidelines barring operators of fixed-line broadband networks from blocking access to sites providing lawful content.

On Monday, Fleur Pellerin, the French minister for the digital economy, said she had persuaded the Internet service provider, Free, to restore full access. The company, which has long balked at carrying the huge volume of traffic from sites owned by Google without compensation, had moved last week to block online ads when it introduced a new version of its Internet access software.

"An Internet service provider cannot unilaterally implement such blocking," Ms. Pellerin said at a news conference Monday, after meetings with online publishing and advertising groups, which had complained about a possible loss of revenue.

While she acknowledged that it could be annoying "when five ads pop up on a site," she added that advertising should not be treated differently from other kinds of content. "This kind of blocking is inconsistent with a free and open Internet, to which I am very attached."

While rejecting the initiative by Free, Ms. Pellerin said it was legitimate for the company to raise the question of who should pay for expensive network upgrades to handle growing volumes of Internet traffic.

French Internet analysts said advertisements appearing on Google-owned sites or distributed by Google appeared to have been the only ones affected - fueling speculation that the move was a tactic to try to get Google to share some of its advertising revenue with Internet service providers. Google's YouTube video-sharing site is the biggest bandwidth user among Internet companies.

Google was not represented at the meetings Monday with Ms. Pellerin. In an interesting twist, its case was effectively argued by other Web publishers, including French newspapers, even though these sites, in a related dispute, are seeking their own revenue-sharing arrangement with Google. Separately, French tax collectors are also looking into the company's fiscal practices, under which it largely avoids paying corporate taxes in France by routing its ad revenue through Ireland, which has lower rates. One proposal that has been discussed would be to use receipts from a tax on Google to support local Web sites.

In yet another dispute involving Free and Google, the French telecommunications regulator is investigating complaints that the Internet provider has been discriminating against YouTube. In that case, a French consumer organization, UFC-Que Choisir, said it suspected that Free was limiting customer access to YouTube because of the high amount of bandwidth that the site consumed.

Ms. Pellerin said these issues would be examined separately. Still, the timing of Free's move raised questions, given that it came only days before a scheduled meeting among Ms. Pellerin, Internet companies and telecommunications operators to discuss the financing and regulation of new, higher-speed networks.

"Should users be held hostage to these commercial negotiations? That is not obvious to me," said Jérémie Zimmermann, a spokesman for La Quadrature du Net, a group that campaigns against restrictions on the Internet.

[Jan 08, 2013] Amazon's Unknown Unknowns

"A.W.S. tries to figure out how its complex global network of perhaps a half-million servers could break by having one team look at the work of another, by bringing in top engineers from elsewhere in Amazon and, occasionally, by hiring outside experts in performance and security." They can investigate al long as they wish but complexity will bite them again and again... And the duration of outage was pretty much above duration of a typical corporate outage: Netflix was unable to supply streaming movies. Service was not fully restored for almost 12 hours.
NYTimes.com

Operating the world's largest public cloud still leaves you with the problem of "unknown unknowns."

The phrase is associated with the then-defense secretary Donald Rumsfeld, who in 2002 described the problems of certainty with regards to war. The reality is no less acute for Amazon as it searches for ways to avoid another widespread outage of its public cloud business, Amazon Web Services.

"We ask ourselves the question all the time," said Adam Selipsky, vice president at A.W.S. "There is a list of things you don't know about."

On Dec. 29, Amazon Web Services published a detailed explanation of what went wrong Christmas Eve, when Netflix and many other customers (Amazon won't say how many) had service disruptions. The company also described what it was doing to ensure the problem would not happen again.

The short version: An A.W.S. developer inadvertently took out part of the software that makes sure data loads are properly distributed across the computers in a computer center in Virginia. New activity on the site was slowed or failed altogether. Netflix was unable to supply streaming movies. Service was not fully restored for almost 12 hours.

What Amazon didn't go into is how much it is trying to figure out what else might go wrong in an increasingly complex system. Part of that effort consists of publishing explanations (this one was notably full of information), and part consists, as in warfare, of lots more scenario planning.

A.W.S. tries to figure out how its complex global network of perhaps a half-million servers could break by having one team look at the work of another, by bringing in top engineers from elsewhere in Amazon and, occasionally, by hiring outside experts in performance and security.

"Running I.T. infrastructure in a highly reliable and cost-effective fashion is hard," Mr. Selipsky said. "We're able to put more resources on that than the vast majority of our customers can. That said, there is no substitute for experience."

There is also a deep technical lesson in the recent outage. Mostly engineers look at code for flaws, but the operational breakdown at A.W.S. on Dec. 24 seems to have been a result of human organizational error, not the software itself.

In the future, developers will need specific permission to change the data load balancing equipment. What isn't clear is whether A.W.S. can examine all of its own management practices to see if there are other such decision-making flaws.

Mr. Selipsky, understandably, could not say whether everything else was solid, though he did note that Amazon had eight years of experience running and managing big systems for others.

To be sure, such breakdowns happen all the time with old-style corporate servers. While they don't have the dramatic effect of an A.W.S. failure, they may collectively represent much more downtime.

In a study commissioned by Amazon, IDC said downtime on A.W.S. was 72 percent less per year than on conventional corporate servers. While the financing of the study makes its conclusions somewhat questionable, its results are in line with similar studies about online e-mail systems compared with in-house products.

[Dec 09, 2012] Cisco: Puppy cams threaten Internet By Jay Gillette

Expert says growth of ambient video could strain global nets

January 24, 2011 | Network World
HONOLULU -- Network demand will explode, fueled by unexpected growth in ambient video, like puppy cams and surveillance video, according to reports from the 33rd Pacific Telecommunications Council (PTC) conference held last week in Hawaii.

Several thousand technology professionals from across the Pacific hemisphere attended the annual conference. Delegates from the United States were the largest bloc, with Hong Kong SAR and China next, followed by Japan, India, Singapore and many Asia-Pacific regions and countries.

Telegeography Research presented estimates that global broadband Internet subscribers will climb to more than 700 million by 2013, with more than 300 million from Asia, compared to about 100 million in North America, and nearly 200 million in Europe.

And Robert Pepper, Cisco vice president for global technology policy, presented findings from the company's Visual Networking Index, which showed that global IP traffic is expected to increase more than fourfold (4.3 times) from 2009 to 2014.

In fact, global IP traffic is expected to reach 63.9 exabytes per month in 2014, which is equivalent to 766.8 exabytes per year. The most surprising trend is that video traffic surpassed peer-to-peer volumes in 2010 for the first time.

An unexpected driver in this overall growth of Internet traffic is the surge in ambient video. This is so-called "puppy cam" traffic -- fixed video sources featuring pets, so-called "nanny cam" child care and health monitoring video streams, and especially security camera applications.

"This a much bigger deal than anyone thought,'' said Pepper. He added that the popular Shiba Inu Puppy Cam site was said to have more Internet viewing hours than all of ESPN online video. In fact, of the top online video sites in Europe last year, "three of the top 20 are ambient video, and these didn't exist a year ago."

Other key findings in the Cisco report are:

All these changes, with larger volumes and changing user needs, require network providers and managers to prepare for new and unexpected demands on their infrastructure and operations.

Pepper said wireless networks are going to need more spectrum, "and fiber to every antenna, fiber to every village -- a T-1 connection to the antenna is not going to cut it."

Ethical issues emerge

Keynote speaker David Suzuki, a Canadian ecologist, noted that humans are now the most numerous species of any mammal in the world. "There are more humans than rats, or rabbits," he said. He warned that exponential growth of human demands on the planet has led to climate change, and could lead to potential ecological disaster.

Suzuki challenged communications industry professionals to apply their insights and innovations to more informed policies and practices. The conference itself featured a number of presentations focused on "green IT." Another focus was economic development for underserved areas of the Pacific, especially island nations and rural populations, through broadband projects.

Communications ethicist Thomas Cooper presented research showing that ethical considerations are far more prevalent in industry discussions now than they were in the 1990s. Cooper reported the top five communication ethics areas observed, in order: privacy; information security; freedom of information and censorship; digital divide; intellectual property and patent protection.

Cooper emphasized that communications ethics involves not only "red light" issues such as regulating negative impacts of invasion of privacy and cybercrime, but also "green light" issues such as education, telemedicine and e-democracy. Cooper forecast that the communications industry will continue to face these intertwined challenges, and urged PTC delegates to be ready to participate in both.

Intelligent-community contenders selected

New York-based think tank Intelligent Community Forum (ICF) announced at the conference its annual "Top Seven" semifinalists for the Intelligent Community of the Year award. The winner will be revealed at the ICF's June 2011 conference at NYU-Poly University in Brooklyn.

The ICF selects the Intelligent Community list based on how advanced communities are in deploying broadband; building a knowledge-based workforce; combining government and private-sector "digital inclusion"; fostering innovation and marketing economic development. This year's selection also includes emphasis on community health, especially in using information and communication technologies for its support, and in building health-related business clusters.

As announced by ICF co-founder Louis Zacharilla, the 2011 intelligent city finalists are:

Pacific Telecommunications Council's 2012 meeting dates have also been announced. The 34th annual meeting will be in Honolulu at the Hilton Hawai'ian Village, Jan. 15-18, 2012.

Gillette is professor of information and communication sciences at Ball State University, director of its Human Factors Institute, and a senior research fellow at the Digital Policy Institute. He has written extensively on information technologies and policy, and worked in academic, industry and public policy organizations. He can be reached at [email protected].

[Jun 12, 2012] Adopt the Cloud, Kill Your IT Career

"Regarding the latter point: a lot of managers forget that when disaster strikes in their own data center, they are in control, and they can allocate resources and extra funds towards getting the most important servers back up first. I've been involved in virtualization and outsourcing on both sides buyer and seller for a bit more than 20 years. This aspect is always forgotten by the PHBs. If the email server explodes, I have $$$$$ high five figures per year of motivation to fix it ASAP. If an outsourced email provider explodes they have $49.95/month or whatever of motivation to fix it."
June 11 | Slashdot

Anonymous Coward: oh please

no one even knows what the cloud is. It's everything, it's nothing, it's cheaper, it's not.

run your IT shop like everything else, with common sense. Can external hosting work sometimes? sure, if so, do it and stop worrying about it.

Flyerman: Re:oh please

Ohhhh, and when you can't use external hosting, put it on your "private cloud."

I.T. curse:

And rightly so. Some IT managers like outsourcing because they think they're outsourcing accountability as well. Wrong: when you make the decision to outsource or move stuff to the cloud, it is your responsibility to do some due diligence on the vendor, make sure there's a sensible SLA, and have contingency plans just like you had when the servers were still under your control. Regarding the latter point: a lot of managers forget that when disaster strikes in their own data center, they are in control, and they can allocate resources and extra funds towards getting the most important servers back up first. But when disaster strikes your cloud provider, what priority will you get, when there's thousands of angry clients (including a number of fortune 500 companies) all shouting to get their service restored first?

That doesn't mean that outsourcing and the cloud are bad per se. It means that when you make that decision, you should apply the more or less similar skills and considerations as you did when you still ran your own data center. You as an IT manager are still end responsible for delivering services to the business, and you cannot assume the cloud is a black box that always works. Plan accordingly.

vlm

Re:I.T. curse

Regarding the latter point: a lot of managers forget that when disaster strikes in their own data center, they are in control, and they can allocate resources and extra funds towards getting the most important servers back up first.

I've been involved in virtualization and outsourcing on both sides buyer and seller for a bit more than 20 years. This aspect is always forgotten by the PHBs.

If the email server explodes, I have $$$$$ high five figures per year of motivation to fix it ASAP. If an outsourced email provider explodes they have $49.95/month or whatever of motivation to fix it.

I have seen some very sad sights over the decades. If the cost of repair/support exceeds the cost of sales for a similar commission, too bad so sad. Oh your whole multi-million dollar business relies on working, email, oh well. It doesn't matter if we're talking about mainframe service bureau processing, or outsourced email/DNS/webhosting from the 90s/00s, or an online cloud provider, your uptime is not worth a penny more than you're paying for the service.

You might, at best, get your provider to B.S. you a sense of urgency... but watch what they do, not what they say.

betterunixthanunix

Re:So much for definitions...

A cloud model is heavily relying on network resources for your computing needs, no?

I guess that's the meaning today. Yesterday, it meant outsourcing your computation, which is the more typical context, but even then it refers to anything that involves outsourcing computation (storage included).

Besides, in the "traditional" enterprise network server-client model, we already rely heavily on networked printing and networked file systems.

Which is one of the reasons "cloud computing" is a pointless and meaningless term. It is nothing more than marketing, designed to convey a sense that there is something new under the sun when it comes to networked computers, when in fact people have been outsourcing computation and relying on networks since the 1960s.

Sycraft-fu

Ya we had that problem

Campus decided to outsource our e-mail to Microsoft BPOS, rather than just do Exchange (or something else) on campus. Problem was, that doesn't mean that suddenly campus IT just gets to say "e-mail isn't our problem, call MS!" No, rather IT still has to do front-line support, but now when there's a problem you have to call someone else, get the runaround, finger pointing, slow response, and so on.

Net result? We now have an Exchange server on campus and do e-mail that way.

It isn't like outsourcing something magically makes all problems go away, particularly user problems. So you still end up needing support for that, but then you get to deal with another layer of support, one that doesn't really give a shit if your stuff works or not.

Basically people need to STFU about the "cloud" and realize that it is what it always has been: outsourcing and evaluate if it makes sense on those merits. Basically outsourcing is a reasonable idea if you are too small to do something yourself, or if someone does a much better job because they are specialized at it. If neither of those are true, probably best not to outsource.

bobbied

But it is Easier!

Moving to the cloud is easier, which is why we keep considering it. It is easier to offload the work onto some cloud operator who is supposed to do it better and possibly cheaper, or at least it LOOKS easier. No more dealing with backup tapes, no more dealing with software licenses and the like: just pay your vendor of choice, copy all your data onto the cloud, and start tossing hardware and the people that managed it out the door.

Problem here is that doing this job right, on a budget, and on time is FAR from easy. Plus, it is going to be very difficult to verify that your vendor is actually doing the job correctly, considering that the hardware isn't accessible, being located in some server room some distance away. Who knows if they actually do backups of anything, much less off-site storage of recovery media. My guess is that as competition in this area heats up, prices will fall, with quality falling too. Costs will be trimmed by eliminating skilled labor, and without skilled labor the whole house of cards will fall.

Seems to me that the cloud may be a short term gain for most, but in the long run, dumping your infrastructure and the people that go with it is going to bite you eventually, unless the business is very small.

Finally, the biggest messes I've had to clean up had very little to do with a hardware failure or some loss of data. The worst messes I've seen were caused by some administrative error: replacing the wrong disk in the RAID, causing total data loss, or not thinking through a command before hitting enter. I don't see how being on a cloud will fix this kind of thing.

Adopt the cloud, kill your IT career, by Paul Venezia

Trust, but verify -- and keep your cards close to the vest. And calling opponents "server-huggers" does not help to refute their arguments.
June 11, 2012 | InfoWorld

It's irresponsible to think that just because you push a problem outside your office, it ceases to be your problem

It's safe to say that you receive many solicitations from vendors of every stripe hawking their new cloud services: software, storage, apps, hosted this, managed that. "Simplify your life! Reduce your burden! It's a floor wax and a dessert topping!" Some of these services deliver as promised, within fairly strict boundaries, though some are not what they seem. Even more have a look and feel that can make you swoon, but once you start to peer under the covers, the specter of integrating the service with your infrastructure stares back at you and steals your soul.

It's not just the possibility of empty promises and integration issues that dog the cloud decision; it's also the upgrade to the new devil, the one you don't know. You might be eager to relinquish responsibility of a cranky infrastructure component and push the headaches to a cloud vendor, but in reality you aren't doing that at all. Instead, you're adding another avenue for the blame to follow. The end result of a catastrophic failure or data loss event is exactly the same whether you own the service or contract it out. The difference is you can't do anything about it directly. You jump out of the plane and hope that whoever packed your parachute knew what he or she was doing.

A common counter to this perspective is that a company can't expect to be able to hire subject experts at every level of IT. In this view, working with a cloud or hosted service vendor makes sense because there's a high concentration of expert skill at a company whose sole focus is delivering that service. There's some truth to that, for sure, but it's not the same as infallibility. Services can fail for reasons well outside the technological purview, no matter how carefully constructed it may be. Of course, they can and do fail without outside assistance as well. The Titanic was unsinkable, if you recall.

Let's look at LinkedIn, eHarmony, and Last.fm. Although they may not be considered cloud providers in the strictest sense, they're veteran Internet companies that employ many highly skilled people to build and maintain their significant service offerings. They are no strangers to this game. Yet in the past week, all three had major security issues wherein thousands or millions of user account details were compromised. LinkedIn reportedly lost 6.5 million account details, including passwords, to the bad guys.

Just imagine if LinkedIn were a cloud provider responsible for handling your CRM or ERP application. You now have to frantically ensure that all your users change passwords or have them changed and relayed to the right party. You have to deal with what could conceivably be compromised data, rendering the application less than useless. What's left of your hair is on fire -- but you can't do anything about it directly. You can only call and scream at some poor account rep who has no technological chops whatsoever, yet is thrown to the wolves. Don't think that this can't or won't happen. It's guaranteed to happen -- again and again.

Now imagine where you'll be when you've successfully outsourced the majority of your internal IT to cloud providers. All your email, apps, storage, and security rest easy in the cloud. You have fancy Web consoles to show you what's going where and what resources you're consuming. You no longer have to worry about the pesky server hardware in the back room or all those wires. If a problem arises, you fire off an email or open a support ticket, sit back, and wait.

Once that becomes the norm, the powers that be might realize they don't need someone to do any of those tasks. I mean, if they're paying good money to these vendors for this hosted cloud stuff, why do they need an IT department? They'd be mistaken, of course, but frankly, they'd also have a point. After all, anyone can call a vendor and complain.

Don't get me wrong. I believe there are many areas in which the cloud brings significant benefits to an organization of any size. Data warehousing, archiving, and backup using cloud storage providers that offer block-level storage, tightly integrated security, and local storage caching and abstraction devices come to mind.

But on the opposite end of that spectrum are application and primary storage services that function at higher levels and can be compromised with a single leaked password. Aside from the smallest of companies, these services collected into any form cannot serve as a full-on replacement for local IT. Doing so places the organization in unnecessary jeopardy on a daily basis.

Cloud vendors necessarily become targets for computer criminals, and however vigilant the vendor may be, at some point they're going to be compromised. Judging by the recent revelations of Stuxnet, Flame, and Duqu, this may have already happened. Don't think that I'm being overly paranoid, either. If I'd told you a month ago that several widespread viruses were completely undetectable by antivirus software due to the fact they were signed using Microsoft certificates, you'd have thought the same. But it happened.

If and when it comes to light that a major cloud vendor has been compromised for months and has divulged significant amounts of sensitive customer information to hackers over that period, we should not be surprised. I mean, City College of San Francisco had been compromised for more than a decade before anyone figured it out.

The fact of the matter is that a significant internal or external event occurring at one or more cloud providers can be ruinous for that provider and, by extension, its customers. That means you in IT. The best idea is to use cloud offerings wisely, and be ever vigilant about maintaining control over what little you can. Trust, but verify -- and keep your cards close to the vest.

Mark Brennan

IT has been failing for two decades. Every business group at every company hates the IT group. They say NO. They say it will take 6 months, then they take 9 months, when you think it should take 3 weeks.

Adopting a cloud strategy gets you free from so many statutory shackles that keep your IT team locked in negative mode. It also allows you to let go of the more bureaucratic elements within IT. Keep the business analysts, apps administrators, developers and project managers. Sit them closer to the business. Reduce your DBA and network engineering effort. Shrink your compliance and environments footprint. Your business leaders will appreciate it.

A security breach may occur one day. It's less likely in the cloud than on your on-premise apps. You will not be decapitated. You're more likely to be fired for resisting the cloud than for adopting it.

MSHYYC

My experience has not been so universally negative. I find that the more engaged IT professionals are in core business operations, the better. It is a two-way street--IT people have to be engaging and managers have to engage them.

What tends to happen, especially in large operations, is that IT is treated like a "cost centre" that must be minimised and marginalised -- they are a "necessary evil". The "statutory shackles" you speak of aren't a defining characteristic of in-house IT and are not arbitrarily handed down from on IT high--they are generally put in place to meet business requirements. If you think about it, of course you hate the IT department--you've done your best to make THEM hate YOU.

On the other hand, if IT is treated as an asset/investment, and IT is engaged in core business operations it works great (this, as I said, requires effort on BOTH management and IT sides--IT pro's really have to work on soft skills and working in multidisciplinary teams more than they typically do now). It doesn't happen this way often enough but I've seen a few shining examples where they "get it"--especially in SMEs.

To take your example and turn it around: the PHBs draft a vague requirements document, pull some time and money budget numbers out of their butts, then toss it over the wall into the IT department (the 3-week expectation). The IT guys respond back with a raft of questions to nail down the requirements better, and after some back and forth (and because of a history of poor planning) they manage to get 6 months to accomplish the task. Six months come around and testing starts happening, only what was asked for isn't really what they wanted, and changes must be made or the whole thing is useless. Three months later, the project is finally done.

An IT team that continually says "NO" and never gets things done on time is largely a symptom of larger mismanagement issues--quite often management's requirements do not line up with available resources in IT, so IT must either say "no" because they cannot manage what is being requested properly, or they take forever because everyone is too busy or lacks the skills to do it right and on time. "The cloud" will not change this situation--it merely moves the steaming pile of poo out of your server room into another data centre. Someone still has to deal with the stink. If you execute your outsourced cloud infrastructure with the same ineptitude as you manage IT in-house then you are pretty near as likely to suffer outages and security breaches regardless.

I've dealt with this before, and I'm in the midst of dealing with it again: IT infrastructure that was moved offsite to fix things turning into an even bigger disaster because the root of the problem was not addressed, only brought more into the light. Management starts fighting with the service provider over requirements and scope (mostly in terms of who eats the cost), broken chains of communication result in, well, loss of communication to their important offsite services, and when support is required it takes even longer than it did when it was in house. Unlike IT, the service provider never says NO, which is nice... until they comply and then send you a big whopping bill.

This amazing "cloud" certainly has its benefits, but it is DEFINITELY no panacea. Success really depends upon attitude, no matter what strategy or technology you use. Get your house in order before you make any sort of move on outsourcing your services or infrastructure, or you will face certain disillusionment.

Mark Marquis
The cloud brings economies of scale to support operations and subsequently frees up resources for the real effort of creatively transforming business processes and ultimately increasing the efficiency and effectiveness of the company. Streamlining business operations and reducing overall support costs through cloud-based companies that have repeatable, well-defined support processes in place is a good thing. I always find it interesting how, even though we work in the technology sector, there is always so much resistance to the same kind of technological progress that we grew up with.
MSHYYC
Do you write those sales brochures that pointy-haired bosses like to drop on your desk and say something like "this sounds really smart, we need to get one"? It's just that it really sounds like it, due to the lack of actual content. HOW do the "economies of scale to support operations" free up resources? Even better, how do you even define this "cloud"? Is it purely infrastructure--elastic virtual machines floating around out there on a cloud? Then all you are doing is liberating yourself from hardware and networking support--you still have your applications to worry about. Perhaps you are talking about storage management and data warehousing, perhaps offloading the responsibility for that? Maybe you mean going further up the stack and having hosted services for email, ERP functions, etc.?

These are all worthwhile to look at, but EVERYTHING has tradeoffs. Sometimes these tradeoffs are not a problem at all; other times they are show stoppers. Do regulatory requirements prohibit you from putting something out there on the cloud? Are you a smaller business without specialised IT needs, where all you need are typical hosted services? Is your business willing to "creatively transform its business processes" enough to work effectively with the service provider?

Heh. "Creatively transforming business processes". That's awesome...a virtually content-free phrase for a brochure or magazine ad:

CLOUDCO...helping you creatively transform your business processes since 2009 (r) (tm)

Sorry if I have offended, no offence was meant...I am just in a bit of a silly mood and I read this stuff and it sent my mind on a tangent.

techychick

Because God knows, no one ever hacked a local network. You want to kill your IT career? Be afraid of innovation. Yeah this whole www stuff is just a fad anyway...

Duke

@Techychick makes a good point about the need to change with the times. Certainly a local network CAN be hacked; however, we lock down firewalls to present the smallest possible footprint to potential intruders. We don't respond to a PING on WAN ports and, at least in part, we rely on "security by obscurity."

Cloud services, being open and accessible by their very nature, present a tempting target for script kiddies around the world to bang away 24x7.

Without a doubt the next 12 to 18 months will reveal stories of mass cloud intrusions. Those instances will lead to better security, three factor authentication and other improvements to make cloud services very much like the LANs behind a firewall.

"Cheap, Fast, Secure. Pick any two" still rings true.

TJ Evans

Exactly right! This is not a field that embraces stagnation, at least not for long :).

vyengr

And this is also a field where not every overnight sensation stands the test of time and not every "old" technology dies in a year or two - how many people are still using XP? IT managers who are easily seduced by innovation for the sake of innovation will probably not last long.

IBM Dropbox, iCloud Ban Highlights Cloud Security Issues

IBM might have signed onto a limited version of the "Bring Your Own Device" policy currently gripping many companies, but it has reportedly banned employees from using certain cloud-based apps such as Dropbox.

According to a widely circulated May 21 story in Technology Review, IBM not only forbids Dropbox and cloud services such as Apple's iCloud, but has put its proverbial foot down on smartphone-generated WiFi hotspots, as well as the practice of auto-forwarding work email to personal email accounts. "We found a tremendous lack of awareness as to what constitutes a risk," Jeanette Horan, IBM's chief information officer, told the publication.

When approached for comment, an IBM spokesperson said: "No comment as the story speaks for itself."

The introduction of commercial cloud services into an enterprise context has become a source of consternation for many an IT professional, and not only on the security front. Dave Robinson, an executive with online-backup firm Mozy (and one of its first employees), suggested in an interview that many clients adopting his company's products want very specific functionality.

"You do get into one-offs, where one organization's environment is different from others," he said, "and they use niche software, and that forces us to make decisions; in some instances, we might do a one-off work." In general, he added, companies want "a very robust administrative dashboard" in addition to strong security and the ability to set policies.

Those requirements haven't stopped companies from gravitating toward software originally designed for consumers. "About 70 percent of our business is B2B [business-to-business], and 30 percent is consumer," he said. "It was 100 percent consumer in 2007." The challenge in that context is to keep the core product simple and streamlined, in contrast to many pieces of enterprise software that offer dashboards loaded with dozens of very granular controls and options.

Dropbox declined to discuss its business market or security.

Security remains a top concern for businesses thinking of adopting cloud-based consumer apps. "That can cover everything from data safety/recovery to securing data in transit and at rest to whether a vendor can meet a company's compliance requirements," Charles King, principal analyst at Pund-IT, wrote in an email. "The same issues touch most cloud services/service providers, but the issues are more important by orders of magnitude in the business world than they are in the consumer space."

For companies with particularly stringent requirements, such as IBM, it seems the go-to solution is to either ban consumer-centric apps and services, or else institute very specific security policies that regulate those products' behavior.

[Jan 20, 2012] What Happens To Your Files When a Cloud Service Shuts Down

January 20, 2012 | Slashdot

MrSeb:

Megaupload's shutdown poses an interesting question: What happens to all the files that were stored on the servers? XDA-Developers, for example, has more than 200,000 links to Megaupload - and this morning, they're all broken, with very little hope of them returning. What happens if a similar service, like Dropbox, gets shut down - either through bankruptcy, or federal take-down? Will you be given a chance to download your files, or helped to migrate them to another similar service? What about data stored on enterprise services like Azure or AWS - are they more safe?" And if you're interested, the full indictment against Megaupload is now available.
XXX

The actual answer is (as always) to have backups of anything you feel is important. If the data is important enough, you make multiple backups to different kinds of media and store them in different places.

And, with any backup solution, one must plan for contingencies. Now that MU is offline, and the other personal file uploading sites are in danger of the same scrutiny/takedown, maybe it's time to roll your own private cloud with friends and family as storage nodes. They host your files, you host theirs. Model it after a weird hybrid bittorrent/RAID setup. That whole Storage Spaces thing from Microsoft would be a good model if it can be scaled to the network layer. The loss of any node would not bring down the entire storage pool and would allow itself enough time to re-balance the load among the remaining nodes.

Obviously, there are some logistics concerns with this method. However, a private cloud like this would certainly survive the antics of a jilted media conglomerate (or a cabal of them). And, as it would be a backup solution to data you are already keeping elsewhere (right?), it wouldn't be the only copy of the data in the event the cloud goes down.

Lashat:
Hey you just described early 1990's BBS'

welcome to the future.

forkfail

That, in a word, is horseshit.

The legitimate users of the service have lost real property without any intent to do wrong. The takedown was without warning. The folks who lost their legitimate data have had their fourth amendment rights absolutely trampled.

And you think they should be grateful that all they lost was their data, and not their physical freedom?

forkfail

A huge part of the whole cloud approach is that it is an approach to data storage that comes with all of the redundancy built in. The idea is that it's expensive to run your own redundant data stores, keep them secure, etc. So, one basically outsources it to the cloud.

Now we're in a situation where the actions of some subset of the users of a given cloud can bring the entire thing down for everyone, resulting in the loss and exposure of everyone's data.

Let's consider for a minute AWS. There are hundreds (if not thousands) of companies that exist pretty much solely in AWS space. They rely upon the cloud for their existence. AWS is a lot more reputable than Megaupload. However, at the end of the day, the same problem potentially exists with storing things in the AWS cloud.

And if this can happen to one company, it can happen to any, including the "more reputable" ones like AWS. Especially with the SOPA-esque laws and treaties being pushed.

This will absolutely break the cloud model. It renders all the advantages of the cloud moot, and in fact, opens up a completely new security hole (that of unwarranted seizure and or destruction of data by government agencies, or perhaps even rival corporations with an accusation of illicit content). Disney thinks that MyLittleComic is storing their data in JoesCloud? Accuse JoesCloud of hosting illicit data, get the whole thing nuked.

This results in loss of business (at least in the USA); it makes it harder for the smaller firms and startups to be viable; and it further entrenches those corporations that are big enough to pay the appropriate bribes^H^H^H^H^H^H lobbyist donations in Washington DC.

Finally, I would never, ever argue against due diligence. I would, however, claim that for a number of organizations cloud use IS due diligence. And I'd still maintain that a good number of folks' fourth amendment rights were just tossed into the crapper.

eldavojohn

Well, the summary specifically references a developer's forum where I can sympathize (being a developer) with people modding Android ROMs or whatever and uploading such binaries for distribution to others. I guess the people who run the forum don't really get a say in any of this. However, as a software developer, I can imagine a third option for files that are user generated (and for the most part legal).

Now XDA-Developers is going to have tens of thousands of once helpful posts that now lead to a broken link. How could they have avoided this? Well, I'd imagine that someone could have written an internal bot for their forums that would harvest links to the external megaupload. They then could have subscribed to megaupload, downloaded said linked files and created a local cache of their files purely for their own use on a small RAID. Now the last thing the bot would need to do is take the megaupload URL and develop some unique URI ... perhaps a hash of the date, checksum and filename? It would then maintain a key-value pair of these megaupload links to your internal URIs and also a directory structure of these URIs as the files. Now, say megaupload is a very unreliable/questionable service or goes down and now your forum is worthless. Well, you can always re-spider your site and replace all the megaupload links with links to your cloud hosting of these new files or work out a deal with another third party similar to megaupload where they would accept the file and URI and return to you the URI paired with their new URL. Then it's a matter of spidering your site and replacing the megaupload links with your new service's URLs.

It's a pain in the ass but let's face it, some forums could perish when their codependence on megaupload is fully realized in a very painful manner. And I don't think that's a fair risk to the users who have created hundreds of thousands of posts.

dissy

I never understood why people would upload a copy of a file to the Internet, manually/purposefully delete their only local copy, and proceed to complain that they no longer have a local copy.
Why on earth would you delete it from your computer?!?

There is NO excuse for this problem.

This is FAR from a new issue with "the cloud" either.
People used to do the exact same thing with web-hosting.
They would upload their website to a web server somewhere, delete their only copy, then when the hosting company went under, had the server crash, disk failure, whatever... the user would proceed to blame the ISP for the fact the user themselves deleted their only copy from their own computer. wtf?

The standard rule for backups is, if you can't bother to have two copies (One on your computer, one backed up on another device) then it clearly wasn't important enough to warrant bitching about when you lose it. That rule implied ONE copy was not enough... Why on earth would people think ZERO copies is any better?

Hard drives die. It's a fact of life. The "if" is always a yes, only the "when" is variable.
That fact alone is reason enough to already have more than one copy in your own home on your own equipment.

A provider disappearing like this should be nothing worse than a minor inconvenience in finding somewhere else to host it and upload another copy, then chase down URLs pointing there and update them. Sure, that can be a bit of work and is quite annoying, but it should be nothing on the scale of data loss.

Storage is cheap.

Encryption is easy (Thanks to the efforts of projects like PGP [symantec.com], GPG [gnupg.org], and TrueCrypt [truecrypt.org])

BackupPC [sourceforge.net] is free, runs on Linux which is free, and can be as simple as an old Pentium-2 desktop sitting unused in your basement that you toss a couple extra hard drives in.

You set it up once and it does everything for you! It daily grabs copies of other computers, all automated, all by itself. It can backup Linux, Windows, and even OSX via the network. You can feed it DHCP logs to watch for less frequently connected machines like laptops. It de-duplicates to save disk space, and can email you if and when a problem crops up. I only check mine twice or so a year just to make sure things are running (never had a problem yet) and as it deletes older backups only when needed to make room for new ones, with de-duplication I can go grab a file from any date between now and three years ago, at any stage of editing (Well, in 3 day increments for my servers.. but it's all configurable, and should be set based on the importance of the data!)
On ubuntu and debian based systems, it is a single apt-get install away. Likely just as easy on any other distro with package management.
Any true computer geek can slap together such a system with zero cost and spending less than an afternoon. Anyone else can do so for minimal cost and perhaps a day of work.

Apple has ridiculously easy backup software (Time Machine?), and Windows has the advantage of most of the software out there being written for it, so the odds that there are less than five different software packages to do this exact same thing is next to impossible.

Hell, even for non-geeks, most people have that one guy or gal in the family who supports everyones computers. Just ask them! They will likely be ecstatic to help, possibly will donate spare parts from their collection (Or find you the best prices on parts if not) - and be content in the fact they won't have to tell you things like "Sorry, your hard drive has the click-o-death, I can't recover anything from it." which no one likes to need to say.

This is worth repeating: There is NO excuse for this problem.

Personally, if it's important, I have a bare minimum of four copies.

racermd

The actual answer is (as always) to have backups of anything you feel is important. If the data is important enough, you make multiple backups to different kinds of media and store them in different places.

And, with any backup solution, one must plan for contingencies. Now that MU is offline, and the other personal file uploading sites are in danger of the same scrutiny/takedown, maybe it's time to roll your own private cloud with friends and family as storage nodes. They host your files, you host theirs. Model it after a weird hybrid bittorrent/RAID setup. That whole Storage Spaces thing from Microsoft would be a good model if it can be scaled to the network layer. The loss of any node would not bring down the entire storage pool and would allow itself enough time to re-balance the load among the remaining nodes.

Obviously, there are some logistics concerns with this method. However, a private cloud like this would certainly survive the antics of a jilted media conglomerate (or a cabal of them). And, as it would be a backup solution to data you are already keeping elsewhere (right?), it wouldn't be the only copy of the data in the event the cloud goes down.
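A minimal sketch of that idea in Python: each already-encrypted file is pushed to a couple of the friend-and-family nodes chosen by a content hash, so losing any single node never loses the only remote copy. The peer hostnames, replica count and push_to_node() transport are all hypothetical placeholders, not a real protocol.

    # Toy sketch of a "friends and family" storage pool: every file is encrypted
    # locally, then replicated to REPLICAS of the peer nodes, so the loss of any
    # single node does not lose the only remote copy. The transport is a stub.
    import hashlib

    PEERS = ["alice.example.net", "bob.example.net", "carol.example.net"]  # assumed peer hosts
    REPLICAS = 2                                                           # remote copies per file

    def push_to_node(node: str, file_id: str, blob: bytes) -> None:
        """Stand-in for whatever transport the peers agree on (rsync, SFTP, ...)."""
        print(f"would send {len(blob)} bytes of {file_id[:8]}... to {node}")

    def pick_nodes(file_id: str, peers, replicas):
        """Deterministically choose which peers hold this file (simple rendezvous hashing)."""
        ranked = sorted(peers, key=lambda p: hashlib.sha256((file_id + p).encode()).hexdigest())
        return ranked[:replicas]

    def store(ciphertext: bytes) -> str:
        """Place an already-encrypted blob on REPLICAS peers; return its content id."""
        file_id = hashlib.sha256(ciphertext).hexdigest()
        for node in pick_nodes(file_id, PEERS, REPLICAS):
            push_to_node(node, file_id, ciphertext)
        return file_id    # record this locally so the file can be located and verified later

Erasure coding would be cheaper on disk than plain replication, but replication keeps the sketch trivial.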

Marxist Hacker

And if it's damning, there are plenty of dead-man-switch based e-mail services that will happily e-mail your file to several news outlets for a cheap price if you fail to check in.

Lashat

welcome to the future.

Sentrion

"...roll your own private cloud with friends and family as storage nodes. They host your files, you host theirs. Model it after a weird hybrid bittorrent/RAID setup..."

Once the FBI and **AA's find out you're "rolling your own" underground clandestine P2P under-the-radar private information (translated into "intelligence" by the agency) sharing (translated into "espionage") system, they will for sure decide that you are a terrorist/spy and send a drone to take out your network and your family.

They probably already have sniffers searching for this activity while we speak. ** Puts on tinfoil hat **

Kevin Stevens

I am surprised that NASes haven't caught on very well. I have had one since 2007, and have been living in "the cloud" ever since. I can access all of my data over the internet, and it also serves as a nice little low-power web server that can run Gallery and various other apps. It can stream media, and I can even kick off a BitTorrent movie download at work, and then watch it when I get home. All the other functions are really just gravy, as I originally bought this setup to replace a large, old, power-hungry PC that was acting as a file server to supplement my roommate's and my meager laptop drives. I am protected both by RAID 1 and an external USB hard drive that I do a full backup to on a weekly basis. The only thing I am really missing is an off-site backup, which I could do if I were willing to swap out disks, or pay for a service that would allow me to do an online backup.

It's a little pricey (about $400 for disks plus the NAS itself) and requires some knowledge to set up properly, but I have no real space limitations or upload/download limits, and I can add or disable features as I see fit. Oh, and of course, mine runs Linux on top of a low-power ARM CPU.

JobyOne

That answer doesn't work for a forum like XDA-Developers. They can't exactly back up the URLs that all their links point to. If a service like this goes down backups do nothing to alleviate the painful process of updating all their gazillion links to point wherever they move the new copies from their backups to.

I thought old people knew the saying "throwing the baby out with the bathwater." Where was that kind of reasoning here?

Anonymous Brave Guy
The actual answer is (as always) to have backups of anything you feel is important.

Ironically, the specialist on-line back-up services seem to be among the worst offenders in terms of guarantees.

For example, we looked into this a few months ago, and one huge and very well known back-up service had Ts & Cs that seemed to say (quite clearly, IIRC) that if they decided to close down the service for any reason then they would have no obligation in terms of granting customers data access beyond letting you download what you could over the next 3 days. On a fully saturated leased line, with
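The comment breaks off, but the back-of-envelope arithmetic it is heading toward is easy to run yourself; a rough estimate in Python, assuming (purely for illustration) a 100 Mbit/s leased line and ignoring protocol overhead:

    # Rough estimate: data retrievable in a 3-day shutdown window on a saturated line.
    # The line speed is an assumption for illustration; real overhead makes it worse.
    line_mbit_per_s = 100          # assumed leased-line speed
    window_hours = 3 * 24          # the 3-day grace period in the Ts & Cs

    bytes_total = line_mbit_per_s * 1e6 / 8 * window_hours * 3600
    print(f"~{bytes_total / 1e12:.1f} TB retrievable at best")   # roughly 3.2 TB

At that line speed, anything much beyond a few terabytes simply cannot leave the service inside the window, whatever the contract says.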

ackthpt

As a point, the government will be using all files hosted on those servers as evidence in the case. They will not likely, and are not required to, give access to those files.

Yeah, expect a subpoena in the mail.

"Uh, I was so shocked by the news I forgot the password to my 8GB zip file."

"No worries, we have a crack team of security hackers who will have it open in a few minutes if you can't supply it."

"..."

"We'll call upon you if we need you for anything. Bye!" *click* nrrrrr...

*click* diit-doot-doot-deet-diit-doot-deet-doot-deet-doot "Hello, I'd like a ticket to New Zealand! FAST!"

KhabaLox

Seeing as Dotcom was arrested in NZ, you may want to fly to a less US-friendly locale. I hear Venezuela is lovely this time of year.

Nethemas the Great

However, you may wish to relocate somewhere that has a reasonable economy and fewer ill feelings towards the US or its citizens. Accordingly Brazil might be a better choice since it has traditionally given the finger to US extradition requests.

hedwards

No they haven't, they just don't rubber stamp them the way that they do in some places. If you really want to be safe go to Ireland; they rarely extradite anybody to the US. The last time I heard of them doing it was somebody who had killed 3 people in a drunk driving crash. Before that it had been literally years since they extradited anybody at all to the US.

sulimma:

The EU is currently evaluating whether all extraditions to the US should be stopped, because the Bradley Manning case shows that suspects in the US are not safe from torture (long periods of isolation are torture according to international standards).

jdastrup
Anonymous Coward

RAR? What is this, the early 2000s? Don't you mean 7-Zip?

quaero_notitia

No, pirates don't use RAR or ZIP. They use YARR, matey!

Saberwind letherial:
"No worries, we have a crack team of security hackers who will have it open in a few minutes if you can't supply it." Well, good luck with that: it's a TrueCrypt file disguised as a .zip, the password is 50 characters long, and it also requires 10 files, all of which were destroyed by 'accident'. So I hope your super crack team has a lot of crack.
mcrbids

It had to be said: obligatory xkcd reference. [xkcd.com]

EdIII
In all seriousness, changing the file name on a TrueCrypt file won't help you. The file headers won't match, and you can tell what a file is by inspection. Full disk encryption shows up right away unless you modify the boot loaders.

It should not be all that hard to distinguish a Truecrypt file from other files just through classification alone.

The strength of Truecrypt is not so much in hiding the fact you are using it, but the strength of chaining multiple algorithms together, random pools to create

Altrag
Don't be ridiculous. This is the 21st century. Evidence isn't necessary beyond a vague plausibility when it comes to copyright infringement.

And once the lawsuit is started, it doesn't really matter if you're guilty or not, since you don't have the time or money to fight the legal battle anyway (for a statistically probable definition of "you.")

America: Guilty until innocence is paid for.

camperdave

Don't be naive. Their crack team of security hackers will "open your zip file" and find kiddy porn, letters to Al-Qaeda, homemade explosives recipes, and blueprints to JFK and O'Hare.

LordLimecat
If they can crack your encrypted zip file in a few minutes, then you've done something horribly wrong.

Protip, ROT13 is encoding, not encryption.

roc97007

I know cloud storage is trendy and all, and maybe I'm just an old fogey, but things like this just confirm my feeling that you should keep your stuff local. There isn't a lot of functional difference between a local storage appliance and storing your stuff in "the cloud". You can even outsource administration if you choose. The difference is, you won't lose your stuff due to the suspected bad behavior of some other company.

forkfail

That, in a word, is horseshit.

The legitimate users of the service have lost real property without any intent to do wrong. The takedown was without warning. The folks who lost their legitimate data have had their fourth amendment rights absolutely trampled.

And you think they should be grateful that all they lost was their data, and not their physical freedom?

jamstar7

Pretty much, yeah. Especially in today's climate of 'guilty by association, no trial needed'.

Post your legit files on a pirate fileserver, get busted with 'the rest of the pirates' and shame on you!

Shoulda just did a full backup and took it home like we did in the old days.

sjames

I have no doubt that some people used Megaupload for copyright infringement, but it was also a perfectly legitimate service used for lawful purposes by many people.

We don't go and bust Ma Bell just because we know that more than one crime has been plotted over the phone.

hedwards

Unless they're complete morons they haven't lost any data. Anybody that trusts a cloud service to protect their data without retaining at least one copy is asking for trouble. So, unless their house burned down and their backups melted offsite as well they shouldn't have lost any data.

That being said, losing data to the feds, who can then trawl through it looking for criminal offenses, is a much more reasonable complaint. Although those folks really should have chosen a service that encrypts the copies on the server.

betterunixthanunix

those folks really should have chosen a service that encrypts the copies on the server.

Right, because you can trust them not to decrypt everything for the government:

http://digital-lifestyles.info/2007/11/09/hushmail-opens-emails-to-us-dea/ [digital-lifestyles.info]

hedwards

That's why you insist upon the data being encrypted before being uploaded to the service and only being decrypted after it's safely on your computer. It greatly reduces the amount of things that can happen to it when it's out of your control.
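A minimal sketch of that discipline, using the Python `cryptography` package's Fernet recipe; the upload() helper is a hypothetical stand-in for whatever transfer mechanism the provider offers, and the point is simply that the key and the plaintext never leave your machine.

    # Encrypt locally before upload, decrypt locally after download.
    # The provider only ever sees ciphertext; the key stays on your own machine.
    from cryptography.fernet import Fernet

    def upload(remote_name: str, blob: bytes) -> None:
        """Stand-in for whatever transfer tool the provider offers (scp, S3 client, web form...)."""
        open(remote_name, "wb").write(blob)        # placeholder: write locally for the demo

    key = Fernet.generate_key()                    # store this safely and locally, NOT in the cloud
    box = Fernet(key)

    plaintext = open("tax-records-2011.tar", "rb").read()
    upload("tax-records-2011.tar.enc", box.encrypt(plaintext))

    # Later, after fetching the blob back from the provider:
    restored = box.decrypt(open("tax-records-2011.tar.enc", "rb").read())
    assert restored == plaintext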

forkfail

A huge part of the whole cloud approach is that it is an approach to data storage that comes with all of the redundancy built in. The idea is that it's expensive to run your own redundant data stores, keep them secure, etc. So, one basically outsources it to the cloud.

Now we're in a situation where the manner in which some subset of the users of a given cloud can bring the entire thing down for everyone, resulting in the loss and exposure of everyone's data.

Let's consider for a minute AWS. There are hundreds (if not thousands) of companies that exist pretty much solely in AWS space. They rely upon the cloud for their existence. AWS is a lot more reputable than Megaupload. However, at the end of the day, the same problem potentially exists with storing things in the AWS cloud.

And if this can happen to one company, it can happen to any, including the "more reputable" ones like AWS. Especially with the SOPA-esque laws and treaties being pushed.

This will absolutely break the cloud model. It renders all the advantages of the cloud moot, and in fact opens up a completely new security hole (that of unwarranted seizure and/or destruction of data by government agencies, or perhaps even rival corporations, with an accusation of illicit content). Disney thinks that MyLittleComic is storing their data in JoesCloud? Accuse JoesCloud of hosting illicit data, get the whole thing nuked.

This results in loss of business (at least in the USA); it makes it harder for the smaller firms and startups to be viable; and it further entrenches those corporations that are big enough to pay the appropriate bribes^H^H^H^H^H^H lobbyist donations in Washington DC.

Finally, I would never, ever argue against due diligence. I would, however, claim that for a number of organizations cloud use IS due diligence. And I'd still maintain that a good number of folks' fourth amendment rights were just tossed into the crapper.

EdIII
A huge part of the whole cloud approach is that it is an approach to data storage that comes with all of the redundancy built in. The idea is that it's expensive to run your own redundant data stores, keep them secure, etc. So, one basically outsources it to the cloud.

I disagree. If you are using the "cloud" as your sole backup strategy, you have failed. I personally keep three tiers of storage. Primary storage, which can be distributed and redundant by itself. Secondary storage, which is really just a copy of Primary but on different hardware. Finally, offsite storage: I don't use Amazon for that, but another service that does differential backups with versioning, and we maintain the encryption keys locally. Only encrypted data gets uploaded to the service.

And if this can happen to one company, it can happen to any, including the "more reputable" ones like AWS. Especially with the SOPA-esque laws and treaties being pushed.

I don't

cusco

If you honestly managed to avoid being aware that this was an incredibly risky proposition, I feel sorry for you.

Then feel sorry for the thousands of small businesses who can't afford their own IT shops and have to farm their IT services out to consultants. If the consultant says, "We can host your data in the cloud so that both your office in Spokane and the one in Portland can access it without an expensive leased line and two dedicated file servers and save you a ton of money" it sounds like a good idea. "The Cloud" is the big buzz word, being pushed by some very respectable companies like IBM, Amazon, Apple and Microsoft, and the person who they're paying to be the expert recommends it. System works fine, they save a ton of money, they sell widgets or insurance, they're not IT experts. They're the ones who are going to get screwed, and royally.

kiwimate
The legitimate users of the service have lost real property

No they haven't. It has been argued time and time again on this very site that the idea of "intellectual property" is nonsense and that the loss of data does not deprive you of anything real. If it's a legitimate argument for people who download music and movies, then it's a legitimate argument in this case. Or else it's inaccurate in both cases. You can't have it both ways.

mehrotra.akash

It's the difference between going from 1 copy of the data to 2, versus going down to 0.

next_ghost
No they haven't. It has been argued time and time again on this very site that the idea of "intellectual property" is nonsense and that the loss of data does not deprive you of anything real. If it's a legitimate argument for people who download music and movies, then it's a legitimate argument in this case. Or else it's inaccurate in both cases. You can't have it both ways.

The discussions you're referring to were about making more copies of the data. This discussion is about taking servers offline along with the copies they held, many of which were probably the last copies accessible to the original uploader. This is akin to the BBC scrapping its archives in the 1970s. Good luck getting the surviving copies back from those who downloaded them before the server shutdown.

kiwimate

According to the indictment:

all users are warned in Megaupload.com's Frequently Asked Questions and Terms of Service that they should not keep the sole copy of any file on Megaupload.com and that users bear all risk of data loss. The Mega Conspiracy's duty to retain any data for even a premium user explicitly ends when either the premium subscription runs out or Megaupload.com decides, at its sole discretion and without any required notice, to stop operating.

But besides this, Megaupload was not positioned as a legitimate backup site. If that's what people wanted, it sure wasn't competing against Carbonite. Numerous sources report that if you didn't have a premium account, any files you uploaded got deleted if they weren't downloaded within a 21-day period. That's not for backups; that's purely for sharing files, for transferring files from me to you.

There are a ton of people in this story saying exactly this - if you uploaded your only copy of a file to this (or any other) cloud site, then more fool you.

Finally, my comment was about the poster I replied to talking about people being deprived of real property, and pointing out that the prevailing claim on Slashdot is that data files aren't real. One or a thousand copies of the file - according to posters here, it makes no difference in the real world.

So a data file disappears, forever? So what? Nobody's lost real property, have they? Unless you argue about all the work and effort and time spent to create that work - but now we're back to recognizing that electronic data files, despite not being real, nonetheless have "real" origins, and "real" impacts.

The debate is clearly purely semantic, but it's used constantly on Slashdot when the shoe is on the other foot and it's somehow considered an irrefutable stance.

a_nonamiss

If I were to physically deprive an artist of his or her only copy of his or her intellectual property, then we'd be making an apt comparison. As it is, it seems like you're just trolling for the **AA. In the Megaupload case, I would guess that with the amount of data taken down, at least one person, probably thousands, have been deprived of their only copy of data, which is real property. If I download a copy of Michael Jackson's Thriller album from LimeWire, I'm not depriving anyone of anything.

I'm not defending copyright infringement here, I'm just pointing out your terrible logic.

Brett Buck

I don't think you understand - IP is only real property when someone steals or makes unavailable YOUR IP. When it's someone else's IP, then it should be free and they are terrible people for trying to keep it to themselves, you know, data (i.e. popular movies that people spend tens or hundreds of millions to make) wants to be free, man!

Brett

Speare

Copying is not theft. Jefferson said "he who lights a taper from me, receives light without darkening me."

But destruction of your only copy IS theft. "He who snuffs my own taper while it's sitting on the shelf where I intentionally left it for access later DOES darken me."

Sure, some people use cloud storage as a way to transfer files from point A to point B, ending up with three copies: A's, cloud's, and B's. But many people use cloud storage for... you know... storage. Archives. Record-keeping. Zero copies at home, one archive copy in the cloud. This is a real danger of cloud services, and governmental shuttering of sites is only one way that a cloud can fail.

DVega

The loss is real if all the copies of a piece of work disappear. The loss is imaginary/trivial if more copies exist somewhere else. For some files on Megaupload there are no known copies.

Is it an interesting question...
boundary

...if the answer is "backup"?

Everyone has been told time and again that backing up to the cloud is a great idea. A lot of businesses bought into that. The risks of doing just that have now been made abundantly clear. Personally I'm reaching for my DAT.

winkydink

It does if you're doing it right. But you could still be screwed if your dog eats your laptop right after your cloud goes poof.

GameboyRMH

Cloud backup: The safety of an 8-member RAID0 array of SSDs combined with the speed of tape.

jimicus

Tape has terrible random access speed but any half-decent LTO tape drive can move data as fast as - if not faster than - most hard disks.

thegarbz

Offsite != cloud. Though you know this already. Personally I find the idea of using the cloud for offsite backups horrendous. The last thing I want to do after having lost everything is wait for eons for 100GB of backup to finish.

The slow speed of internet services in general is a disincentive to perform frequent backups. Use physical media, and take it offsite to a different location. Store it at your friend's house or at work.

Ihmhi

Doesn't "backing up" also usually involve multiple failsafes?

Why would you upload it to one place and *only* one place?

You can easily use sites like Rapidshare, Fileserve, etc. as a backup service. Links deleted in 90 days? Have an automated script download them every 89 days to reset the counter or however the rules go.
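A rough sketch of the kind of "every 89 days" refresher the comment describes, assuming the hosting rules really do reset an inactivity counter on download; the links.json bookkeeping file and the 90-day window are illustrative assumptions, not any particular site's policy.

    # Re-download each mirrored link shortly before an assumed 90-day inactivity
    # window expires, so the file is never deleted for being idle.
    import json
    import urllib.request
    from datetime import datetime, timedelta

    REFRESH_AFTER = timedelta(days=89)        # just inside the assumed 90-day deletion window

    def refresh(state_file: str = "links.json") -> None:
        """Re-download every mirrored link that has been idle too long."""
        state = json.load(open(state_file))   # {url: ISO timestamp of the last fetch}
        for url, last in state.items():
            if datetime.utcnow() - datetime.fromisoformat(last) >= REFRESH_AFTER:
                urllib.request.urlopen(url).read()     # the download itself resets the host's counter
                state[url] = datetime.utcnow().isoformat()
        json.dump(state, open(state_file, "w"), indent=2)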

ByOhTek

I think any source is at risk. Relying on your data being in one location is always a risk.

Using a cloud data storage location simply adds another layer of redundancy to help prevent you from losing your data. It is probably not the most reliable method, and it is almost certainly not the most secure.

forkfail

Isn't "the cloud" supposed to provide redundant back?

Isn't that a big part of the point right there?

Richard_at_work

In the example given, it isn't even that interesting - with Dropbox, your files are available locally as it's a syncing service and not a cloud access service. The only scenario in which you won't have a local copy is if you are using the website only (i.e. not as the service is designed to be used) or have been very, very prolific with Selective Sync.

If Dropbox goes away, my files remain available in my local Dropbox folder.

OnTheEdge

Good question, but it's not really an issue for Dropbox as that service maintains full local copies on each of the computers I have on my account.

sandytaru

Exactly - redundancy is built into Dropbox, which is one of the benefits of the system and why I use it despite all its flaws.

frodo from middle ea

Sigh, redundancy and backup are two completely different things. Don't believe me? Delete a file from one of your Dropbox-synced PCs and watch that file disappear from all the other Dropbox-synced PCs when they sync up.

sethstorm

If you can afford to lose the data, it's fine to have it in the cloud.

If you can't, you are SOL if you don't have a backup - one that is not in the cloud.

Synerg1y

Yep, this is why on-shore cloud computing will never take off; why would a foreign entity want to be put in this position? XDA won't get their hosting back, but I highly doubt they lost anything; it's developers, after all. But imagine if your business relied on Megaupload, say for high-speed downloads of your company's product: you'd be hurting.

Still, I don't see how paying uploaders can directly be linked to promoting file sharing. It's still the uploader's choice to make the money via copyrighted material...

eldavojohn

Well, the summary specifically references a developer's forum where I can sympathize (being a developer) with people modding Android ROMs or whatever and uploading such binaries for distribution to others. I guess the people who run the forum don't really get a say in any of this. However, as a software developer, I can imagine a third option for files that are user generated (and for the most part legal).

Now XDA-Developers is going to have tens of thousands of once-helpful posts that lead to a broken link. How could they have avoided this? Well, I'd imagine that someone could have written an internal bot for their forums that would harvest links to the external Megaupload files. They then could have subscribed to Megaupload, downloaded the linked files, and created a local cache of them, purely for their own use, on a small RAID. The last thing the bot would need to do is take each Megaupload URL and derive some unique URI... perhaps a hash of the date, checksum and filename? It would then maintain a key-value mapping of these Megaupload links to your internal URIs, and also a directory structure with these URIs as the files.

Now, say Megaupload turns out to be a very unreliable/questionable service, or goes down, and your forum is suddenly worthless. Well, you can always re-spider your site and replace all the Megaupload links with links to your own hosting of these cached files, or work out a deal with another third party similar to Megaupload, where they would accept the file and URI and return to you the URI paired with their new URL. Then it's a matter of spidering your site and replacing the Megaupload links with your new service's URLs.

It's a pain in the ass but let's face it, some forums could perish when their codependence on megaupload is fully realized in a very painful manner. And I don't think that's a fair risk to the users who have created hundreds of thousands of posts.
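A minimal sketch of that scheme in Python, with harvest_links() and fetch() as hypothetical stand-ins for the forum spider and the downloader; the substance is the hash-keyed local cache plus the old-URL-to-internal-URI map that a later re-spidering pass would use to rewrite the posts.

    # Mirror externally hosted files and remember how to rewrite the old links.
    # harvest_links() and fetch() are hypothetical stand-ins for the forum spider
    # and the HTTP downloader; the point is the hash-keyed cache plus the URL map.
    import hashlib
    import json
    import os

    CACHE_DIR = "mirror"
    MAP_FILE = "url_map.json"            # {old external URL: internal URI}

    def harvest_links():
        """Stand-in for the spider that would yield every megaupload.com link in the post database."""
        return []

    def fetch(url: str) -> bytes:
        """Stand-in for the downloader used while the external host is still up."""
        raise NotImplementedError

    def mirror_all() -> None:
        os.makedirs(CACHE_DIR, exist_ok=True)
        url_map = json.load(open(MAP_FILE)) if os.path.exists(MAP_FILE) else {}
        for url in harvest_links():
            if url in url_map:                                   # already mirrored
                continue
            data = fetch(url)
            uri = hashlib.sha256(data).hexdigest()               # stable, content-derived identifier
            with open(os.path.join(CACHE_DIR, uri), "wb") as f:
                f.write(data)
            url_map[url] = "/files/" + uri                       # later: re-spider posts and swap this in
        json.dump(url_map, open(MAP_FILE, "w"), indent=2)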

dissy

I never understood why people would upload a copy of a file to the Internet, manually/purposefully delete their only local copy, and proceed to complain that they no longer have a local copy.

Why on earth would you delete it from your computer?!?

There is NO excuse for this problem.

This is FAR from a new issue with "the cloud" either.
People used to do the exact same thing with web-hosting.
They would upload their website to a web server somewhere, delete their only copy, then when the hosting company went under, had the server crash, disk failure, whatever... the user would proceed to blame the ISP for the fact the user themselves deleted their only copy from their own computer. wtf?

The standard rule for backups is: if you can't be bothered to have two copies (one on your computer, one backed up on another device), then it clearly wasn't important enough to warrant bitching about when you lose it. That rule implied that ONE copy was not enough... Why on earth would people think ZERO copies is any better?

Hard drives die. It's a fact of life. The "if" is always a yes, only the "when" is variable.
That fact alone is reason enough to already have more than one copy in your own home on your own equipment.
A provider disappearing like this should be nothing worse than a minor inconvenience in finding somewhere else to host it and upload another copy, then chase down URLs pointing there and update them. Sure, that can be a bit of work and is quite annoying, but it should be nothing on the scale of data loss.

Storage is cheap.

Encryption is easy (Thanks to the efforts of projects like PGP [symantec.com], GPG [gnupg.org], and TrueCrypt [truecrypt.org])
BackupPC [sourceforge.net] is free, runs on Linux which is free, and can be as simple as an old Pentium-2 desktop sitting unused in your basement that you toss a couple extra hard drives in.

You set it up once and it does everything for you! It grabs copies of the other computers daily, all automated, all by itself. It can back up Linux, Windows, and even OS X over the network. You can feed it DHCP logs to watch for less frequently connected machines like laptops. It de-duplicates to save disk space, and can email you if and when a problem crops up. I only check mine twice or so a year just to make sure things are running (never had a problem yet), and as it deletes older backups only when needed to make room for new ones, with de-duplication I can go grab a file from any date between now and three years ago, at any stage of editing (well, in 3-day increments for my servers... but it's all configurable, and should be set based on the importance of the data!)
On Ubuntu and Debian-based systems, it is a single apt-get install away. It is likely just as easy on any other distro with package management.

Any true computer geek can slap together such a system with zero cost and spending less than an afternoon. Anyone else can do so for minimal cost and perhaps a day of work.
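For the curious, the de-duplication trick mentioned above is easy to illustrate; a toy Python sketch (not BackupPC's actual code) that pools file contents by hash, so nightly copies of unchanged files cost almost no extra disk:

    # Toy content-hash pooling in the spirit of de-duplicating backup tools:
    # each unique file body is stored once in a pool, and every backup run
    # just records which pooled hashes it contains.
    import hashlib
    import os
    import shutil

    POOL = "pool"          # one copy of each unique file body lives here

    def pool_file(path: str) -> str:
        """Copy a file into the pool unless identical content is already stored."""
        os.makedirs(POOL, exist_ok=True)
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        target = os.path.join(POOL, digest)
        if not os.path.exists(target):     # new content: store it once
            shutil.copy2(path, target)
        return digest                      # repeated content: just reference the pooled copy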

Apple has ridiculously easy backup software (Time Machine), and Windows has the advantage that most of the software out there is written for it, so the odds that there are fewer than five different software packages that do this exact same thing are next to nil.

Hell, even for non-geeks, most people have that one guy or gal in the family who supports everyone's computers. Just ask them! They will likely be ecstatic to help, possibly will donate spare parts from their collection (or find you the best prices on parts if not) - and be content in the fact that they won't have to tell you things like "Sorry, your hard drive has the click of death, I can't recover anything from it," which no one likes having to say.

This is worth repeating: There is NO excuse for this problem.

Personally, if it's important, I have a bare minimum of four copies. One for actually using, on my system drive.

Cloud was a stupid idea from the start

unity100

Millions of users trusting a single giant computing grid owned by a single private corporation was foolish in the first place.

It is everyone putting their eggs in the same giant basket.

The risks range from policy changes to mergers/takeovers/acquisitions to bankruptcies to government intervention - whatever you can imagine. It's a single point of failure, and your important stuff is gone.

Moreover, this cloud stuff is used to make collaboration tools work. So if the cloud is gone, there goes your entire communication between your team, company, clients, workgroup, whatever.

It's strategically stupid. Run your own cloud if you want. Don't put your stuff on another company's turf. It's dangerous.

forkfail

But once the SOPA-esque laws and treaties become The Way That Things Are (tm) - and unless things change drastically, they eventually will - and once the Great Consolidation has run its course - what choice will there be?

fusiongyro: Re: All their eggs in the same basket

How is SOPA going to stop you from hosting your files yourself?

forkfail

It wouldn't.

It would, however, prevent you from using any sort of cloud hosting if you want to keep your data private. Because in order to be SOPA compliant, a cloud would have to scan your data to ensure that you didn't have any sort of "illicit" files.

So - why use the cloud at all? Well, for better or worse, services like AWS make it possible for certain businesses to grow and thrive - and in some cases, exist at all.

Which brings us back to my original point. Given the constant push by the seriously monied interests in SOPA-esque laws and treaties worldwide, and given the trend towards consolidation of the various corporations and services out there, eventually, it's going to be hard for a certain class of business and user not to have all their eggs in one basket - a basket that has both corporate and government eyes peeking at pretty much every bit that's out there.

If this scenario does not appeal, then perhaps a way to change the underlying trends of corporate and government Big Brotherhood needs to be found.

Steauengeglase

It isn't any government that is the problem. Stupid clients who think you are a dinosaur for not putting everything "in the cloud" are the problem.

Anonymous Coward

I've always wondered what happens to Pokemon in a trainer's computer when the trainer dies/quits/etc. I imagine the same would happen to Megaupload files. Like the Pokemon lost in a nonphysical oblivion for all eternity, these files will endure an endless torture of nothingness.

GameboyRMH

Lots of us do but few are willing to admit it

itchythebear

Has Megaupload been found guilty of anything? If not, why has their site been shut down? If copyright laws apply to the internet, then why doesn't due process?

Anonymous Coward

Because due process is applied, and yes, you can be arrested and put in jail before being found guilty.

I don't know what the specific procedure used in this case involved, but presumably they presented evidence to a judge that was persuasive enough to warrant this action.

That you are asking, without even expecting this to be the case, either means you are ignorant or deeply cynical.

AJH16

The servers can be seized as evidence and the service shut down to prevent additional harm being done while the case is decided. It's effectively very similar to a restraining order. It's a civil thing, so innocent until proven guilty doesn't apply, but rather until the issue is determined, the justice department moves to ensure more harm is not done. The idea is that to do so it should be pretty damn clear that policies are not being followed and the indictment does a pretty good job of documenting how

Anonymous Coward

The same reason you don't get to keep murdering people while the trial is going on.

Caerdwyn

For the same reason that some suspects are kept in jail pending their trial: it is considered highly likely by the judge presiding over the case that the criminal activity would continue, or evidence be destroyed. "Due process" includes that decision, and the prosecution and defendant both state their position before the judge makes that decision. That stage has passed.

BTW, I read the complaint. The core of the accusations are twofold: first that the Megaupload folks willfully hosted infringing content (thus losing the safe harbor protections that shield other hosting services); they knew and did nothing. Second, that through other businesses and websites they controlled, the Megaupload folks deliberately solicited infringing content and directed it to Megaupload (hence the "conspiracy" charges, which mean something very specific and not necessarily the tinfoil hats and black helicopters so popular among bloggers who think they know the meaning of a word). If those complaints are true (and none of us here knows that or will decide that; we are not the jury, and we are not seeing the evidence), then yeah, they're gonna go to jail and be stripped of every penny they own. That's reality, regardless of whether Anonymous, Slashdot, or anyone else likes it or not.

forkfail

Here's the problem with the "willful" argument in general.

Either you can have a cloud in which your data is private, or the owners of the cloud can actively prevent the use of the cloud for hosting "infringing content".

You can't have both.

[Jul 26, 2011] cloud computing reliability will not matter

March 15th, 2008 | AnyHosting

All the buzz about "cloud computing" is great, but isn't it just a rehash of "dumb terminal", "thin client" computing, that lost out big against the PC? Yes it is, but not for long; the browser does not need to be the modern equivalent of the terminal, chained to the call/response of HTTP requests in order to provide applications.

I wrote about this a while back, but I think it bears repeating... HTML 5 includes support for "offline" applications, including client-side storage, which means that current and upcoming versions of Firefox, Safari, and Opera will support running web applications locally on the user's computer, without needing to be in constant communication with the server.

Instead of asking your users to install your application in the traditional sense, visiting the website that hosts your application will cause the client to download and store everything needed to operate on the client side. The application can detect whether or not the computer is online, and attempt to connect to needed real-time, syncing, and other web services as needed, and only interrupt the user if absolutely necessary.

This means that the questionable reliability of having all of your applications hosted "in the cloud" is greatly mitigated, and impact on the end user is quite minimal. Even if your entire site is down, there's no reason for that to interrupt the user of your snazzy application; in fact, with cross-site AJAX support, the user can continue to fetch and transmit data with other websites (I'm thinking a real-time price comparison site, or something like that, which today would be implemented completely server-side and just fall over in this scenario), so it may be totally acceptable for your site to receive the queued up responses from clients when it comes back up, depending on what your application does of course.

For IE support, you could use something like Google Gears or Adobe Flash's offline capabilities, until Microsoft catches up to the rest of the world. This is the biggest pain point of the brave new offline world right now; however, it's a very real concern, as Microsoft IE still has around 70% of the global web browser market share. If this is something you need, check out Dojo's storage classes as a high-level library to abstract away these details for you; if you're doing a serious AJAX site nowadays you really should be using, or at least be intimately familiar with, the great toolkits like Dojo, MochiKit, jQuery, etc. There's no need for handling each browser/version case by hand nowadays, unless you have a really good reason.

[Jul 26, 2011] Do Cheap Power and Massive Data Centers Really Matter? by Derrick Harris

May 27, 2011 | Cloud Computing News

GoGrid CEO John Keagy wrote on his blog Thursday that when it comes to cloud computing, there are a couple of things that have been overrated in the industry, like cheap hydroelectric power and massive-scale data centers. What Keagy says does matter for making cloud computing financially compelling to both providers and users are things like pay-per-use pricing, automation, shared platforms and commodity hardware. His theory doesn't make sense for every cloud provider, but he does make some good points.

Actually, power and massive data centers aren't the only factors he calls out as being overvalued - Keagy also says focusing on data center containers, blade servers, super-efficient cooling and VMware has been overhyped - but they're probably the most controversial. After all, few would argue that containers, blades and/or VMware are less expensive - or necessarily better-performing - than the current alternative measures. But huge data centers and cheap power have become hallmark concerns when talking about the economics of cloud computing, so Keagy is treading on hallowed ground when he questions their importance.

Here's Keagy's stance on power:

At GoGrid, power represents less than 5% of our cost of goods sold. We're a nicely profitable company despite buying some of the most expensive power and cooling on the planet. Theoretically, we could knock a few points out of our COGS if we used a datacenter next to the Columbia River in Eastern Oregon. . . . But then we'd have new costs that we don't have now such as the costs of managing people who are native to Eastern Oregon and paying people from Silicon Valley to travel to Eastern Oregon to manage those people and lots of networking costs to take traffic to and from there which increases latency . . . .

Here's his stance on massive data centers:

These are critical to Google, which has been rumored to own 2% of the World's servers. . . . However, the efficiencies that this scale provides just aren't relevant in today's SaaS and IaaS markets yet. Margins are super high and businesses pay well for complex infrastructure. Super low-cost PaaS offerings aren't yet seeing traction from power users. Nobody is giving away free complex infrastructure (yet) on an ad supported model.

His points should be well taken when it comes to cloud computing, at least for small- to mid-sized providers such as GoGrid. If margins are high and profits are acceptable, there might be little financial value in building and managing large, geographically sited data centers that trim costs in some places while adding costs in other places. As long as customers aren't paying more because of the decision to use expensive power, who's to complain?

Heck, even Salesforce.com - which serves nearly 100,000 customers and is operating at a $2 billion run rate - seems to be on board with Keagy's theory. The SaaS leader only runs about 3,000 servers worldwide, and has said that if it's forced to build its own data centers, it will site them based on network connectivity and not cheap land and power. When cloud providers have multi-tenancy and automation down cold, they don't need huge server footprints and they don't need to worry about operating efficiently enough to keep profit margins up.

Cloudscaling Founder and CTO Randy Bias, who helped build GoGrid's cloud along with many others, noted the importance of automation last year. In a blog post challenging the proposition that only massive-scale clouds can achieve economies of scale, he wrote, "one major economy of scale is the ability to have significant resources deployed for software development purposes. The outcome of most cloud software development is generally automation or technology that enables the business to scale more efficiently."

Of course, this all depends on the type of business you're running. Companies like GoGrid and Salesforce.com have a pretty good idea what their computing demand will be at any given time, and they deal in longer-term commitments than does a cloud provider like Amazon Web Services. It's really one-of-a-kind in terms of its reach and the adoption of its API, which means AWS might care more about energy costs and scale than does GoGrid. It has to operate an infrastructure truly capable of handling huge traffic spikes coming from individual developers, large customers or third-party platforms (such as RightScale) that spin up AWS instances for their customers, so every wasted penny matters. Although, AWS does charge a higher rate for resources delivered from its Northern California region.

AWS also has the Amazon.com connection, and webscale companies such as Amazon definitely have to care about cheap power and scale. We'll hear more about this at Structure 2011 when data center experts from Facebook, Netflix, LinkedIn and Comcast sit down for a panel discussion, but the general idea is broadly accepted. When you're dealing with huge amounts of traffic coming from all over the world, you need lots of servers in lots of locations. Further, as companies such as Facebook, Google and Yahoo certainly will attest, storing all that customer data and building systems for analyzing data also require huge infrastructural investments. When you start adding up the cost of being energy-inefficient at that scale, investing in efficiency starts to make a lot more sense.

I think Keagy, who also will take the stage at Structure 2011, is right that everyone need not get preoccupied with mimicking Google, Amazon and Facebook when building clouds or when evaluating cloud providers. There are plenty of companies doing just fine and delivering quality services by following the web giants' lead in terms of automation and homogeneity without falling victim to the siren song of massive scale. I also think, however, that if GoGrid were to start driving a relatively high percentage of web traffic, Keagy probably would change his tune in a hurry.

[May 01, 2011] Amazon's Cloud fail is a wakeup call by Sean Michael Kerner

Amazon has been the poster child for everything that is good, right and holy about the cloud.

After today, Amazon will also be demonized for everything that is wrong with their own cloud. Amazon today suffered a major outage crippling hundreds (maybe thousands?) of sites (including a few of my favs like reddit).

For years, Amazon has been suggesting that their elastic cloud (leveraging Linux throughout as the underlying OS) had the ability to scale to meet demand. The general idea was supposed to be massive scalability without any single point of failure.

Apparently they weren't entirely accurate.

Today's outage proves that there are single points of failure in Amazon's cloud architecture that expose all their users to risk. The Amazon status report page identified the Amazon Elastic Compute Cloud in Northern Virginia as the root cause of some of the trouble today.

One data center going down should not take down a cloud, yet it did. Either the cloud isn't as elastic as Amazon would have its users believe, or maybe, the cloud is at the same risks as what we used to simply call 'hosting'.

In any event, cloud naysayers will use today's Amazon failure as a proof point for everything that is wrong with the cloud for years to come.

Cloud is Just Another Word for Sucker by Carla Schroder

Nov 14, 2009 | linuxtoday.com

As much as we warn about privacy, security, and reliability problems in cloud computing, it's coming and we can't stop it. So do we join the cloud party? Heck no.

It seems like it should have some advantages. Geeks back in the olden days used to say that a simple network appliance running hosted applications would be a good thing for unsophisticated users. Pay a monthly fee just like for phone services, use a subsidized terminal, and let the vendor have all the headaches of security, provisioning, system administration, updates, backups, and maintenance. What a boon for the business owner-- outsource to a service provider, no muss, no fuss, and everyone is happy.

Broken Trust

Well here we are on the threshold of this very thing, and now the geeks are complaining and warning against it. Why? Because we like to be perverse? Well maybe that is part of it. But for me the biggest problem is trust. I don't trust many tech vendors because they haven't given me any reasons to trust them, and plenty of reasons to not trust them. Over and over and over and over and over and over and over.

Why would I entrust them with my data when they do not respect my privacy or the privacy of my data? In the US personal privacy is not protected, and vendors who mangle and lose your personal or business data pay no penalty and offer no recourse, other than bearing the brunt of your peeve. Marketers are all about privacy invasion, as much as they can get away with, and collecting, mining, and buying and selling us. Even worse, service providers roll over at the slightest "boo", releasing customer records at toothless DMCA takedown requests, and caving in to law enforcement without even making them go through due process. Where are all those attack lawyers when they can do some good for a change?

No-Nines Reliability

Reliability is a second issue. Google and Skype, to give two famous examples, have distributed datacenters but both have suffered a number of outages and service interruptions. (Speaking of Skype, the excellent columnist J.A. Watson doesn't think much of them.) Even if the cloud vendor has perfect uptime there are many weak links between the customer and the datacenter. In this glorious year 2009 of the 21st century it is still a common recommendation to have two diverse Internet connections. But even if you want to spend the money, the wires are consolidated and have a small number of chokepoints. Like when a fiber cable near Pendleton, Oregon was cut a couple of years ago, and it wiped out much of the telephone and Internet service for Eastern Oregon. Or when backbone providers have spats with each other over peering agreements and teach each other lessons by cutting each other off, leaving customers stranded. So what do we do for redundancy, train some carrier pigeons? Learn ham radio? Interpretive dance?

Performance: Haha

The third problem is why in the heck would any sane person trade in their nice sleek efficient standalone applications for a horrible boggy Web browser abomination with a hundredth of the functionality? I do demanding jobs on my studio computer, both audio production and photo editing. You know what kicks my CPU into the red zone and keeps it there, and eats RAM like popcorn and hits the swap partition until it's crying for mercy? Not loading and editing a gigabyte audio file, or converting a big batch of multi-megabyte RAW files. What brings my whole system to a halt until some half-baked junk script finishes running? Plain old Web surfing and various Web apps I have to use. I'm not keen to buy a desktop supercomputer just to have decent browser performance.

Nobody would even be looking at Web apps if we didn't have all these closed, proprietary file formats and steep barriers to migrating to sane, open platforms and applications.

The cloud, software as a service, hosted applications, whatever you want to call it, is coming. The concepts are useful, but I have little faith in the implementations.

A Practical Alternative

As always in Linux-land, there is a role for the do-it-yourselfer to turn dung into gold. Come back next week and I will tell about this.

[Oct 5, 2009] Technology Story Is Cloud Computing the Hotel California of Tech

Slashdot

Prolific blogger and open source enthusiast Matt Asay ponders whether cloud computing may be the Hotel California of tech.

It seems that data repositories in the form of Googles and Facebooks are very easy to dump data into, but can be quite difficult to move data between.

"I say this because even for companies, like Google, that articulate open-data policies, the cloud is still largely a one-way road into Web services, with closed data networks making it difficult to impossible to move data into competing services. Ever tried getting your Facebook data into, say, MySpace? Good luck with that.

Social networks aren't very social with one another, as recently noted on the Atonomo.us mailing list. For the freedom-inclined among us, this is cause for concern.

For the capitalists, it's just like Software 1.0 all over again, with fat profits waiting to be had. The great irony, of course, is that it's all built with open source."

DuckDodgers: Re: Simple

Your own servers don't necessarily cost much more. Check the pricing at Amazon http://aws.amazon.com/ec2/ [amazon.com] for a 'Large Instance' with "7.5 GB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2 Compute Units each), 850 GB of instance storage, 64-bit platform". A reserved instance costs $910 per year plus $0.12 per hour, or $1961 per year.

I can assemble a nice rackmount 1U RAID server with better computing resources than that for the same price. Multiply that by a few servers and a few years, and your cost savings over your own hosting / racks / UPSes aren't going to be that high. And of course, nothing stops Amazon from raising the prices.

Also, EC2 gives the user no recourse if the system goes down for any reason, or if your data is lost. http://aws.amazon.com/agreement/ [amazon.com] You get a 10% discount if the system uptime is less than 99.95%, but that's the extent of your rights. If you screw up, it's your fault. If Amazon screws up, it's their fault but your problem.

Now, the nice thing about Cloud Computing is scaling. When your magic startup starts generating massive throughput, you can just add resources to your EC2 allotment as needed.

But for small deployments that don't anticipate sudden rapid growth, I don't get the appeal.
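The arithmetic behind DuckDodgers' comparison is easy to redo with your own numbers; a small Python sketch using the 2009-era figures quoted above, with the price of the self-built 1U box taken as an assumption rather than a quote:

    # Reproduce the rough cost comparison from the comment (2009-era prices).
    reserved_fee = 910.0          # one-year reserved 'Large Instance' fee quoted above
    hourly_rate = 0.12            # per-hour charge quoted above
    hours_per_year = 24 * 365

    ec2_per_year = reserved_fee + hourly_rate * hours_per_year      # about $1,961
    own_server = 2000.0           # assumed price of a comparable 1U RAID box
    years = 3

    print(f"EC2 over {years} years: ${ec2_per_year * years:,.0f}")
    print(f"Own hardware (one-off): ${own_server:,.0f} plus power, space, and admin time")

As drsmithy notes below, the hidden terms are the ones that dominate: redundancy, bandwidth, backups and staff time, not the box itself.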

drsmithy: Re: Simple

I can assemble a nice rackmount 1U RAID server with better computing resources than that for the same price.

But you can't make it redundant, back it up, give it high-bandwidth connectivity, or maintain it for that price. The hardware itself is by far the cheapest part of any server room.

But for small deployments that don't anticipate sudden rapid growth, I don't get the appeal.

Because building and maintaining any remotely reliable IT infrastructure is expensive and requires expertise that is, for most companies, utterly irrelevant to their core business.

DuckDodgers: Re:Simple (Score:4, Insightful)

Thanks. You make some excellent points. I admit, we spend a lot of time and effort (meaning, money) maintaining machines, connectivity, backups, and redundancy (RAID for data redundancy, in addition to backups; UPS and a generator for power redundancy; and separate ISPs for connection redundancy). It's a huge expense for a tiny company.

I'm just very nervous about entrusting the company meat and potatoes to an external business. If our stuff goes down because I screwed up - and it has happened - I can try to fix it immediately. If our power or internet connectivity goes down, I can work with the corresponding vendor to get it restored. If something goes wrong with my Cloud Computing setup, I am at the complete mercy of their technical staff. Instead of actively working to solve the problem, all I can do is stay on the phone with their tech support and hope they fix it. Naturally, I'd rather be working than waiting.

And of course, I'm at the mercy of the vendor. If they decide to shut down, I have to scramble to find replacement as quickly and painlessly as possible. If they decide to raise prices, I'm looking at an instant drop in operating income or else the expense of moving to another vendor.

I'm not saying the cloud is the wrong way to go. I'm just saying that I am nervous.

eldavojohn: Re: Yes (Score:5, Informative)

If you mean a big hit that everyone knows.

I don't think that's what they meant by turning Hotel California into an adjective or analogy.

I believe the one-way street attribute would probably be the easiest way to describe it, although there are more subtle caveats to 'Hotel California' as a lyrical work. Though interpretations have been numerous (I've heard it compared to prison), the writers describe it as an allegory about hedonism and self-destruction in Southern California [wikipedia.org] -- especially the music industry (that we all know and love). From the Wikipedia entry:

"Don Henley and Glenn wrote most of the words. All of us kind of drove into LA at night. Nobody was from California, and if you drive into LA at night... you can just see this glow on the horizon of lights, and the images that start running through your head of Hollywood and all the dreams that you have, and so it was kind of about that... what we started writing the song about. Coming into LA... and from that Life In The Fast Lane came out of it, and Wasted Time and a bunch of other songs."

So, if I may elaborate, the analogy may be trying to describe cloud computing as something you are almost forced into because it would seem stupid not to take it... but then you start to realize that it's not everything it was made out to be at the beginning. You are promised success and all the resources imaginable, but in the end, when you realize you don't have control over the situation and your data or privacy becomes seriously important to you, that control is nowhere to be found and the data is irreclaimable. The song's final lyric before the guitar solo and double-stop bass: "You can checkout any time you like/But you can never leave."

No, this isn't unique, Lynyrd Skynyrd [wikia.com] felt the same way as did The Kinks [wikia.com] and I bet if I sat and thought I'd come up with much much more. I guess you'd be better off explaining it outright than calling cloud computing Hotel California but the English language allows one to play and invent I guess. The author might consider the younger crowds though for this piece.

FornaxChemica: verbal meme?

Funny, I saw this same Hotel California analogy a few days ago in a tweet [twitter.com]. Where did it all start? These people must have seen it somewhere else; this is some kind of verbal meme. It happens often, I've noticed: a word or a comparison you wouldn't normally use suddenly spreads all around the Net.

Anyway, I thought this one was referring to the lyrics of Hotel California and specifically to this line: "You can check out any time you like, but you can never leave".

Applied to cloud computing, I also like that one: "This could be Heaven or this could be Hell"

Required Reading: Does IT Matter? Carr claims it doesn't - Page 3 - Studentwebstuff Forums

"Is IT a commodity? Not exactly. Though some of the pieces may be fungible, the whole certainly is not. Have you ever worked for or with a company that didn't do IT well? If so, you know that good IT may not be sufficient for a great business, but it sure is necessary. "
ZHan, alumni
Does it matter?

Carr said that as IT becomes the infrastructure of commerce, it becomes invisible; it no longer matters.

"Actually, IT is like electric. Imaging how can we do our business with out electric? IT isn't particular useful for us to gain competitive advantage, since it's already so cheap and all the companies can use it. "But how to use IT effectively is still a scarce skills".

That skill is more important than IT itself. IT cannot provide competitive advantage by itself anymore; the people who can use it effectively are what makes the difference. A company may spend a lot of money on data storage and customer transaction data, but if that data is never analyzed, it is useless. A well-educated professional analyst can turn this data into very useful information, and the company can reap a big reward. The world is full of data, but short of knowledgeable people who can process it.

IT is just a tool; it cannot ensure that you do better. If you want to do better than others, you need to learn how to process the data and extract useful information, and you also need to keep upgrading your ability to use that information. That is the hardest part. IT is only a foundation. The most important thing is to bring all the related information together and make its use rational and professional. For example, we use historical data to support sales forecasting, production planning, delivery and storage planning. We use this information to place orders and frequently compare it against actual results, so we can make the right adjustments.


NYang, Alumni

IT does matter

New technologies will continue to give companies the chance to differentiate themselves by service, product feature, and cost structure for some time to come. The first mover takes a risk and gains a temporary advantage (longer if there are follow-on possibilities). The fast follower is up against less risk but also has to recover lost ground.

Grid computing, standardization of components, and open systems, far from stifling differentiation, provide a stable platform to build on and offer new ways of differentiating, either by cost, structure, product, or service. Just as literacy stimulated innovation, so do open systems and grids. Outsourcing the commodity infrastructure is a great way to control costs, build competence, and free up resources, which can be used to combine data bits in creative ways to add value. Relatively bulletproof operational reliability will be a key part of the price of success.

IT plays an important role in business today, more important in some cases than in others. In some scenarios, IT is axiomatically relevant and in others, its role is on the periphery. Ultimately, however, IT does indeed matter and so, therefore, do you.

Quote:
Is IT a commodity? Not exactly. Though some of the pieces may be fungible, the whole certainly is not. Have you ever worked for or with a company that didn't do IT well? If so, you know that good IT may not be sufficient for a great business, but it sure is necessary.
http://www.computerworld.com/s/artic...does_so_matter_

http://www.nicholasgcarr.com/articles/matter.html

xxx:

Valid Point


My initial reaction to the message Carr presents in his article was complete disagreement. To me, claiming that something with such a strong impact on business as IT has in our world today doesn't matter seemed almost ridiculous. However, after considering his findings and comparisons, I believe the importance of IT within a business depends on how you view or frame the topic. In some ways Carr's point is valid: to a certain extent, IT systems are becoming standardized and therefore have a reduced impact in providing a competitive advantage. In this sense you could argue that IT has a REDUCED impact on business, which from my perspective is closer to what Carr meant than saying it simply does not matter.

Having said that, I go back to my initial view that IT is incredibly important in business, given how much of modern business practice is built around technology. The fact that most businesses have incorporated IT into their systems in one form or another does not show that information technology is becoming obsolete, but rather that it is increasingly important. Additionally, even saying that IT is becoming standardized seems absurd, because at the rate technology is advancing there are always new practices and applications to adopt that can improve the business model. This in turn means that firms now have an obligation to keep their systems reasonably up to date just to maintain the status quo. Beyond that, how much a company should invest in IT varies depending on many things, including the industry it is in and perhaps its budget for the year.

While considering this subject I read a few articles to help form my opinion. One that I found interesting was an article on the importance of phone systems in today's commercial environment. I thought it was worth reading because it takes a unique view of how IT affects different business types. Here is a link for those interested:

http://ezinearticles.com/?The-Import...rce&id=2811550

[Jul 23, 2009] Twitter's Google Docs Hack - A Warning For Cloud App Users - News - eWeekEurope.co.uk By Eric Lundquist

20-07-2009

Twitter lost its data through a hack (www.techcrunch.com/2009/07/16/twitters-internal-strategy-laid-bare-to-be-the-pulse-of-the-planet/), and TechCrunch went on to publish much of the information.

The entire event - not the first time Twitter has been hacked into through cloud apps - sent the Web world into a frenzy. How smart was Twitter to rely on Google applications? How can Google build up business-to-business trust when one hack opens the gates on corporate secrets? Were TechCrunch journalists right to publish stolen documents? Whatever happened to journalists using documents as a starting point for a story rather than the end point story in itself?

Alongside all this, what are the serious lessons that business execs and information technology professionals can learn from the Twitter/TechCrunch episode? Here are my suggestions:

  1. Don't confuse the cloud with secure, locked-down environments. Cloud computing is all the rage. It makes it easy to scale up applications, design around flexible demand and make content widely accessible [in the UK, the Tory party is proposing more use of it by Government, and the Labour Government has appointed a Tsar of Twitter - Editor]. But the same attributes that make the cloud easy for everyone to access make it, well, easy for everyone to access.
  2. Cloud computing requires more, not less, stringent security procedures. In your own network would you defend your most vital corporate information with only a username and user-created password? I don't think so. Recent surveys have found that Web 2.0 users are slack on security.
  3. Putting security procedures in place after a hack is dumb. Security should be a tiered approach. Non-vital information requires less security than, say, your company's five-year plan, financials or salaries. If you don't think about this stuff in advance you will pay for it when it appears on the evening news.
  4. Don't rely on the good will of others to build your security. Take the initiative. I like the ease and access of Google applications, but I would never include those capabilities in a corporate security framework without a lengthy discussion about rights, procedures and responsibilities. I'd also think about having a white hat hacker take a look at what I was planning.
  5. The older IT generation has something to teach the youngsters. The world of business 2.0 is cool, exciting... and full of holes. Those grey haired guys in the server room grew up with procedures that might seem antiquated, but were designed to protect a company's most important assets.
  6. Consider compliance. Compliance issues have to be considered whether you are going to keep your information on a local server you keep in a safe or a cloud computing platform. Finger-pointing will not satisfy corporate stakeholders or government enforcers.

Cloud may squeeze margins, says Microsoft exec

Rough Type Nicholas Carr's Blog

Sandy:

I'm sorry guys, I just don't get it with "Cloud Computing"! Really... think about it... All this talk about dispensing with the in-house IT staff and not having to buy and maintain servers?? Passing off the application to the "Cloud"?

Come on! All this would have held true in the days when companies were spending 250K for a mini mainframe, but you can buy one heck of a server for 10,000 bucks; enough of a machine to run a 40-million-a-year company on. Desktops are under $1,000. Believe me, it's not a compelling argument for cloud computing.

Also, most companies have a hardware and network person visit them by the hour when needed. Even for a company using MS Server 2003, it's bullet-proof enough that it just sits there and works. It may not be Linux or Unix, but it's good enough for a reasonable price. So the "IT guy" isn't going to break the bank. Computing has become a commodity, so there ARE no big savings there - PERIOD!

Now let's say you rent software like Netsuite or Salesforce.com... You pay and you pay! Every two years you'll be paying the equivalent of buying the product outright, one time. It doesn't even make financial sense. Software rental is not cheap. And what if the network or Internet provider goes down? At least with a desktop system you can still work on a local version.

Most desktop systems are rapidly becoming cloud systems anyway. Take Foundation 3000 accounting software from Softrend Systems Inc. (http://www.softrend.com): you buy this software only once and you get both desktop features and cloud features included. It's not an either/or decision.

Using Online features in software in no way makes the desktop redundant. What the heck do most people use to connect to the Internet with...? A desktop computer! Some software will be Internet-based and some will be desktop based. Most computing platforms of the future will be a mix of both. What we're really talking about here is the software pricing model - not whether your application is sitting on "somebody's" server out in a cloud somewhere.

So let's bring the discussion back to Earth and get our heads out of the Clouds!

Come on... let's debate this!

Posted by: Sandy

I agree with Greg that there is a lot of "cart before the horse" here. I wonder if current build-outs aren't going to create a lot of embarrassingly dark data centers with 0-ROI equipment rusting on the racks.

Tom Lord

Sandy:

So, for your $10K server you'll need space, climate control, software vendors supporting your platform in future years, commodity hardware vendors supporting your platform in future years (that's automatic with Microsoft --NNB), a backup solution (can be automated --NNB), etc. You can get by with hourly support for many things, but to make all of the purchases you need to make you'll need something like an (at the very least) part-time CIO.

There are a lot of prisoner's dilemmas in those costs. If nobody moves to the cloud, your $10K server solution is a level playing field and the various vendors will compete to make it easy for you. If there's a spike of growth in the cloud at some point, and meanwhile the big-gorilla vendors are pressuring third party software vendors towards the cloud, who is going to want to sell stuff for your $10K server? (that's a theoretical problem so far -- Dell is still here, Microsoft and IBM will sell you software). In addition to supply problems, for the stuff you can get, you'll be paying larger and larger slices of the vendor's costs for retail marketing to you.

Yet, I agree with you that in-house and private ownership isn't going away anytime soon. Just that it might shift so that what you own looks more and more like a low-flying cloud (what we call "fog" in the Bay Area).

At the very high-end, as well: buying rather than subscribing, sure. But "cloud form."

Sometimes under-emphasized is that this "cloud stuff" is, as much as anything, a reconsideration of the entire software stack. Firms might still buy their own hardware, hire hourly support, and so on but software vendors will treat that "fog" as just an extension of the "cloud": additional nodes on which to install their wares.

Also overlooked: capability limitations have caused many first generation web hosted apps to be "dumbed down" compared to locally hosted apps that they replace. That might be permanent not because capability won't grow but because firms will discover that it lowers training costs, increases the portability of skills in the labor force, and is "good enough". The cloud can (at least appear to) save money that way, too.

-t

Posted by: Tom Lord at May 20, 2008 12:42 PM

Application Delivery: Nicholas Carr's Simplistic View of Cloud Computing

"Cloud Computing might well be a dominant force in the provision of IT services some time in the future. Cloud Computing, however, involves the sophisticated interaction of numerous complex technologies. Carr would have better served the industry if he had spent some attention identifying the impediments that inhibit Cloud Computing and provided his insight into when those impediments will be overcome."
April 27, 2009

Nicholas Carr's Simplistic View of Cloud Computing

Nicholas Carr is at it again. After the dot-com implosion, Carr wrote an article in the Harvard Business Review entitled "IT Doesn't Matter". In the article, Carr argues that since information technology is generally available to all organizations, it does not provide a permanent strategic advantage to any company. One of the reasons that I find Carr's argument to be simplistic is that it assumes that all companies are equally adept at utilizing IT to their advantage. This is clearly not the case. Another reason is that he seems to dismiss the idea of using IT to gain a strategic advantage that, while not permanent, will be in effect for years. The IT organizations that I deal with are quite pleased if they can help their company get a two-year advantage over their competitors.

Carr recently authored "The Big Switch" and again his arguments are simplistic. The book begins with a thorough description of how the electric utilities developed in the US. He then argues by analogy that Cloud Computing is the future of IT, the analogy being that the provision of IT services will evolve exactly the same way as the provision of electricity did.

I have two primary concerns with Carr's argument. The first is the fact that any argument by analogy is necessarily weak. The generation of electricity and the provision of IT services may well have some similarities, but they are not the same thing. My second concern is that, the way the book reads, Carr has already determined that the future of IT is Cloud Computing and is out to convince the reader of that. There is no real discussion in the book of the pros and cons of Cloud Computing, merely the repeated assertion that the future is Cloud Computing. Perhaps the closest that Carr comes to discussing the pros of Cloud Computing is when he quotes some anonymous industry analyst as saying that Amazon's cost of providing Cloud Computing services is one tenth of what it would cost a traditional IT organization. There is, however, no citation or backup of any kind to allow us to better understand that assertion.

More important, there is no discussion in the book of what has to happen from a technology perspective to make Cloud Computing viable. Cloud Computing might well be a dominant force in the provision of IT services some time in the future. Cloud Computing, however, involves the sophisticated interaction of numerous complex technologies. Carr would have better served the industry if he had spent some attention identifying the impediments that inhibit Cloud Computing and provided his insight into when those impediments will be overcome.

Jim Metzler

Nick Carr: The many ways cloud computing will disrupt IT, by Tom Sullivan

Mar 25, 2009 | InfoWorld Yahoo! Tech

Whether you prefer the term "utility computing" or "the cloud," the industry is headed in that direction, however slowly, and the transition will have a multifaceted impact on IT in some ways productive, others unpleasant. And it will strike to the heart of the very technology professionals who provide a significant chunk of what is today's enterprise IT.

Nicholas Carr, author of the tech-contentious Harvard Business Review article "IT Doesn't Matter" and, more recently, the book "The Big Switch," spoke with InfoWorld Editor at Large Tom Sullivan about how enterprises will transition to a more utility-like model for IT, why a small cadre of companies is gobbling up 20 percent of the world's servers and the unheard-of possibilities that creates, how Web 2.0 replicates business fundamentals, as well as the human factor in all of this.


InfoWorld: There are those who would say that The Big Switch is something of a shift away from your position in "IT Doesn't Matter." Is that true?
Nick Carr: Even as far back as my original HBR article "IT Doesn't Matter," one of the basic arguments was that more and more of the IT that companies are investing in and running themselves looks a lot like infrastructure that doesn't give you a competitive advantage. I even made an analogy with a utility that everybody has to use but becomes over time pretty much a shared infrastructure. And so what happens when most of what companies use is indistinguishable from what their competitors use? Doesn't that mean we'll move toward more of a shared infrastructure, more of a utility system?

The rise of cloud computing in general reflects the fact that a whole lot of the IT that companies have been investing in is really better run centrally and shared by a bunch of companies than it is maintained individually. Now, having said that, I see it as a logical next step from "IT Doesn't Matter." On the other hand, you could say that if a company is smarter in how it takes advantage of this new technological phenomenon, it might at least get a cost advantage over its competitors. So at that level, you can say there's some tension between the idea that you can't get an advantage from technology and what we're seeing now with cloud computing.

IW: A shift toward IT as a utility will be long-term. It's happening in really small ways, such as Salesforce.com, but in the here-and-now, not a lot is going on. So do you have a timeline for this?
NC: It's a 10- to 15-year period of transition. Particularly if you look at large companies, big enterprise users, that's probably about the right timeframe. They have huge scale in their internal operations, and it's going to take quite some time for the utility infrastructure to get big enough and efficient enough to provide an alternative to big private datacenters. I completely agree that we're in a transition period that's going to take some time. And I'd say at this moment that the hype about cloud computing has gotten a bit ahead of the reality. Still, the uptake of cloud services over the last year has moved faster than I would have expected. So there is a lot going on, but there's still a long way to go.

IW: Indeed, and there's also a human element at play. IT folks are in the position to push back on cloud services, if only to preserve their own jobs. How do you envision that playing out?
NC: Well, there is a basic conflict of interest that IT departments face as they think about the cloud, and that's true, of course, of any kind of internal department that faces the prospect of being displaced by an outside provider. We also saw some of this with outsourcing as well, so it's not necessarily new. What I think is more powerful than the resistance that may come from IT departments looking to protect their turf is the competitive necessity companies face to reduce the cost of IT while simultaneously expanding their IT capacity -- and the cloud offers one good way to do that. What I mean by competitive pressure is if one of your competitors moves to more of a cloud operation and saves a lot of money, then whether your IT department likes it or not, you're going to have a competitive necessity to move in that same direction. Over time, that is going to be the dynamic of why cloud computing becomes more mainstream.

IW: And that leads to fewer IT folks, even though some say cloud services won't eliminate IT jobs?
NC: You hear that all the time, not only from vendors but IT managers as well, who say, "If we can get rid of these responsibilities, then we'll be able to redeploy our staff for more strategic purposes." That's a myth, obviously. Any time you get rid of a job, then the person in that job has to prove that their continued employment is worth it for the company. When you get rid of a job, more often than not, the employee goes out the door, particularly when you have an economy like today in which companies are looking not only for greater efficiencies but also to reduce their staff.

IW: How do you see the current recession reshaping IT?
NC: I have mixed feelings. On the one hand, it continues or even intensifies the cost pressures that have been on CIOs and IT departments over the last decade, and that would seem to imply there will be a search for more efficient and less costly ways to do the things you need to keep your business running, whether that's purely at the level of computing and storage capacity, or how you get a particular application in. From that standpoint, it should promote the use of cloud services simply because at the outset they're much cheaper and don't require capital outlays.

On the other hand, whenever you have the kind of severe economic downturn we have right now, companies tend to get very conservative and very risk-averse and so they might be less willing to experiment with a new model of IT. So there are these two contrary forces at work: One pushing companies to find more efficient ways to do things, and the other kind of this sense that "let's batten down the hatches and not do any experiments." And I don't know how those two forces play out. Earlier signs, if you look at Salesforce's results, which have held up pretty well so far, would indicate that maybe it is pushing companies to move more quickly to explore or even buy into the cloud, but it's too early to make a definitive statement on that.

IW: I read your recent blog post in which you state, in short, that a handful of big companies -- Microsoft, Yahoo, Google, and Amazon -- are buying about 20 percent of the servers sold today, and that this fact, along with more powerful edge devices, is leading to a new architecture that makes possible applications unheard of before now. What are some examples?
NC: I wish I knew. I don't mean that in a glib way. We don't know yet. The New York Times' use of Amazon.com is a small but telling example of what happens when you radically democratize computing so that anyone has access at any moment to supercomputer-type capacity and all the data storage they need. So it's about what can you do with that? I think you can do a whole lot, and smart people will do a lot. But almost by definition we don't know what it is yet because it hasn't been done. But if you think of constraints on IT experimentation within companies, a lot of them have to do with the fact that traditionally there's been a lot of upfront expenses required to build the capacity to do experiments, to write applications that may or may not pay off, and so that cost squelches innovation. And suddenly those costs are going away, so there can be a lot more experimentation going on than we've seen in the past. That's just looking within IT departments' purview.

Beyond that, when you look at Amazon's Kindle, the most interesting thing about it isn't that it's another e-book reader, but that it has a perpetual Internet connection essentially built into the product, by which I mean there's no extra cost; it's just a feature of the product. What happens when more and more products have Internet services essentially bundled into them and you don't even have to think about it? It seems inevitable that's coming, and companies are going to have to think about how that changes the nature of products and services when they're always connected to this incredibly powerful cloud.

IW: Now, this idea that such a small number of companies are buying such a large chunk of servers -- what are the implications?
NC: Well, first of all this comes from a guy at Microsoft [Rick Rashid, senior vice president who oversees Microsoft Research]. So I'm assuming he has good information and that's an accurate number. But if that is, 20 percent is a huge chunk of the server market, and to have that consolidated into the hands of a few purchasers in what's only the last few years, really shows a fundamental shift in the nature of that industry that more and more of the product is going to be consumed by fewer and fewer companies. That radically changes the server industry.

Is it surprising to me that that trend is underway? No, it reflects the fact that more and more computing is done in central datacenters now. Forget about the corporate world. If you look at how individuals use their PCs or smartphones today, a huge amount of stuff that used to require buying a hard drive is now done out in the cloud. All Web 2.0 is in the cloud. It doesn't surprise me because it reflects how people view the way their computers work. But if it's already at 20 percent that seems pretty remarkable in such a short span of time.

IW: In what ways are you seeing enterprises use Web 2.0 today?
NC: The value of that model isn't limited to college kids on Facebook trying to find dates. There's a lot of opportunity for companies to take this Web 2.0 model that builds on shared systems, the ability to provide the user with a lot of information, and tools that enable them to control what information they share and who they share it with at any given moment. It really replicates the fundamental aspects of business organizations where a lot of what you do is figure out which colleagues have information that you can use, how to share it with others, how to get information into the right hands.

We really haven't seen powerful social networking tools evolve for individual businesses. It will be a generation shift. People who are so plugged into social networks at home or at school, they're going to want those same capabilities at work. It's really driven by the user because it upsets the traditional IT apple cart. IT departments and staffers will generally drag their feet and then will play catch-up.

IW: Vivek Kundra, the new federal CIO, has said that the personal technologies he uses are so much better than what he was using professionally that he just had to adopt things like YouTube for the D.C. city government…
NC: Quite a while ago, I was interviewing Marc Benioff about the origins of Salesforce.com and he had a very similar story. He was working at Oracle at the time but using things like Amazon.com online. And he said, "This is really powerful and I can do all sorts of customized stuff with it. Why can't I do this with enterprise applications?" In the story he tells, that was the inspiration for Salesforce. You can relate to that, because when you compare most corporate applications to the dime-a-dozen services you find online, computing is much easier through the services you use online every day than it is going through your traditional corporate applications.

IW: One last question for you. What is the most significant thing enterprise IT shops should brace themselves for?
NC: The big thing they'll have to brace themselves for is that the functions that until now have accounted for most of their spending and most of their hiring are going to go away, such as all the administrative and maintenance jobs that were required to run complex equipment and applications on-site. This isn't going to happen overnight, but much of that is going to move out to the utility model over time. That doesn't mean IT shops won't continue to exist and have important functions -- they might even have some more important functions -- but it does mean that their traditional roles are going to change and they're going to have to get used to, I think, having a lot fewer people and probably having considerably lower budgets. Again, I'm talking about change that will play out in 10 years, not change that's going to happen in two years.

Data center operators to share energy tips, test results next week by James Niccolai

June 20, 2008 (IDG News Service)

Some of the biggest names in the data center industry will gather in Silicon Valley next week to discuss the results of real-world tests intended to help identify the most effective ways to reduce energy consumption in data centers.

Operators of some very large data centers, including the U.S. Postal Service, Yahoo and Lawrence Berkeley National Laboratory, will present about a dozen case studies documenting their experiences with technologies and practices for improving energy efficiency. They include wireless sensor networks for managing air flow in data centers, modular cooling systems that reduce heat spots around densely packed servers and high-efficiency power transformers.

One of the goals is to produce real-world data that will help other organizations decide which technologies they can implement to reduce power consumption and what savings they can hope to achieve from them, said Teresa Tung, a senior researcher at Accenture, which plans to publish a free report on its Web site next Thursday that pulls together the results.

"If you want to make the case for your data center to use some of these initiatives, it's nice to have real-world data you can point to and say, 'Here are the savings that somebody got,' " she said.

The project has been organized by the Silicon Valley Leadership Group, which represents some of the area's biggest technology companies, along with Berkeley Labs. Other companies hosting tests in their data centers include Oracle, NetApp, Synopsys and Sun Microsystems, which is hosting the event next Thursday. The event is called the Data Center Energy Summit. The data centers have been testing technologies from SynapSense, IBM, Cassatt, Powersmiths, Power Assure, Liebert, APC, Modius, Rittal and SprayCool.

The vendors' motives for taking part are not only altruistic. As organizations try to expand their data centers and add more powerful servers, energy consumption could be a major inhibitor to growth in the tech industry and the economy as a whole. The event is a chance to promote technologies that allow organizations to keep growing their data centers.

The effort by industry may also help ward off potential government regulation in this area. "If there were to be regulation and certification, we definitely want it to be couched in real results that we know can be achieved by adopting these best practices and technologies," Tung said.

In a report last August (PDF here), the U.S. Environmental Protection Agency estimated that in 2006, data centers accounted for about 1.5% of total U.S. electricity consumption, or enough to power about 5.8 million homes. It forecast that the energy consumed by data centers would almost double by 2011 if current trends continued, although it said widespread use of best practices and "state of the art" technologies could reverse the growth.

Part of Accenture's role is to compare the test results with the projections in the EPA's report. Andrew Fanara, head of the EPA team that authored the report, said next week's findings would "complement" the EPA research and help validate its projections. Fanara said he expected to attend the summit next week, along with representatives from the Department of Energy.

Participants said the event is unique because of the cooperation from data center operators, who typically are secretive about their operations. "This level of transparency is rare in the data center industry, but it's a sign of how committed the participants are to increasing energy efficiency," said Jim Smith, vice president of engineering at Digital Realty Trust, another of the data centers taking part.

All the technologies being tested are available today. The case studies cover basic best practices, such as improving air flow management and consolidating server equipment, as well as emerging technologies like the wireless sensor networks. The case studies will all be published at no charge, along with Accenture's consolidated report.

Sun tested modular cooling products from five vendors, and Accenture will present a side-by-side comparison of the results, Tung said. Digital Realty Trust, Yahoo and NetApp tested air economizers, which use outside air to supplement cooling systems when the outside air is cold enough, a technique also called "free cooling."

The U.S. Postal Service is in the process of upgrading a data center in San Mateo, Calif., that will use high-efficiency power transformers, also called power distribution units, or PDUs, made by Powersmiths. Its case study will compare the operating losses of transformers already in place with the new high-efficiency products, said Peter Ouellette, Powersmiths' district manager for Northern California.

Not all the technologies were tested thoroughly. Only two weeks of data were collected for the free-cooling equipment, so the results had to be extrapolated for the year, Tung said. But she thinks most of the data will be valuable. "In each case, we've given the 'before' and the 'after,' so the before sets the baseline and the after shows what was actually saved," she said.

The summit is one of several initiatives to address energy use in data centers. Others include the Green Grid, a vendor-led effort. The Uptime Institute and McKinsey published a report on the subject in May, and the EPA is working on an Energy Star specification for both servers and data centers.

Summit organizers said the event plugs a gap between recommending energy-efficient technologies and documenting the savings from their usage.

[Jun 23, 2008] DEMYSTIFYING THE CLOUD

June 23, 2008 | InformationWeek

When people talk about "plugging into the IT cloud," they generally have something very simple in mind -- browser access to an application hosted on the Web. Cloud computing is certainly that, but it's also much more. What follows is the longer, more detailed explanation.
With so much happening in the technology industry around cloud computing, InformationWeek set out to define the megatrend in a way that helps IT professionals not only understand the nuances, but also make informed decisions about when and where to use cloud services in lieu of on-premises software and systems. Cloud computing represents a new way, in some cases a better and cheaper way, of delivering enterprise IT, but it's not as easy as it sounds, as we learned in a discussion with a few yet-to-be-swayed CXOs. The venue was the recent Enterprise 2.0 conference in Boston, where InformationWeek and TechWeb, our parent company, brought together senior technologists from the California Public Utilities Commission, Northeastern University in Boston, and Sudler & Hennessy to engage leading cloud vendors in an open forum on The Cloud.

Everyone agreed that cloud services such as Amazon Web Services, Google Apps, and Salesforce.com CRM have become bona fide enterprise options, but there were also questions about privacy, data security, industry standards, vendor lock-in, and high-performing apps that have yet to be vaporized as cloud services. (For a recap of that give and take, see "Customers Fire A Few Shots At Cloud Computing," June 16, p. 52; informationweek.com/1191/preston.htm.)

If we learned anything from our Enterprise 2.0 cloud forum, it's that IT departments need to know more. Our approach here is to look at cloud computing from the points of view of eight leading vendors. In doing so, we're leaving out dozens of companies that have a role to play, but what we lack in breadth, we hope to compensate for in depth.

And this analysis is just the beginning of expanded editorial coverage by InformationWeek on cloud computing. Visit our just-launched Cloud Computing blog on InformationWeek.com, and sign up for our new weekly newsletter, Cloud Computing Report, at informationweek.com/newsletters. We're also developing video content, an in-depth InformationWeek Analytics report, and a live events series in the fall.

Where does cloud computing fit into your company's strategy? We'd love to hear from you.

-John Foley ([email protected])

AMAZON

Amazon made its reputation as an online bookstore and e-retailer, but its newest business is cloud computing. One of the first vendors in this emerging market more than two years ago, Amazon is a good starting point for any business technology organization trying to decide where and when to plug into the cloud.

Amazon's cloud goes by the name Amazon Web Services (AWS), and it consists, so far, of four core services: Simple Storage Service (S3); Elastic Compute Cloud (EC2); Simple Queuing Service; and, in beta testing, SimpleDB. In other words, Amazon now offers storage, computer processing, message queuing, and a database management system as plug-and-play services that are accessed over the Internet.

A tremendous amount of IT infrastructure is required to provide those services -- all of it in Amazon data centers. Customers pay only for the services they consume: 15 cents per gigabyte of S3 storage each month, and 10 to 80 cents per hour for EC2 server capacity, depending on configuration.

Already, AWS represents three of the defining characteristics of the cloud: IT resources provisioned outside of the corporate data center, those resources accessed over the Internet, and variable cost.
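To make the "variable cost" point concrete, here is a minimal sketch of how a monthly AWS bill would be computed at the per-unit rates quoted above; the workload figures (500 GB stored, two always-on instances at the lowest hourly rate) are hypothetical and chosen only for illustration:

    # Hypothetical monthly AWS bill at the 2008-era rates quoted above.
    # Only the per-unit prices come from the text; the workload is made up.
    S3_PER_GB_MONTH = 0.15         # dollars per gigabyte stored per month
    EC2_PER_HOUR = 0.10            # dollars per hour, cheapest configuration

    stored_gb = 500                # hypothetical data set size
    instances = 2                  # hypothetical number of always-on servers
    hours_in_month = 30 * 24       # 720 hours in a 30-day month

    storage_cost = stored_gb * S3_PER_GB_MONTH                 # $75.00
    compute_cost = instances * hours_in_month * EC2_PER_HOUR   # $144.00
    print(f"Monthly bill: ${storage_cost + compute_cost:,.2f}")  # -> $219.00

The bill scales up and down with actual usage, which is the "variable cost" characteristic: there is no capital outlay, only a running operating expense.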

Amazon's first cloud service was S3, which provides unlimited storage of documents, photos, video, and other data. That was followed by EC2, pay-as-you-use computer processing that lets customers choose among server configurations.

Why is Amazon moving so aggressively into Web services? In its rise to leadership in e-commerce, the company developed deep technical expertise and invested heavily in its data centers. Now it's leveraging those assets by opening them to other companies, at a time when many CIOs are looking for alternatives to pumping more money into their own IT infrastructures. "What a lot of people don't understand is that Amazon is at heart a technology company -- not a bookseller or even a retailer," says Adam Selipsky, VP of product management and developer relations for AWS.

Developers -- defined as anyone, from individuals to the largest companies, who signs up for AWS -- are glomming onto Amazon's infrastructure to develop and deliver applications and capacity without having to deploy on-premises software and servers. More than 370,000 developers are on board.

Amazon Web Services weren't aimed initially at big businesses, but enterprises are tapping in for the same reasons that attract small and midsize businesses -- low up-front costs, scalability up and down, and IT resource flexibility. To better support large accounts, Amazon began offering round-the-clock phone support and enterprise-class service-level agreements a few months ago. For instance, if S3 availability falls below 99.9% in a month, customers are entitled to at least a 10% credit. Amazon isn't foolproof -- its consumer-facing Web site recently suffered a series of outages and slowdowns.
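For scale, here is a minimal sketch of what that 99.9% monthly availability threshold translates to in downtime, assuming a 30-day month; the 10% figure is the service credit quoted above, not compensation for business impact:

    # What the 99.9% monthly S3 availability threshold works out to.
    # A 30-day month is assumed; the 10% credit figure is from the text above.
    minutes_per_month = 30 * 24 * 60              # 43,200 minutes
    allowed_downtime = minutes_per_month * (1 - 0.999)

    print(f"Downtime allowed before a credit is owed: {allowed_downtime:.1f} minutes")
    # -> 43.2 minutes; beyond that the customer gets at least a 10% service
    #    credit, which says nothing about the cost of the outage itself.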

Amazon hasn't morphed into a software-as-a-service vendor, but startups and other software developers are using AWS to offer their own flavors of SaaS. They include Vertica, which sells S3-based data warehouses, and Sonian, which built its archive service on Amazon infrastructure.

GOOGLE

Google built a supercharged business model around searching the Internet. Now it's opening its cloud to businesses in the form of application hosting, enterprise search, and more.

In April, Google introduced Google App Engine, a service that lets developers write Python-based applications and host them on Google infrastructure at no cost with up to 500 MB of storage. Beyond that, Google charges 10 to 12 cents per "CPU core hour" and 15 to 18 cents per gigabyte of storage. This month, Google disclosed plans to offer hosted enterprise search that can be customized for businesses.

Yet Google, like Amazon, has demonstrated the risks of cloud computing. Google App Engine last week was crippled for several hours. Google blamed the outage on a database server bug.

For end users, there's Google Apps -- Web-based documents, spreadsheets, and other productivity applications. Google Apps are free or $50 per user annually for a premium edition. Microsoft's PC-based Office 2007 suite, by comparison, costs up to $500 per user.
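A naive break-even sketch of those list prices (it deliberately ignores support, hardware, upgrade cycles, and feature differences, so it is only a rough illustration):

    # Break-even point for the list prices quoted above: Google Apps Premier
    # at $50 per user per year versus an Office 2007 license at up to $500
    # per user, bought once. Support, hardware, and upgrade cycles ignored.
    SUBSCRIPTION_PER_YEAR = 50.0
    PERPETUAL_LICENSE = 500.0

    breakeven_years = PERPETUAL_LICENSE / SUBSCRIPTION_PER_YEAR
    print(f"Subscription matches the one-time price after {breakeven_years:.0f} years")
    # -> 10 years, before accounting for the periodic cost of Office upgrades.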

More than half a million organizations have signed up for Google Apps -- including General Electric and Procter & Gamble -- and there are now some 10 million Google Apps users. But keep that in perspective: The majority of those users are consumers, college students, and employees of small businesses, not the corporate crowd. Google senior product manager Rajen Sheth acknowledges that Google's apps aren't intended to replace business tools like Office.

Google has taken steps to make its applications, originally aimed at consumers, more attractive to IT departments. Last year, the company acquired Postini, whose hosted e-mail security and compliance software is now part of Google Apps, and in April it partnered with Salesforce.com to integrate Salesforce CRM and Google Apps, including a premium package that comes with phone support and third-party software for $10 per user each month.

Google is also adjusting to the reality that users sometimes need to work offline. Google Gears is a browser plug-in for doing that.

Google has teamed with IBM to provide cloud computing to university students and researchers. The Google-IBM cloud is a combination of Google machines and IBM BladeCenter and System x servers running Linux, Xen virtualization, and Apache's open source Hadoop framework for distributed applications.

"One great advantage we have, and one of the reasons we started to explore this, is that we run one of the largest online apps in the world, if not the largest," says Sheth, referring to Google's Web search engine. The project, Sheth says, will help "foster new innovation and new ideas" about cloud computing.

Google and IBM have been cagey about any plans to extend their cloud collaboration to enterprises, but it would be an obvious next step. "There's not that much difference between the enterprise cloud and the consumer cloud" beyond security requirements, Google CEO Eric Schmidt said a few weeks ago. "The cloud has higher value in business. That's the secret to our collaboration."

With its plug-and-compute simplicity, the cloud seems ethereal, but don't be fooled. Google's cloud represents a massive investment in IT infrastructure. Google has recently completed or is in the process of building new data centers in Iowa, Oregon, North Carolina, and South Carolina, at an average cost of about $600 million each.

SALESFORCE

Salesforce became the proving ground for software as a service with its Web alternative to premises-based sales force automation applications, and dozens of SaaS companies followed. Salesforce's next act: platform as a service.

Marc Benioff's company is making its Web application platform, Force.com, available to other companies as a foundation for their own software services. Force.com includes a relational database, user interface options, business logic, and an integrated development environment called Apex. Programmers can test their Apex-developed apps in the platform's Sandbox, then offer the finished code on Salesforce's AppExchange directory.

In the early going, developers used Force.com to create add-ons to Salesforce CRM, but they're increasingly developing software unrelated to Salesforce's offerings, says Adam Gross, the vendor's platform VP. Game developer Electronic Arts built an employee-recruiting application on Force.com, and software vendor Coda crafted a general ledger app.

At the same time, Salesforce continues to advance its own applications, which are now being used by 1.1 million people. An upgrade due this summer will include the ability to access Google Apps from within a Salesforce application, more than a dozen new mobile features, an "analytics snapshot," enhanced customer portals, and improved idea exchange and content management.

Salesforce is getting into other cloud services, too. In April 2007, it jumped into enterprise content management with Salesforce Content, which lets users store, classify, and share information similar to Microsoft SharePoint and EMC Documentum.

Salesforce has adopted a multitenant architecture, in which servers and other IT resources are shared by customers rather than dedicated to one account. "There's no question there's an evangelism involved with doing multitenancy, but, with education, customers quickly come on board with the model," says Gross.

The proof is in the sales figures. Salesforce's revenue grew to $248 million in the quarter ended April 30, a 53% increase over the same period a year ago, keeping it on pace to become the first billion-dollar company to generate almost all of its sales from cloud computing.

MICROSOFT

If any technology company has had its cloud strategy questioned, it's Microsoft. Now, after a couple of years of putting the pieces into place, Microsoft is showing progress.

Some vendors envision a future where most, if not all, IT resources come from the cloud, but Microsoft isn't one of them. Its grand plan is to provide "symmetry between enterprise-based software, partner-hosted services, and services in the cloud," chief software architect Ray Ozzie said a few months ago. More simply, Microsoft calls it "software plus services."

Microsoft's first SaaS offerings for business, rolling out this year, are Dynamics CRM Online, Exchange Online, Office Communications Online, and SharePoint Online. Each will be available in a multitenant version, generally for small and midsize businesses, and a single-tenant version for companies requiring 5,000 or more licenses. For consumers, Microsoft's online services include Windows Live, Office Live, and Xbox Live.

A handful of large companies -- Autodesk, Blockbuster, Energizer, and Ingersoll-Rand among them -- are early adopters. Anyone who doubts that Microsoft has entered the cloud services game should consider this: Coca-Cola plans to subscribe to 30,000 seats of Microsoft-hosted Exchange and SharePoint by next year.

Microsoft senior VP Chris Capossela says customers can mix and match hosted and on-premises versions of its software, an attractive option for companies with branch offices that lack IT staff. Microsoft hasn't disclosed pricing for its online services, but Capossela says it's naive to think that cloud services will be cheaper than on-premises software over the long haul. "You're going to pay forever," he says. "It's a subscription, for goodness' sake."

What's next? A project called MatrixDB would extend on-premises SQL Server databases to Microsoft-hosted databases in the cloud. That's still a couple of years away, but it hints at future possibilities. And Microsoft points to BizTalk Services, its hosted business process management software, as one element in a forthcoming "Internet service bus" that functions like an enterprise service bus, albeit online.

As for the Windows operating system, Microsoft's upcoming synchronization platform, Live Mesh, and some of the Windows Live services will be more tightly integrated with it.

The shift to cloud services has forced Microsoft to rethink not just the way its products are architected, but its data center strategy, too. For years, Microsoft leased its major data centers, but it has now begun to design, construct, and own them, with U.S. facilities recently completed or under construction in Illinois, Texas, and Washington, and another under way in Dublin, Ireland.

SUN MICROSYSTEMS

John Gage, Sun Microsystems' co-founder, coined the phrase "the network is the computer" nearly 20 years ago. Arguably, that was the beginning of the cloud -- but the wind changed direction.

Sun "got it backwards," CTO Greg Papadopoulos now says. How so? With its Sun Grid technology, Sun focused on mission-critical, highly redundant data center environments. "We found that nobody cared about that," says Papadopoulos. "They just want it to be easy to use."

Making cloud computing easy to use is now the focus of two initiatives at Sun: Network.com, a collection of grid-enabled online applications available on a pay-per-use basis, and Project Caroline, a research effort to make cloud-based resources available to developers working on Web applications and services. They coincide with what Papadopoulos calls "Red Shift," his theory that computing demand will exceed capacity at many companies. The obvious solution: cloud computing.

Network.com is evolving into a "virtual on-demand data center" that customers can use in real time as business demands change, says senior director of software Mark Herrin.

Project Caroline is intended to become a hosting platform for SaaS providers. The goal is to make it "far more efficient to develop multiuser Internet services rapidly, update them frequently, and reallocate resources flexibly and cost-efficiently," according to Sun. An open source project led by Sun VP of technology Rich Zippel, Caroline supports applications built in several programming languages, including Java, Perl, Python, Ruby, and PHP. "We don't really think that 'all applications will tie back to Sun servers on the Internet,'" Zippel writes on his blog. "We're really bullish about the ability to develop, deploy, and deliver software services on the Internet."

Like Microsoft, Sun expects businesses to continue to need some of their own IT infrastructure. Sun's data-center-in-a-box, Blackbox, is designed for companies that face massive computing requirements but aren't ready to shift all their infrastructure to the cloud. Similarly, Sun's Constellation groups together Sun Blade 6000 servers.

"The public clouds will be spillover points for enterprises," says Papadopoulos. "They'll be able to make a judgment. I may not like my crown-jewel data living in the cloud, but it'd be good to pull in another 1,000 nodes and do some work."

IBM

IBM last year unveiled Blue Cloud, a set of offerings that, in IBM's words, will let corporate data centers "operate more like the Internet by enabling computing across a distributed, globally accessible fabric of resources." The pieces of Blue Cloud include virtual Linux servers, parallel workload scheduling, and IBM's Tivoli management software. In the first phase, IBM is targeting x86 servers and machines equipped with IBM's Power processors; in phase two, IBM will loop in virtual machines running on its System z mainframes. Blue Cloud is "not just about running parallel workloads but about more-effective data center utilization," says Denis Quan, CTO of IBM's High Performance On Demand Solutions unit.

IBM's first commercial cloud computing data center is going up in Wuxi, in southern China. It will provide virtualized computing resources to the region's chipmaking companies.

IBM's advantage in cloud computing is its expertise in building, supporting, and operating large-scale computing systems. IBM got into cloud computing a few years ago with its Technology Adoption Program, an "innovation portal" run out of the Almaden Research Center to give engineers on-demand resources, such as DB2 databases and Linux servers.

Last October, IBM announced a partnership with Google to provide cloud computing gateways to universities. Intended as a way of teaching university students how to use parallel programming models, the initiative is "critical to the next generation of cloud-based applications," Quan says. Three cloud computing centers for academia have gone live, one at Almaden, one at the University of Washington in Seattle, and one in a Google data center.

For IT departments, IBM's cloud software, systems, and services can be brought together into what the vendor calls the "New Enterprise Data Center," with quality-of-service guarantees to reassure CIOs that there's nothing hazy about the cloud.

ORACLE

Despite its sometimes-contradictory signals, Oracle was an early proponent of the on-demand model, launching Oracle Business OnLine in 1998. At that time, CEO Larry Ellison described the new Web-based delivery model as an extension of the company's existing software business. Today, it's clear that Oracle's destiny lies in the cloud, even if the company has been reluctant to switch its lucrative on-premises software license business over to a subscription model.

Speaking to financial analysts last September, Ellison downplayed the SaaS movement, saying there's no profit to be made in delivering applications over the Internet. (He's obviously wrong on that point.) President Charles Phillips has said Oracle plans a "stair-step" approach to the cloud, gradually moving on-premises customers over to Web-based software.

Oracle got into cloud computing in one fell swoop with its 2005 acquisition of Siebel Systems for $5.8 billion. At the time, Oracle executives called the deal a beachhead against SAP, but it's clear in hindsight that Siebel's on-demand CRM applications were every bit as important to Oracle's long-term strategy. Oracle On Demand comprises much of the vendor's software stack, including the company's flagship database.

Oracle has developed a "pod" architecture for its on-demand data centers. Pods can be configured for individual customers, in clusters for large companies with multiple departments, or in multitenant versions for shared use.

Oracle's on-demand business generated $174 million in revenue in the fiscal quarter ended March 26, up 23% from the same quarter last year, and it's on track for $700 million for the year. While On Demand represents only about 3% of Oracle's revenue, it's the fastest-growing part of the business, with 3.6 million users.

To support growth, Oracle, like other cloud service providers, is building a new data center. This summer, it will break ground on a 200,000-square-foot facility in Utah, with an initial investment of $285 million.

EMC

CEO Joe Tucci barely touched on EMC's plans for cloud computing at EMC World last month, but you can be sure he's thinking about it. The cloud by its very nature is a virtual computing environment, and where there's virtualization, there's EMC, owner of VMware.

Earlier this year, EMC acquired personal information management startup Pi and, with it, former Microsoft VP Paul Maritz, who's been tapped as president of EMC's new cloud infrastructure and services division. In fact, the acquisitive EMC has for several years been pulling in companies that bolster its ability to deliver on cloud computing. In 2004, it bought Smarts, whose software configures distributed networks and monitors storage. And last year, EMC acquired Berkeley Data Systems and its Mozy backup services.

EMC has expertise in information life-cycle management, which is one area where it expects to have a role in cloud computing. "If we look at EMC's core asset portfolio, all of the key areas of the information infrastructure lend themselves not only to current models of up-front acquisition but also the new model of SaaS and pay-as-you-go subscription delivered over the Internet," says CTO Jeff Nick.

Nick sees companies moving to cloud storage and information management services as a way to "outtask" jobs to cloud computing vendors. "The key to storage in a cloud environment is not just to focus on bulk capacity but as much as possible make it self-managing, self-directive, and self-tuning," Nick says.

What kinds of cloud services might EMC offer? Storage is a no-brainer, though it doesn't have such an offering yet. Beyond that, EMC might be able to bridge compliance monitoring across online and on-premises storage. EMC sees opportunities for SaaS business process management and collaboration, as well as personal information management for consumers. Data indexing, archiving, disaster recovery, and security are all possibilities, too, Nick says. Several of EMC's acquired businesses, including Documentum (indexing and archiving), RSA (security), and Infra (IT service management) are likely paths to getting there.

EMC's VMware division will find its way into the mix. "We want to be the plumbing and the enabler of cloud computing," says VMware CTO Stephen Herrod.

Like his colleague Nick, Herrod is looking ahead. He hints at enabling on-premises server infrastructure to scale up via on-demand virtual servers, disaster-recovery scenarios, and using management software like that acquired in VMware's purchase of B-hive Networks to maintain service-level agreements.

In other words, today's cloud represents just the beginning of many new possibilities.


Sidebar:

SMBs Will Rise To Cloud Computing

If cloud computing offers benefits to enterprise IT departments, it's an absolute godsend to small and midsize companies. Instead of making do with a small, underresourced IT staff trying to emulate the productivity of IT outfits with multimillion-dollar budgets, smaller companies can now access enterprise-class technology with low up-front costs and easy scalability.

Important as those things are, they're only the first steps in a larger shift. Cloud computing doesn't just level the playing field - it promises to tilt it in the other direction. Simply put, today's most powerful and most innovative technology is no longer found in the enterprise. The cloud makes leading-edge technology available to everyone, including consumers, often at a far lower cost than businesses pay for similar or inferior services.

Years ago, most people had access to the best technology at work, Google VP Dave Girouard said recently. "You had a T1 line to access the Internet at the office, for example, then went home to watch three channels of TV."

Those days are gone. Compare a typical Exchange Server, offering perhaps 500 MB of e-mail storage per user, to Web-based e-mail services that give users up to 7 GB of storage at no cost. (Google's corporate version offers 25 GB per user for $50 a year.) Likewise, compare on-premises enterprise content management systems to easier-to-use and more-flexible cloud-based publishing and sharing systems like Blogger, Flickr, and Facebook. They're free, too.

Those comparisons may not be relevant to big companies, but they are to SMBs. While large enterprises typically use the cloud for infrastructure services such as storage, SMBs are more likely to plug into the cloud for day-to-day productivity applications, says Michelle Warren, a senior analyst at Info-Tech Research.

In fact, as cloud computing matures, we'll see small companies rely on the cloud for more and more of their technology needs, gradually eschewing the costs and complexity of in-house IT infrastructure.

"We're moving toward a world where IT is outsourced," Warren says. "Maybe not 100%, but 95%. It will happen more in the SMB than in the enterprise, for sure." -Fredric Paul, publisher and editor in chief of TechWeb's bMighty.com, which provides technology information to SMBs

DIG DEEPER

SAAS STRATEGY Web-based apps can be a compelling alternative to on-premises software, but you need a plan. Download this InformationWeek Report at:

informationweek.com/1182/report_saas.htm


Cloud Computing's Strengths Play To Smaller Companies' Needs ...

For small and midsize companies, cloud computing offers enterprise-class -- or even better, consumer-class -- applications for far less than enterprises pay to do it themselves.

If cloud computing offers significant benefits to IT departments in the enterprise, it's an absolute godsend to small and midsize companies. Instead of making do with a small, under-resourced IT staff trying to emulate the productivity of billion-dollar IT outfits, smaller companies can now access enterprise-class solutions with limited up-front costs and easy scalability.

Important as this is, though, cloud computing is only the first step in an even larger tectonic shift in the technology world. Cloud computing doesn't just level the playing field -- it promises to tilt it in the other direction.

Simply put, today's best, most powerful, and most innovative technology is no longer to be found in the enterprise. Google VP and enterprise GM Dave Girouard looked back on how it used to be at a recent product announcement. "20 years ago, you had access to the best technology in the workplace," he recalled. "You had a T-1 line to access the Internet at the office, for example, then went home to watch three channels of TV."

Those days are gone. Today, the cloud makes leading-edge technology available to everyone, including consumers, often at a far lower cost than businesses pay for similar or inferior services.

Hard to Beat Free
Compare a typical Exchange server -- offering perhaps 500MB of email storage -- to Web-based email services that offer up to 7GB of storage. Free. (Google's corporate version offers 25GB per user for $50 a year. And don't forget Apple's just-announced MobileMe service, which costs $99 per user per year but doesn't require any infrastructure.) Compare enterprise content-management systems with easier-to-use and more-flexible cloud-based publishing/sharing systems like Blogger, Flickr, and Facebook. They're free, too.
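
To put the quoted figures side by side, here is a minimal sketch in Python using only the numbers in the paragraph above; the 100-person firm is hypothetical, and the comparison looks at raw mailbox storage only (the article quotes no on-premises Exchange cost, so none is assumed here).

    # Effective cost of the hosted option quoted above: $50 per user per year for 25 GB.
    google_apps_price_per_user_year = 50.0   # USD
    google_apps_storage_gb = 25.0            # GB included per user

    cost_per_gb_year = google_apps_price_per_user_year / google_apps_storage_gb
    print(f"Hosted mail: ${cost_per_gb_year:.2f} per GB per year")

    # Raw mailbox capacity a hypothetical 100-person firm would have to provision
    # itself to match that quota (before redundancy, backups, or growth).
    users = 100
    total_tb = users * google_apps_storage_gb / 1000
    print(f"{users} users x {google_apps_storage_gb:.0f} GB = {total_tb:.1f} TB to host in-house")

At the quoted price that works out to about $2 per gigabyte per year, with no servers, storage arrays, or Exchange licences to buy, which is why the comparison is so lopsided for small firms.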

Those comparisons may not be relevant to the enterprise, but they are to SMBs. According to Michelle Warren, senior analyst at Info-Tech Research in Toronto, while large enterprises typically turn to the cloud for things like storage or disaster recovery (or for departmental requirements like CRM or HR), day-to-day cloud computing applications are more appealing to SMBs.

Many smaller companies don't really care about infrastructure, adds Agatha Poon, Senior Analyst, Enterprise Research at Yankee Group, and "have no idea what cloud computing is about." But they are driven to outsource applications to meet their business needs.

Apps Are in the Office
Ultimately, companies large and small may have little choice. As workers become accustomed to high-powered cloud applications available in the consumer space, they are sneaking them into the business environment whether IT departments are ready or not. While most corporate IT departments resist Instant Messaging, for example, rank-and-file users find creative ways to introduce IM to the workplace because they don't want to live without its proven business benefits just because they happen to be at the office.

Cloud computing concerns remain, of course. Warren advises SMBs to look for providers who deliver adequate security and support -- and be willing to pay for it when appropriate. Yankee Group's Poon warns SMBs not to underestimate network bandwidth expenses and to make sure their providers will be around for the long haul.

That's important, because there seems little doubt that over the long term cloud computing will supply more and more of smaller companies' technology needs, helping them avoid the costs and complexity of in-house applications and infrastructure.

The only question is when. While Poon sees technology startups as today's pioneers in this area, she says traditional SMBs still face a steep learning curve on cloud computing. And she cautions that "it's going to take some time to see if the cloud model works and if it's mature enough" to fully support even smaller businesses.

Warren is more bullish. "We're moving toward a world where IT is outsourced," she says. "Maybe not 100%, but 95%. I think it will happen more in the SMB than in the enterprise, for sure."

See more columns by Fredric Paul

[June 21, 2008] Guide To Cloud Computing

June 23, 2008 InformationWeek

The market is getting crowded with Web-based software and storage offerings. Here's what you need to know about the cloud computing strategies of Amazon, Google, Salesforce, and five other leading vendors.

By Richard Martin and J. Nicholas Hoover

When people talk about "plugging into the IT cloud," they generally have something very simple in mind--browser access to an application hosted on the Web. Cloud computing is certainly that, but it's also much more. What follows is the longer, more detailed explanation.

With so much happening in the technology industry around cloud computing, InformationWeek set out to define the megatrend in a way that helps IT professionals not only understand the nuances, but also make informed decisions about when and where to use cloud services in lieu of on-premises software and systems. Cloud computing represents a new way, in some cases a better and cheaper way, of delivering enterprise IT, but it's not as easy as it sounds, as we learned in a discussion with a few yet-to-be-swayed CXOs. The venue was the recent Enterprise 2.0 conference in Boston, where InformationWeek and TechWeb, our parent company, brought together senior technologists from the California Public Utilities Commission, Northeastern University in Boston, and Sudler & Hennessey to engage leading cloud vendors in an open forum on The Cloud.

Everyone agreed that cloud services such as Amazon Web Services, Google Apps, and Salesforce.com CRM have become bona fide enterprise options, but there were also questions about privacy, data security, industry standards, vendor lock-in, and high-performing apps that have yet to be vaporized as cloud services. (For a recap of that give-and-take, see "Customers Fire A Few Shots At Cloud Computing.")

If we learned anything from our Enterprise 2.0 cloud forum, it's that IT departments need to know more. Our approach here is to look at cloud computing from the points of view of eight leading vendors. In doing so, we're leaving out dozens of companies that have a role to play, but what we lack in breadth, we hope to compensate for in depth.

And this analysis is just the beginning of expanded editorial coverage by InformationWeek on cloud computing. Visit our just-launched Cloud Computing blog on InformationWeek.com, and sign up for our new weekly newsletter, Cloud Computing Report. We're also developing video content, an in-depth InformationWeek Analytics report, and a live events series in the fall.

[Jun 16, 2008] Down To Business - Customers Fire A Few Shots At Cloud Computing ...

InformationWeek

It's a variation on the old straw man argument, whereby a vendor defines its customers' main concerns about a product, technology, or technology model and then proceeds to explain them away one by one. When it comes to cloud computing, the next great information technology movement, the leading vendors express customer concerns along these lines: Is it secure enough? Is it reliable enough? Does it make financial sense?

Potential customers are indeed grappling with those issues, but they have so many more questions about cloud computing. Will they get locked into a single vendor, with their data or applications held hostage? Will mostly consumer-oriented vendors such as Google and Amazon.com stay in the enterprise IT business for the long haul? Given the laws and regulations governing certain industries and business practices, what data can live in the Internet cloud and what data must organizations keep closer to the vest? Are there adequate applications and other IT resources to be found in the cloud?

At a session on June 9 at the Enterprise 2.0 Conference in Boston, a panel of executives from Google, Amazon Web Services (AWS), and Salesforce.com pitched their cloud-based services to a panel of CXOs from potential customer organizations and then answered their pointed questions. Here's what's on those CXOs' minds, and how the vendors responded.

- Security. Yes, it's still top of mind for most customers. Is data held somewhere in the cloud (customers don't always know its exact location) and piped over the Internet as secure as data protected in enterprise-controlled repositories and networks?

The vendor argument usually comes down to scale and centralized control. Few enterprises can allocate the money and resources that companies such as Amazon, Google, IBM, and Salesforce do to secure their data centers. Salesforce senior VP Ross Piper makes the point that Salesforce's data centers had to pass the intense muster of security vendor customers Cisco and Symantec. Adam Selipsky, VP of product management and developer relations for AWS, which offers cloud-based storage, compute, billing, and other services to enterprises and individuals, notes Amazon's years of experience protecting tens of millions of credit card numbers of retail customers.

Data stored within the cloud, the vendors argue, is inherently safer than data that inevitably ends up on scattered laptops, smartphones, and home PCs. Google relates how one of its execs, Dave Girouard, had his laptop stolen at a San Francisco Giants game. Girouard evidently called the CIO with one concern: Do I replace it with a PC or a Mac?

But the cloud vendors realize that when it comes to security, enterprise customers aren't going to take their word for it. "We need to prove to you that security is strong with all Web apps," Google Enterprise product manager Rishi Chandra said in a keynote address at Enterprise 2.0.

- Vendor lock-in and standards. Object Management Group CEO Richard Soley, a leading standards setter in his own right, wonders if Internet specifications are mature enough to ensure data portability across the cloud. "How easily could I pick up my application from one vendor and move it to another?" Soley says.

The cloud vendors emphasize the openness and extensibility of SOAP, XMPP, and other Web services protocols. AWS's Selipsky notes that the vendor's IT infrastructure services require no capital or other up-front investments, and Piper points out that Salesforce's app service customers can start with as few as five users and commit gradually. A cloud vendor retort: How easy is it, by comparison, for customers of SAP, Oracle, or EMC premises-based wares to up and leave?

- Regulatory and legal compliance. Organizations looking to move some of their data into the cloud must navigate a labyrinth of vertical (HIPAA, PCI, FERPA, etc.) and horizontal (SOX, Patriot Act, FISMA, etc.) rules on where information must be stored and how it must be accessed, especially for e-discovery. And most of those rules are open to interpretation. The cloud vendors offer no pat answers. They can't change the laws and, in seeking clarity for potential customers, they, too, get five opinions for every four lawyers they consult.

Then there are the related privacy concerns. "We have your Social Security numbers ... we know where your children go to school," says Carolyn Lawson, CIO of the California Public Utilities Commission, emphasizing a concern among most government IT organizations: What if that sensitive data were to fall into the wrong hands?

- Reliability. Mary Sobiechowski, CIO of health care advertising and marketing agency Sudler & Hennessey, questions whether the cloud delivers the capacity for transmitting the kinds of large files typical in an agency environment. "There's bandwidth issues," she says. "We also need real fast processing."

And no matter how robust their technology infrastructures are, the cloud vendors experience outages. For example, the Amazon.com site suffered downtime and slowdowns several times in recent days, and AWS's storage service went down one day in February. Google customers experienced technical difficulties with Gmail on April 16. In both vendors' cases, the performance problems weren't major. But Amazon and Google could learn a thing or two from Salesforce, which has had its share of outages, about customer-friendly transparency. Its trust.salesforce.com site lets users view daily performance and availability in a traffic light format. Salesforce also is more responsive to the media about its infrastructure problems.

All the major cloud vendors point to their service-level agreements, which, of course, compensate customers for service disruptions, not for lost business. In the end, their value proposition is this: Is your application, database, storage, or compute infrastructure any more reliable than theirs? And even if it's comparable, wouldn't your IT organization rather spend its time on matters that make a competitive difference instead of managing and upgrading servers, disk arrays, applications, and other software and infrastructure? Google business applications development manager Jeff Keltner refers to the 70% to 80% of most IT budgets spent on infrastructure management and maintenance as "dead money."

- Total cost of ownership (or rental). The cloud vendors make an excellent case that it's cheaper to subscribe to their services than to buy and run premises-based hardware and software. Pay no up-front costs; pay for only what you use, with the ability to scale up and down quickly; and take advantage of the vendors' huge economies of scale. AWS's storage service, for instance, costs just 15 cents per gigabyte per month (a back-of-the-envelope conversion of that rate appears after this list of concerns). With subscription software services, the cost equation is less clear. In most cases, it's at least a wash.

- Choice. So you're considering moving some IT resources into the cloud. What options are available?

Those options grow every day. Salesforce's Web platform, Force.com, includes an exchange for hundreds of third-party applications, as well as a relational database service, business logic services, and an integrated development environment. Google offers a range of cloud-based app and storage services. Amazon's offerings include storage and computer processing services, and a database service now in beta. EMC, IBM, Microsoft, Sun, and other major players are ramping up a range of services, and scores of tech startups are embracing the subscription approach. The big question: How much customization is possible?

- Long-term vendor commitment. Will consumer giants such as Google and Amazon get bored slogging it out with slower-moving, more deliberative enterprise buyers? The cloud vendors like to compare the current IT provisioning model with the early days of electricity, when companies ran their own generators before moving to a handful of large utility providers. But the metaphor may be apt in another way. Northeastern University CTO Richard Mickool questions whether high-energy, high-innovation companies such as Google and Amazon will lose interest in selling commodity, electricity-like services.

Of course, the vendors insist they're in this business for the long term, and that customers are warming to the movement. Says Google's Chandra: "It's not a matter of when or if the cloud computing paradigm is coming. It's a matter of how fast." That depends on how fast vendors can assuage customers' concerns.
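
To make the total-cost-of-ownership point above concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 15-cents-per-gigabyte-per-month rate comes from the article; the data volumes are hypothetical, and a real bill would also include request and data-transfer charges.

    # Back-of-the-envelope cloud storage cost at the article's quoted rate.
    # Assumptions: $0.15 per GB per month, flat; request and transfer fees ignored.
    PRICE_PER_GB_MONTH = 0.15

    def monthly_storage_cost(gigabytes):
        """Monthly cost of keeping `gigabytes` of data in cloud storage."""
        return gigabytes * PRICE_PER_GB_MONTH

    for gb in (100, 1000, 10000):  # roughly 100 GB, 1 TB, 10 TB
        monthly = monthly_storage_cost(gb)
        print(f"{gb:>6} GB -> ${monthly:>9,.2f}/month, ${monthly * 12:>10,.2f}/year")

At that rate a terabyte runs to roughly $150 a month with nothing to buy up front, which is the pay-for-only-what-you-use argument in numbers; whether it stays cheaper than owned disk depends on how long the data lives and how much of it has to cross the WAN.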

Rob Preston, VP and Editor in Chief

[May 29 2008] Response to Alastair McAulay's article on whether utility computing will live up to its billing

By Kishore Swaminathan, chief scientist, Accenture


THREE ISSUES MUST BE ADDRESSED BY PROVIDERS BEFORE CLOUD COMPUTING IS READY FOR PRIME-TIME

Cloud computing has received plenty of attention of late. Its colorful name notwithstanding, it promises tangible benefits to its users such as sourcing flexibility for hardware and software, a pay-as-you-go rather than fixed-cost approach, continuous upgrades for mission-critical enterprise software, and the centralized management of user software.

Yet, as presently constituted, cloud computing may not be a panacea for every organization. The hardware, software and desktop clouds are mature enough for early adopters. Amazon, for example, can already point to more than 10 billion objects on its storage cloud; Salesforce.com generated more than $740 million in revenues for its 2008 fiscal year; and Google includes General Electric and Procter & Gamble among the users of its desktop cloud. However, several issues must still be addressed and these involve three critical matters: where data resides, the security of software and data against malicious attacks, and performance requirements.

Data

Say you use Salesforce.com as your Software as a Service (SaaS) provider. In theory, this is where you would locate your CRM data. However, in practice, most organizations are likely to have proprietary applications running in their own data centers that use the same data. Furthermore, if one provider is used for CRM, another for human resources and yet another for enterprise resource planning, there's likely to be considerable overlap between the data used by the three providers and the proprietary applications in the company's own data center.

So where do you locate the data?

There are existing and potential options. First is the platform scenario. Some SaaS providers have evolved from offering a single vertical capability (such as CRM) into providing SaaS "platforms" that bundle their vertical capability with a development environment and an integration environment. The theory is that while an enterprise may use multiple SaaS providers and proprietary applications, there will be a primary provider that hosts all the data and provides a development environment where the data can be integrated with all the applications (including other SaaS offerings) that use it.

Next is the database-as-a-service scenario. In this case, one provider – which could be the organization's own data center – hosts only the data and provides "database as a service" to all the applications. While this model does not presently exist, it's a distinct possibility that would offer the advantage of not tying the enterprise to a single SaaS provider for all its data and integration needs.

Security

Perhaps the primary concern when it comes to cloud computing is security. While this is likely to be a significant near-term concern, one could argue that a large SaaS provider whose entire business is predicated on securing its clients' data and applications would be in a better position to do so than the IT organization of a single company itself. As an example of the confidence certain global organizations have in this new paradigm, Citigroup recently signed up 30,000 users with Salesforce.com to address CRM needs after extensive evaluation of the SaaS provider's security infrastructure.

The point is that providers are endeavoring to improve their offerings to meet clients' enterprise-grade security needs. Still, companies in certain industries, such as defense, aerospace and brokerage, are likely to avoid the cloud computing environment because of their security and compliance concerns.

However, not too far off is the emergence of "gated community" clouds that would keep out the riffraff. These highly selective private clouds or consortia would have dedicated networks connecting only their members. Unlike a single company's data center, a private cloud can consolidate the hardware and software needs of multiple companies, provide load balancing and economies of scale, and still eliminate many of the security concerns.

Performance

While the Internet is fast enough to send data to a user from a remote server, it lacks the speed to handle large transaction volumes with stringent performance needs if the applications must access their databases over the Internet.

The emergence of a full ecosystem of enterprise SaaS (for ERP, HR, CRM, supply chain management) plus database-as-a-service, all within a single hardware cloud, will help solve performance problems. An ecosystem cloud also has the added advantage of better integration across the different providers that are part of the same ecosystem.

Just as today's enterprise software falls into distinct ecosystems (Microsoft, Oracle, SAP), cloud computing may well organize itself in a similar fashion, with a number of hardware clouds and a set of complementary (non-competing) SaaS providers and a database-as-a-service provider.

For now, cloud computing offers several potential benefits for both small and large companies: lower IT costs, faster deployment of new IT capabilities, an elastic IT infrastructure that can expand or contract as needed and, most importantly, a CIO freed from the more mundane aspects of managing the IT infrastructure. In fact, industry analyst Gartner estimates that by 2012, one in every three dollars spent on enterprise software will be a service subscription rather than a product license and that early adopters will purchase as much as 40 percent of their hardware infrastructure as a service rather than as equipment.

Although individual aspects of cloud computing are already relatively mature, how they will all come together and which of the scenarios discussed will play out remains unclear. For now, organizations can gain valuable experience and insight into how to take advantage of this emerging phenomenon by taking a tactical approach: moving aging, non-strategic applications to a hardware cloud; using SaaS for processes that don't have significant data overlap with other critical processes; and experimenting with network-based desktops for small, deskbound workforces.

This gives CIOs ample time to develop long-term strategies as the paradigm continues to evolve in order to solve those data, security and performance issues.

[May 29 2008] FT.com - Digital Business - Utility computing - Response to Phill Robinson's article on the shift to SaaS


By Marc Davies, strategic management specialist and principal leader for the CIO group at CSC


While I agree with Phill Robinson and his view that software as a service has 'happened', I disagree with the inference of the title, which pushes the reader into envisaging a world where SaaS has already transformed business and all we have to do is wait for the laggards to realise that their incumbent IT infrastructures have gone the way of the dinosaur. What has 'happened' is the appearance of another way of consuming technology for potential competitive advantage, with its attendant risks.

To be fair, Phill does himself point out that this paradigm will '…grow and develop.' Further, that 'There will not be a sudden shift…' Of course, both these statements appear at the bottom of the article, by which point the reader is already assumed to be fully on board with this thinking.

As a serious proponent of service orientation for many years, I have one or two observations that should give business decision-makers pause for thought. Let's tackle them, contextually, against some of Phill's assertions.

Transumerism: is it a new paradigm for online, consumer-driven expectations? I'm minded to say both no and yes. No, because in my opinion transumerism is not new, either in the world of the general consumer or in that of the IT-exposed worker. Yes, because the emergence of the internet-savvy, broadband-enabled consumer has accelerated the perception that change and experience can be both dictated and controlled.

A useful analogy may be the invention of the printing press, whose comparatively prolific production of the written word in a 'new', completely consistent form gave many more people the opportunity to become entrepreneurs, gathering far more options through information than had been possible before. As for the rent-not-buy principle, there are so many examples of this approach in society, from 17th-century aristocrats to 19th-century industrialists to 20th-century media stars, as to make it difficult to see it as a completely new paradigm in anything other than a pure IT context. As we are playing with the notion that SaaS is being driven by societal change with IT as an enabler, this poses a significant contradiction.

So, if transumerism is neither necessarily new nor necessarily a societal paradigm, is SaaS really a fundamental business driver?

I'm not so sure we are there yet, and I'm going to draw on the concept of the trysumer to bolster my case. If we consider businesses such as Facebook, then the fundamental of the trysumer is in essence consumerism as a fashion accessory, and the one certainty of fashionista society is its transience. Now, Phill makes this point, but I'd like to ask the question: if you are the owner of a SaaS site, how do you construct a business model around a fashion accessory? In addition, how do you profitably break out of a culture that expects most of the services to be free?

Going deeper, how does a corporate consumer ensure that an external SaaS supplier relationship is managed in its best interests once critical business processes are in the hands of outside agencies? I am not talking about the dangers of outsourcing here – this is far more fundamental. If a business makes a decision to outsource a service, such as its IT management, then there are physical and contractual guarantees on both sides that give the business a degree of confidence in the service contract. If, on the other hand, the business finds that a key step in its own transactional capability has (a) moved to Eastern Europe, (b) been bought out by a rival and (c) gone offline without notice, all in the space of 24 hours – then how does the business survive such a commercial shock?

Entrants into the world of supplying SaaS products have significant challenges in ensuring their future products are sufficiently agile to move with the fashion trends of popular vs. unpopular websites, which will continue to challenge their shareholders' faith in any real return. A few will make it very big but, in a global context, this may reduce choice and market flexibility rather than enhance it.

Corporate consumers of external services still have to overcome the twin challenges of trust and cost; trust that the website delivering the services will still be there tomorrow and the cost of survival if one of the key business processes disappears overnight.


The views expressed in this piece are the author's own.

[May 12, 2008] FT.com - Digital Business - Utility computing - Will it live up to its billing second time around?

By Alastair McAulay


From reading the latest vendor announcements telling us that Software as a Service (SaaS) and cloud computing are going to transform how IT is provided to the enterprise, it is easy to forget recent history.

It was only five years ago that Gartner was predicting that by 2008 utility computing (a broadly similar concept) was going to be in the mainstream. At the time, there was a huge wave of publicity from the big vendors about how utility computing was going to transform the way IT services were provided.

Plainly, utility computing has not yet become a mainstream proposition. So what has happened to it in the past five years, and will things be different this time round?

Five years ago, many of us in the industry were working with clients, helping to set up some of these utility computing sourcing deals. It soon became clear from the fine print that on-demand services were only available under certain circumstances, such as with applications hosted on mainframes. That was hardly earth-shattering when you consider that mainframe technology has allowed virtualisation and charging-per-use for decades.

Of course, there were differences. But fundamentally, the deals were never as exciting as they were hyped up to be. A large part of this was due to the fact that the necessary virtualisation technology was not mature enough to allow the foundation IT infrastructure to be run as flexibly as it needed to be for true on-demand services.

This time round, the vendors are playing a subtly different tune. They seem to be saying: "With virtualisation technology, such as the near-ubiquitous VMWare, now fully mature, we really can deliver utility computing. Trust us."

However, we are finding that while businesses are more than happy to use virtualisation technology within the boundaries of their own organisations for its flexibility and efficiency gains, they are not really paying much attention to utility computing, be it SaaS or cloud computing or whatever it's being called this week. Indeed, according to silicon.com's recent CIO Agenda 2008 survey, utility computing was languishing at the bottom of the wish list next to Vista and RFID.

But let us set this indifference aside and, for the sake of argument, assume that we trust some of what the vendors are now saying, and that we are disciples of Nicholas Carr, author of "Does IT matter?" and believe that IT can be purchased as a utility. So what should we do now?

The first thing to do is to add a word to the title of Mr Carr's 2003 Harvard Business Review article to make it "IT doesn't matter . . . sometimes". The challenge therefore becomes working out what "sometimes" is. There are three steps that need to be taken to define this IT strategy for the real world:

Step one is to recognise that within most organisations of more than 100 people there are unique business requirements where the IT involved does have to be tailored to individual circumstances. This may be because of unique regulatory and security requirements that strictly stipulate where data is stored and processed (eg, the public sector).

Perhaps more significantly, it could be because IT can be used in innovative ways to provide a competitive edge. The big banks, for example, will put a lot of effort into running their proprietary risk models quicker than their rivals – even down to developing their own hardware. In these instances adopting a utility computing model and migrating on to processors and storage somewhere or other in the world is not going to be viable.

The second step is to acknowledge that there may well be cases where legacy information systems are working pretty well. Typically, this is IT that was installed around five years ago (and we have seen 15-year-old systems that continue to work well). All the painful deployment wrinkles have been ironed out and if you take care of the underlying hardware, the systems more or less run themselves. Why bother replacing when the cost and pain of moving the system to a new utility computing platform is not going to achieve the payback within a sensible timescale?

The third and final step is to identify where there really are areas that IT can be sensibly commoditised and handed over to an IT utility service provider.

These are likely to be areas that do not have an unusual business process, that are not sensitive to user response times, where there are not legal implications for data storage, and where availability is not super critical (Amazon's much publicised "Elastic Compute Cloud" gives 99.9 per cent availability – which translates into more than a minute of downtime a day).
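
The downtime figure is simple arithmetic. A minimal sketch in Python, assuming the availability percentage applies uniformly around the clock, shows how an SLA number converts into permitted downtime:

    # Convert an availability percentage into permitted downtime.
    # Assumption: the SLA is measured over a continuous 24x7 period.
    MINUTES_PER_DAY = 24 * 60
    HOURS_PER_YEAR = 24 * 365

    for availability in (99.0, 99.9, 99.99):
        down_fraction = 1 - availability / 100
        print(f"{availability}% uptime -> "
              f"{down_fraction * MINUTES_PER_DAY:.1f} minutes/day, "
              f"{down_fraction * HOURS_PER_YEAR:.1f} hours/year")

At 99.9 per cent that is about 1.4 minutes a day, or close to nine hours a year, which is where the "more than a minute of downtime a day" figure comes from.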

We can also say, from PA's own experience of using the Second Life platform, that businesses should make sure any scheduled downtimes – designed to minimise the impact on Californian business hours – do not disrupt them too much.

Given the above, you may well conclude that for now you can use utility computing to do some one-off business data analysis that requires a burst of server and CPU resource, or you may decide that your sales team can easily be supported by the Salesforce.com service without risk to the business.

Being a cautious early adopter is not a bad place to be with utility computing. Despite the rebranding exercises and the continuing hype, this time around there may well be some benefits available, even when the real world constraints are considered.

However, it would be a very foolish CIO who bet the business on a major shift to the utility computing model. That would mean ignoring why IT still does matter to the business, and will still do so in five years, when we are calling utility computing something else again.

Alastair McAulay is a senior IT consultant at PA Consulting Group

[May 31, 2008] Gartner Identifies Top Ten Disruptive Technologies for 2008 to 2012 eHomeUpgrade

Gartner has deteriorated to the level where it has become a joke...

Speaking at the Gartner Emerging Trends and Technologies Roadshow in Melbourne today, Gartner Fellow David Cearley said that business IT applications will start to mirror the features found in popular consumer social software, such as Facebook and MySpace, as organisations look to improve employee collaboration and harness the community feedback of customers.

"Social software provides a platform that encourages participation and feedback from employees and customers alike," he said. "The added value for businesses is being able to collect this feedback into a single point that reflects collective attitudes, which can help shape a business strategy."

Multicore processors are expanding the horizons of what's possible with software, but single-threaded applications won't be able to take advantage of their power, Cearley said. Enterprises should therefore "perform an audit to identify applications that will need remediation to continue to meet service-level requirements in the multicore era."

By 2010, Gartner predicts that web mashups, which mix content from publicly available sources, will be the dominant model (80 percent) for the creation of new enterprise applications.

"Because mashups can be created quickly and easily, they create possibilities for a new class of short-term or disposable applications that would not normally attract development dollars," said Mr Cearley. "The ability to combine information into a common dashboard or visualise it using geo-location or mapping software is extremely powerful."

According to Gartner, within the next five years, information will be presented via new user interfaces such as organic light-emitting displays, digital paper and billboards, holographic and 3D imaging and smart fabric.

By 2010, it will cost less than US$1 to add a three-axis accelerometer – which allows a device, such as Nintendo's Wii controller, to sense when and how it is being moved – to a piece of electronic equipment. "Acceleration and attitude (tilt) can be combined with technologies such as wireless to perform functions such as 'touch to exchange business cards,'" said Mr Cearley.

According to Mr Cearley, Chief Information Officers (CIOs) who see their jobs as "keeping the data centre running, business continuity planning and finding new technology toys to show to people" will not survive. Instead, they will have to think beyond conventional constraints in order to identify the technologies that might be in widespread use a few years from now.

Gartner recommends that CIOs establish a formal mechanism for evaluating emerging trends and technologies, set up virtual teams of their best staff, and give them time to spend researching new ideas and innovations, especially those that are being driven by consumer and Web 2.0 technologies.

"The CIO then needs to act as a conduit from the business to the technology. He or she needs to see how it might be possible to use these technologies to solve a problem the business has identified," Mr Cearley said.

Gartner's top 10 disruptive technologies 2008-2012:

Platformonomics - Book Review: The Big Switch

Sunday, March 02, 2008

Nick Carr made his name with the provocative Harvard Business Review article "IT Doesn't Matter" (free version here), its expansion into a less definitively titled book Does IT Matter? and his generally erudite blog. The charge of irrelevance hit the industry hard and elicited mostly incoherent and ineffective rebuttals (e.g. "hogwash"), which hampered real discussion of Carr's argument.

I have gently mocked his thesis previously but found it a mix of the obvious (yes, things get commoditized over time, so you focus on the top of the stack and of course further commoditize the rest of the stack) and the ridiculous (IT had apparently previously been a source of everlasting strategic differentiation, but with the democratization of computing making technology widely available, we should write off the industry in its entirety). It is like arguing that since everyone has a brain, don't bother thinking...

Carr has a new book, The Big Switch: Rewiring the World, From Edison to Google, in which he contemplates the future of computing and speculates on the broader societal impact of that future. The book is lucid, well-written and uses lots of historical examples to make the narrative and arguments come alive. The first half of the book looks back at the evolution of the electrical industry and argues the computing industry will follow the same path. The latter half offers up the social, economic and cultural consequences of the shift, again using electrification as an example of how new technologies have secondary and unforeseen effects. Carr is less than excited about the consequences of the technology path he believes is inevitable -- no one will mistake him for an Internet optimist.

Back in the 19th century, companies generated their own power locally, whether through water, steam or early electrical generation. The advent of alternating current meant power could be generated remotely and transmitted afar, allowing companies to get out of the power business and buy electricity from the new electrical utilities.

Carr tells the story of Thomas Edison and his former clerk Samuel Insull. Edison, with his bet on direct current, which didn't lend itself to long-distance transmission, focused on small-scale generators that ran "on-premise". His model was to sell every business the equipment to generate its own electricity. Insull predicted the rise of the electrical utility, foresaw it would eclipse the equipment business and left Edison to join what became Commonwealth Edison. (Empires of Light is a great account of the battle between Edison and direct current versus Tesla and Westinghouse, who championed alternating current.)

By offering electricity to multiple customers, utilities could balance demand and reap economies of scale that drove a virtuous cycle, allowing them to drive down the cost of power and thereby attract even more customers. Their strategy was predicated on maximizing generator utilization and the standardization of electrical current. Companies that outsourced their power generation to utilities no longer had to worry about generating their own electricity, reducing cost, staff, technology risk and management distraction.

Turning towards computing, Carr reprises his "IT Doesn't Matter" death knell: IT is an infrastructural commodity that every company has access to, so there is no differentiation available, which means it is a dead cost. He recounts the history of computing, showing a particular fondness for the punch card, and excoriates the industry for cost, complexity and waste. Siebel is the chief punching bag (while deservedly so, it is an easy target).

His future trajectory for the industry has the Internet playing the role of alternating current, allowing computing to be performed remotely which in turn enables a new breed of computing utilities (with Amazon Web Services, Google and Salesforce as early poster children). The end result is companies no longer have to run their own complex computing operations. He calls this new era of computing the "utility age" and states "the future of computing belongs to the new utilitarians".

Enterprise computing vendors who sell "on-premise" solutions will be marginalized like Edison, unless they can reinvent themselves (as Edison's company ultimately did, shifting both technology and customer allegiance - they're still around today, a little outfit called General Electric). Carr dwells on Microsoft's recent embrace of cloud computing, but questions whether the company can navigate the difficult transition of embracing a new model while continuing to harvest profits from the old model.

Is the Big Switch Big or Not?

I have two critiques of the first half of the book. The first is mild schizophrenia. The Big Switch is -- wait for it -- as follows:

"In the years ahead, more and more of the information processing tasks we rely on, at home and at work, will be handled by big data centers located out on the Internet."

Wow. Gather now at the knee of the S-curve to learn what the future holds. Perhaps he is aiming the book at a more general audience, but with over a billion people regularly accessing the Internet, there are an awful lot of people who have already made the "big switch". He does some hand-waving about broadband penetration to explain why the book isn't over a decade late, with no mention of the failure of the late 20th century's application service providers.

Carr can't quite decide whether the big switch to his utility age is a revolution or not. He equivocates about whether a wave of creative destruction is crashing down today or if it will take decades to play out. He also qualifies the move to the cloud and how far it will go with suggestions that the future may actually consist of cloud-based services working in conjunction with local computers in corporate data centers and/or local PCs. This qualification I think stems from his general tendency to paint everything with a very broad brush. In practice, there are many segments and technologies, each with their own dynamics. He also plays fast and loose with topology, enlisting highly distributed examples to support a centralized thesis.

The Fallacy of the Perfect Analogy

My second critique is that the book turns on the idea that computing is basically similar enough to electricity that it will inexorably follow the same path. While there are similarities, it is a mistake to assume they are alike in every aspect. There are enough differences that blind adherence to an analogy is dangerous:

So while the book gets the broad trend to more computing in the cloud right, Carr's extended analogy obscures a lot of the differences and subtleties that will make or break cloud computing endeavors. Between the caveats and the broad definitions, there is a lot of leeway in his technical vision (admittedly the mark of a savvy forecaster). Victory will go to those who best exploit both the cloud and the edge of the network. Carr's own examples -- Napster, Second Life and the CERN Grid -- make this case, even if he either misses their distributed nature or chooses to ignore it.

Utility, Not Utopia

The second half of the book focuses on the broader social and economic consequences of the move to utility computing. It is the bolder and more thought provoking part of the book.

Carr again begins by looking back through the lens of electrification. He succinctly credits electrification with ushering in the modern corporation, unleashing a wave of industrial creative destruction, improving working conditions by displacing craftsmanship for the modern assembly line and the gospel of Frederick W. Taylor, improving productivity which begat a broad middle class and white collar jobs to coordinate more complex organizations, the broadening of public education, expanding demand for entertainment, and enabling the suburbs (cheap cars relied on cheap electrical power to power the assembly line).

He also notes that the early years of electrification were accompanied by great optimism and even utopianism about what the future would hold. Carr, however, leaves his rose-colored glasses at home as he ponders his utility future:

"Although as we saw with electrification, optimism is a natural response to the arrival of a powerful and mysterious new technology, it can blind us to more troubling portents.... As we will see, there is reason to believe that our cybernetic meadow may be something less than a new Eden."

Carr basically finds his utility future dystopian. He spends the remainder of the book worrying about:

The Hollowing Out of the Workforce - the utility future has little need for workers, which reverses the positive virtuous cycle of employment driven by electrification. He points to increasing returns businesses like YouTube, Skype, craigslist, PlentyofFish and giant data centers with small staffs leading the way "from the many to the few". They are free riders on a fiber backbone paid for by others and are ushering in a world where "people aren't necessary". "Social production" (aka "user-generated content") is simply digital sharecropping and reduces the need for workers further. Unlike electrification which "played a decisive role" in building an egalitarian society, the utility age "may concentrate wealth in the hands of a small number of individuals, eroding the middle class and widening the divide between haves and have-nots".

The Decline of Mainstream Media - while electrification "hastened the expansion of America's mass culture" and gave rise to mass media, the Internet is undermining the media with its explosion of voices, and "some of the most cherished creative works may not survive the transition to the Web's teeming bazaar". Newspapers are of course the foremost example. The shift from scarcity to abundance of content is not a good thing to Carr, and "the economic efficiency that would be welcomed in most markets may have less salutary effects when applied to the building blocks of culture." The result is a decline of media and shared culture, the polarization of virtual communities (exacerbated by personalization engines), "social impoverishment and social fragmentation".

Bad Guys - the Internet in the utility age promises to be a magnet for bad guys, including criminals, terrorists, botnet operators, spammers, perpetrators of denial-of-service attacks and fiber-optic-cable-snapping earthquakes. The underlying infrastructure is fragile and vulnerable yet critical to the global economy. This was the least forward-looking of his pessimistic projections. He mostly reiterates known issues. About the only new claim about the future was that pressure to protect the Internet from "misuse and abuse" will stress the sovereignty of nations as utility functions migrate to countries with the lowest operating costs. He is surprisingly silent on whether we should expect the heavily regulated nature of electrical utilities to also apply to computing in the future.

Privacy and the Control Revolution - don't even think about having any privacy in the utility age:

"Few of us are aware of the extent to which we've disclosed details about our identities and lives or the way those details can be mined from search logs or other databases and linked back to us."

Carr believes computing always has and always will be fundamentally a tool of oppression for the Man, the computing revolution is really just part of a broader "Control Revolution" and the empowerment of the personal computer will be "short-lived" as the Man inevitably reasserts control:

"The sense of the Web as personally "empowering"...is almost universal. ... It's a stirring thought, but like most myths its at best a half-truth and at worst a fantasy. Computer systems in general and the Internet in particular put enormous power into the hands of individuals, but they put even greater power into the hands of companies, governments, and other institutions whose business it is to control individuals. Computer systems are not at their core technologies of emancipation. They are technologies of control. They were designed as tools for monitoring and influencing human behavior, for controlling what people do and how they do it. As we spend more time online, filling databases with the details of our lives and desires, software programs will grow ever more capable of discovering and exploiting subtle patterns in our behavior. The people or organizations using the programs will be able to discern what we want, what motivates us, and how we're likely to react to various stimuli. They will, to use a cliche that happens in this case to be true, know more about us than we know about ourselves."

Carr is particularly full of disdain for the PC as a device but is conflicted about personal computing. He readily acknowledges the empowering impact of personal computing, yet simultaneously promotes a dumb terminal future while lamenting the inevitable reassertion of control by the Man (somehow those seem related...).

He concludes on the cheery note that the utility future is no less than another front on "humanity's struggle for survival". Actually, I took that quote from the Gears of War 2 announcement, but it would not be out of place in Carr's conclusion. He fears the utility age may devalue quintessential human attributes, making us (even) more superficial, undermining the coherence of the family and relegating us to mere "hyperefficient data processors, as cogs in an intellectual machine whose workings and ends are beyond us". Bummer, dude.

The second half of The Big Switch is kind of a dour read and the utility future is boldly painted with a Luddite, elitist and generally defeatist brush:

"...we may question the technological imperative and even withstand it, but such acts will always be lonely and in the end futile."

In a book full of references to big thinkers, from Jean-Jacques Rousseau to Alexander Solzhenitsyn, Ned Ludd does not merit a mention, even though the Luddite fear of automation hollowing out the workforce is repeated almost verbatim. He doesn't acknowledge the parallel or make a case for why the Luddite fears are more warranted now, despite failing to come to pass in the Industrial Revolution.

And while he bemoans the rise of "a new digital elite", the shifts in media, and survival of our "most cherished" work, he manages to come across as an elitist himself (not that there is anything wrong with being an elitist of course...). I'm just not sure the Brahmins get to decide what is and isn't worthy media.

It is hard to argue with his position on privacy (read No Place to Hide to shatter any lingering techno-optimism on this front -- large-scale databases go awry, period), but he doesn't make the case that the black helicopters of the Control Revolution are just over the horizon. Individual freedom is pretty much at an all-time high in world history and information technology gets at least some credit for that. Carr does admit technology is "dual use", but you won't find much on the positive uses in the book.

The Big Switch is well worth reading if you're thinking about the evolution to cloud computing. It provokes and stimulates as this long-winded review shows. Carr's technical foundation is shaky, but he is a good social critic and forecaster, and a great polemicist (and that is a compliment). My view is Carr's dystopian future is not inevitable, but averting it will take a conscious and proactive effort. If nothing else, the later part of the book is a call to arms for w...

Inside Architecture IT to Business I won't read your mind

In any relationship, it is dangerous for one side to "decide" what the other one wants. Marriage advisors say things like "Don't control others or make choices for them." Yet, I'd like to share a story of technologists doing exactly that. Think of this as a case study in IT screw-ups. (Caveat: this project was a couple of years back, before I joined Microsoft IT. I've changed the names.)

The journey begins

Business didn't know what they wanted.

"Business" in this case is a law firm. The attorneys are pretty tight-lipped about their clients, and don't normally share details of their cases. In the past, every attorney got their own "private folders" on the server that he or she must keep files in. Those folders are encrypted and backed up daily. A 'master shared folder' contained templates for contracts, agreements, filings, briefs, etc.

Of course, security is an issue. One of the attorneys had lost a laptop in an airport the previous year, and lost some client files with it. But security measures were also a burden: major cases involved creating special folders on the server, just for that client, that could be shared by more than one attorney.

None of this was particularly efficient. They knew that they wanted to improve things, but weren't sure how. Some ideas were to use a content management system, to put in a template-driven document creation system, and to allow electronic filing with local court jurisdictions. They didn't have much of an IT department. Just two 'helpdesk' guys with the ability to set up a PC or fix a network problem. No CIO either. Just the Managing Partner (MP).

To fix the problems, and bring everyone into the 21st century, the MP brought in consultants. He maintained some oversight, but he was first-and-foremost an attorney. He hired a well-recommended project manager and attended oversight meetings every other week.

Here come the geeks

The newly-minted IT team started documenting requirements in the form of use cases.

The use cases included things like "login to system" and "submit document". The IT team described a system and the business said "OK" and off they went. The system was written in .Net on the Microsoft platform, and used Microsoft Word for creating documents. They brought in Documentum for content management.

A year later, the new system was running. The law firm had spent over $1M for consulting fees, servers, software licenses, and modifications to their network. A new half-rack was running in the "server room" (a small inside room where the IT guys sat). Their energy costs had gone up (electricity, cooling) and they had hired a new guy to keep everything running. Everyone saw a fancy new user interface when they started their computers. What a success!

The managing partner then did something really interesting. He had just finished reading a book on business improvement, and decided to collect a little data. He wanted to show everyone what a great thing they had in their new system. He asked each of the firm's employees for a list of improvements that they had noticed: partners, associates, paralegals, secretaries, and even the receptionist.

He asked: Did the new system improve their lives? What problems were they having before? What problems were they having now? Did they get more freedom? More productivity?

The answer: no.

He was embarrassed, but he had told the partners that he was creating a report on the value of the IT work and so he would.

This is where I came in. He hired our company to put together the report.

Business Results: There were as many hassles as before. Setting up a new client took even longer to do. Partners and associates still stored their files on glorified 'private folders' (they were stored in a database now). There were new policy restrictions on putting files on a laptop, but many of the partners were ignoring them. The amount of time that people spent on the network had gone up, not down.

Things had become worse.

So what did they do wrong? What did we tell the Managing Partner?

The IT Team had started by describing use cases. They were nice little 'building blocks' of process that the business could compose in any way they wanted. But how did the business compose those activities? In the exact same way as before.

Nothing improved because no one had tried to improve anything. The direction had been "throw technology at problems and they go away," but they don't. You cannot solve a problem by introducing technology by itself. You have to understand the problem first. The technology was not wrong. The systems worked great, but they didn't solve measurable business problems.

The IT team should not have started with low-level use cases. That is an example of IT trying to read the minds of business people. IT was making choices for the business: "You need to do these things." No one asked what the measurable business problems were. No one started by listening.

They should have started with the business processes. How are new clients discovered? What steps or stages do cases go through? What are the things that can happen along the way? How can attorneys share knowledge and make each other more successful?

We explained that business processes describe "where we are and what we do." Therefore, operational improvement comes in the form of process improvements. These are different questions than they had asked: What should we be doing? How should we be doing it? Where should we be? What promises do we need to make to our clients, and how can our technology help us to keep these promises?

Business requirements for an IT solution cannot be finalized until these questions are asked and answered. Writing code before understanding the process requirements is foolish. Not because the code won't work, but because the code won't improve the business. All the unit tests in the world won't prove that the software was the right functionality to create.

Our suggestions

Here are the suggestions we gave. I don't know if the law firm actually did any of them or not. (I added one that I didn't know about five years ago, but I believe would be a good approach. I marked it as "new" below).

  1. Spend one month figuring out which parts of the new system were actually adding value. Look for product features in their new software they were not using and consider the value of turning them on. Their existing investment was not being well spent. Look for ways to cut costs if their infrastructure was too big. Roll back bits that weren't working. Basically, "pick the low-hanging fruit." We even asked the managing partner to consider undoing the entire thing if necessary (not that they needed to, but we wanted to shatter the idea that technology is good because it's technology). Don't spend a lot, but don't live with a loss of productivity.
  2. Put in an operational scorecard. Use the techniques for balanced scorecards described by Kaplan and Norton. Look for the Key Performance Indicators (KPI) and those factors that are critical to quality (CTQ). Look for measures that describe success. Start tracking them and reviewing them monthly with the partners.
  3. (new) Hire a consultant to help the organization understand their key business capabilities and map them to both their business strategies and their scorecard KPIs. This helps to focus effort. "If we improve the ability to share case information, we can reduce costs" or "If we improve the ability for attorneys to keep up to date on changes in agreements, we can improve our client satisfaction and perception of value."
  4. Get buy-in from the partners to focus on ONE area of improvement at a time. Have the entire team pick one area to focus on. Improvements in other areas can and should occur eventually, but all technical investment would go to that one area. Agreement is critical. Churn is an enemy.
  5. Hire a consultant to create a set of process maps for the identified area. Think things through from the perspective of the customer (client) and not the attorneys. Have a steering committee that sees a presentation every month about what the consultant has discovered and the recommendations he currently favors. That committee must provide feedback and course corrections.
  6. Only after a good plan exists for the future business process should they invest in technology, and only then, technology to solve a specific problem.

I hate to say it, but the real mistake was starting in the middle. They started with an IT-centric approach: write use cases and then write code. I love use cases. But they are not 'step one.' Step one is to figure out what needs to be improved. Otherwise, IT is being asked to read minds or, worse, to make decisions for their business partners.


Naked IT Author Ed Yourdon on the IT - business divide (includes podcast) IT Project Failures ZDNet.com

The Naked IT interview series talks with innovators about the evolving relationship between IT and business. Please listen to the audio podcast and enjoy the additional information included in this blog post.

In this segment, we meet Ed Yourdon, an internationally-recognized author and computer consultant who specializes in project management, software engineering methodologies, and Web 2.0 development. He has written 550 articles and 27 books, including Outsource: Competing in the Global Productivity Race, Byte Wars, Managing High-Intensity Internet Projects, Death March, Rise and Resurrection of the American Programmer, and Decline and Fall of the American Programmer. Ed's work spans 45 years, giving him a unique perspective on the computer industry.

Our conversation ranged across a number of important IT issues - including IT / business alignment, project failures, and changing of the guard in technology - which Ed analyzed and put into historical context.

On IT / business alignment:

If you start at the strategic level, [lack of alignment] occurs when systems are proposed and budgeted and justified and launched, either without any support at all from the business community that it ultimately should be serving, or without a full appreciation on the part of the business community about what the risks and the costs are going to be.

The question is: whether IT is building the kinds of systems the business needs, or whether they anticipate, and can work strategically, to help the business make the best possible use of IT. That problem has been around for 30 or 40 years.

I remember back in the early nineties when Computerworld did annual surveys of what the top ten IT problems were, and lack of business-IT alignment was usually number one or number two on the list.

On barriers to senior-level IT acceptance:

It's amazing today how many senior executives don't even read their own email. It's mind boggling, but these people are going to die off sooner or later.

As the older generation of marketing- and finance-oriented, computer-illiterate senior managers die off and retire, you'll gradually see a new generation coming in that is fully comfortable with the day-to-day activity and the strategic possibilities of IT, and who will be able to work more closely with CIOs.

The generation of people in their forties, whether or not they are marketing people or financial people, grew up with computers all through college, and are more likely to feel culturally compatible with the CIO.

On the gatekeeper role of IT:

Because IT is clearly so critical to the day to day operation of almost any large organization, IT has to serve as somewhat of a gatekeeper guarding the crown jewels, so to speak, so that they don't get damaged or hacked into, either by insiders or outsiders. That has become a more pervasive and annoying responsibility.

Part of the alignment problem we see when users get excited about new technologies is the notion that IT is preventing the users from getting their hands on these technologies themselves. That sets up a bunch of conflicts.

On losing the battle:

Try to persuade CIOs, much as we did 20 years ago when PCs first arrived on the scene, that if they think they are going to maintain exclusive control over these technologies, and restrict the way employees use them, they are likely to find it a losing battle.

IT departments might be better off trying to figure out how to work in a collaborative and participative fashion. I think otherwise they'll just be ignored and overrun much the way we saw in the PC era, when people quickly figured out that they weren't getting any support from IT departments still focusing on COBOL and mainframes. They went out and bought their own PCs, which caused a great deal of chaos and confusion that could have been avoided.

On IT project failures:

As an expert witness, I get called in because I'm a computer guy and the presumption is if the project failed it must have been a technical failure. However, 99.9% of the time it turns out to be project management 101. They didn't have any requirements, or they kept changing the requirements, or the subject matter experts, who should have been working with the vendor to help identify the requirements, were so busy trying to do their regular job that they didn't have time to even talk to the vendor, etc. This was true 30 years ago and it's still true today.

Most of the cases I've seen have not involved a whole lot of concern on the part of senior management, or even middle management, during the course of the project about potential technical failures. There's usually awareness of potential organizational failures, or management failures, or political failures, or whatever, which may or may not be discussed openly, or shared openly, with the business partners involved.

On doomed IT projects:

In many cases you find projects that are doomed from day one, not because of poor technical capabilities in the IT department, but because of these strategic misunderstandings or misalignments.

Senior management never really understood what the true cost was going to be, or the IT department never told them what the true cost was going to be, and senior management never really provided the organizational support that was going to be necessary to make it all work.

Podcast: Is Carr right? Does IT not matter? Gartner attendees respond - TalkBack on ZDNet

IT like cars
There have been many analogies between IT and autos. Nowadays, the type of car you have does not give a real advantage to a company, but woe to those who try to use a horse and buggy instead. IT is still in the early-1900s mode. Some can still run a business on foot, or without an auto. Those with autos do more business, but suffer from the growing pains of an industry.
Carr's words can be applied to ALL R&D. So why should anybody do any real R&D? M$ gets its best ideas by buying small companies. Do you want your company to be a LEADER? Do you want to be a follower, falling further and further behind?

Posted by: SirLanse Posted on: 10/13/06

ramnet
In a lot of ways Carr is right on the money. Business has a love affair with technology because it often allows senior execs to hide behind non-performance in other areas of the business and to pretend that by spending on computing they are moving forward in the best direction. This view is not by default correct. A lot of big businesses overlook small things that could save them big $, and they invent more process, which of course justifies a bigger IT spend. The real question is why we need all this process and red tape for computers to trawl through. Someone is NOT asking the harder and more difficult questions. Sadly, many IT professionals protect their own jobs this way. Posted by: ramnet@... Posted on: 10/13/06
But that is not IT leadership ...
Sure, there have always been companies (like people) who cling to this or that as a panacea. That is not leadership. Being a leader is about understanding your business and taking risks so that business may make greater strides than the competition. Throwing dollars (or IT technology) at a problem instead of trying to understand it is useless.
Carr for President (lmao)
Let's see, he obviously does not have a clue, and he's a Harvard grad. Doesn't that sound like our own King George, who also believes that IT doesn't matter enough to even worry about losing all our IT jobs to competing nations? I say we push to get Nicholas Carr on the ballot for 2008, that way we can replace our current narrow minded twit with a new one. Carr and Cheney in 2008 - the clueless and the devil - now there's a great ticket!
Posted by: jonnjonnzdnet Posted on: 10/13/06
Posted by: mwagner@... Posted on: 10/13/06
It's always about the toolbox
Small businesses use IT like a carpenter would use your basic hammer & saw. You can get a tremendous amount done with just a very basic small toolset. Outsourcing kills IT because it means the businesses never own their own critical tools! Large business also doesn't spend wisely on its tools. You see enormous budgets for the glitz and hype of chrome hammers and leather-grip saws instead of the power drill which would have really helped do the job.
Posted by: sjbjava Posted on: 10/13/06
Carr irrelevant
If a company needs someone like Carr to tell them whether IT is productive for their company or not, then they should probably not have IT...whether it IS productive or not.

When the dust settles Carr will admit that all he's saying is the obvious: that some companies spend more on IT than they should. He's only making headlines because there are too many IT journalists with not enough imagination to create real news.

Posted by: Langalibalene Posted on: 10/13/06
Well, that's one dead straw man, for sure
Based on the article, it would appear that Carr's speciality is creating bogus "straw man" arguments, knocking them down, and then telling everybody what a clever fellow he is.

I doubt that very many companies have actually been led to believe that "buying technology can make them more productive" absent a clear business need for that technology.

Yes, buying a technology product just because it's the latest or most hyped is stupid, but how often does that actually happen? In larger firms in particular, the reality is more the opposite: the IT folks have to make a very good business case for what they want to buy, or the CFO won't sign off and (if s/he's doing his/her job properly) neither will the CIO.

Carr is clearly a good salesman - look how successful he's been at promoting himself - but I have yet to see him say anything that isn't either grossly oversimplified ("stop buying technology") or crashingly obvious ("The innovator is going to pay a lot more than those who follow in the innovator's wake" - well, DUH!).

Technology news vendors, ZDNet included, aren't helping any by encouraging this kind of hucksterism.

Posted by: the_doge Posted on: 10/13/06
IT - it's a tool and by itself cannot be relevant anyway...
Like a tool in someone's "golf bag," IT matters if you are skilled in wielding it, know when to use it and use it with skill.

This overall discussion raised by Carr is not relevant since we are discussing the relevance of a "hammer" when the relevance comes from knowing when and how to apply the "hammer."

I would presume his name and brand association might provide him the right to make the relationship, but I don't see it and I think it might even hint at a lack of both an understanding of and skill with using the "tool" called IT.

I could argue that money is not relevant since it is the understanding of and the use and the skill at using money that can be relevant to differentiation.

So would Carr agree that money is therefore also not relevant?

I would be curious...?

Posted by: beardeddiver Posted on: 10/13/06
Carr's idea of what "IT" is
IT depends on your point of view and the problems you have to solve with what you have (and can afford). If companies only came in one size and only had one problem, they would all use IT in almost the same way. Now, smaller companies have different problems and different resources, and may be forced to use the IT they can afford in ways the manufacturer had never envisioned. That kind of innovation never happens, or never has a significant impact, in large companies (which tend to use products exactly the way they are intended). Because everyone has a chance to see what others are doing with a given product (makers of IT as well), this tends to make products that can do things better and faster (and cheaper, once the big companies see the new ways printed on the IT boxes they buy in mass at the IT store).
Posted by: liverwort@... Posted on: 10/14/06
Small firms and technology...
I work for small businesses and dispute Carr's statement that small businesses don't use technology to the level that large businesses do. My experience is the opposite - not only do they use it more, but generally they have much better, more cutting-edge technology (percentage wise). This allows the small business to compete in the marketplace by keeping their organizations from becoming bloated with unnecessary payroll costs - the technology improves efficiency. In the larger businesses I have worked with, the technology is generally marginally functional and because of the high cost of large scale replacement - mostly outdated - replaced only when it becomes non-functional.
Posted by: ladyirol Posted on: 10/15/06
Carr wants us to excel in mediocrity! The choice is ours!
The way IT has been deployed, managed, and justified to date is a good place to start evaluating Carr's thesis. We seem to be in a vortex of IT ignorance, using it to automate prevailing inefficiencies - no wonder many CIOs feel IT does not matter. It's like asking the fox for a hen count! Treat IT like a business and do proper due diligence before investing in IT. Use it to create a center of excellence for business productivity, not to excel in mediocrity.
Posted by: ajitorsarah@... Posted on: 10/14/06

[May 26, 2008] US offshore outsourcing of R&D accommodating firm and national competitiveness perspectives Innovation Management, Policy, & Practice Find Articles at BNET.com

Innovation: Management, Policy, & Practice, Oct, 2005 by Thomas A. Hemphill
IMPACTS ON US GLOBAL COMPETITIVENESS

There are two primary negative competitive impacts that can emerge as a result of widespread R&D outsourcing. First, fewer job opportunities and downward pressure on wages will occur as greater numbers of scientific and engineering jobs are shifted to lower cost, overseas locations; consequently, these employment conditions are likely to discourage many of America's best and brightest students from pursuing careers in science and engineering (IEEE-USA, 2004). This could eventually weaken US leadership (i.e., the 'brain trust') in technology and innovation, leading to serious repercussions for both national security and economic competitiveness. Second, an issue that has been referred to earlier is the loss of crucial intellectual property to overseas competitors as a result of unidirectional technology transfer. Inadvertent knowledge transfers, or employees consciously transferring trade secrets to competitors (encouraged because of difficulties in enforcing foreign civil law covering confidentiality clauses in employee contracts), adversely impact US global competitiveness.

But offshore outsourcing of R&D does have potentially positive impacts on US global competitiveness. R&D collaboration involving foreign-based firms is a central part of scientific research and increases in importance as research activities grow in expense and complexity (Austin, Hills and Lim, 2003). Outsourcing can be interpreted as a form of collaboration that fosters both domestic and foreign R&D capabilities. Strategic technology alliances (which serve mutual interests) can open doors toward innovation partnerships between US and foreign corporations, governments, and academia. In addition, for US corporations there is a need to build pools of highly skilled scientific and engineering talent to replace the American intellectual capital which will be retiring over the next few years. These firms face the sobering reality that by 2010 there will be 7 million fewer working Americans, ages 25 to 45, than there were in 2000 (Malachuk, 2004).

PROPOSED BUSINESS AND PUBLIC POLICY MODEL

Offshore outsourcing of R&D, while confronted with many organizational and political challenges, is impossible for American firms to ignore as a source of sustainable competitive advantage. For American companies, capturing the valuable insights generated from global R&D requires a well-coordinated business strategy. Henry Chesbrough (2003) proposes an 'Open Innovation' paradigm for managing industrial R&D that:

... assumes that firms can and should use external ideas, and internal ideas, and internal and external paths to market, as the firms look to advance their technology. Open Innovation combines internal and external ideas into architectures and systems whose requirements are defined by a business model. The business model utilizes both external and internal ideas to create value, while defining internal mechanisms to claim some portion of that value.

[May 18, 2008] Amazon's hosted storage service hits bump - CNET News.com

Amazon's hosted storage services suffered technical glitches last week, mishaps that caused some early users to think twice about using the company's nascent Web services.

Last Thursday, customers of Amazon's Simple Storage Service (S3) started a discussion thread about problems in the service. Users of the service, which lets Web site owners contract with Amazon to store data, complained of slow service and error messages.

By Sunday, a representative from the Amazon Web Services business unit offered an explanation for the service degradation, which had been resolved. The representative blamed the problem on faulty hardware installed during an upgrade.

"The Amazon S3 team has been adding large amounts of hardware over the past several weeks in order to meet and stay ahead of high and rapidly increasing demand. Unfortunately, our most recent hardware order contained several substandard machines," the representative wrote

Customers who responded to the Amazon note appeared gratified to have an explanation. But before the problem was resolved, people voiced frustration with the drop in service levels. The storage service is supposed to operate 99.99 percent of the time.

"We've switched to using s3 in production and we have millions of files on their servers now. We're paying a LOT of money for this service and need it to be stable and reliable. I'm not looking forward to moving everything off s3 to something else, but if it's not reliable, that's what we'll need to do," one customer said before the problem was addressed and resolved.

The episode points to one of the pitfalls of utility computing, where service providers offer hosted computing services over the Internet.

Hosted application provider Salesforce.com, for example, has suffered a few high-profile outages. The company has set up a program to notify customers of performance problems.

Amazon is building up its Web Services product line with the hope of establishing a large-scale business.

The problem with S3 last week is not the first time customers have complained of service issues.

In December last year, S3 had other performance problems, which appeared to have been resolved within a day.

[May 17, 2008] Demystifying Clouds by John Willis

February 5, 2008 | www.johnmwillis.com

... ... ...

The Myths

Cloud computing will eliminate the need for IT personnel.

Using my 30 years of experience in IT as empirical proof, I am going to go out on a limb and suggest that this is a false prophecy. One of my first big projects in IT was in the 1980s, when I was tasked to implement "Computer Automated Operations." Everyone was certain that all computer operators would lose their jobs. In fact, one company I talked to said that its operators were thinking of starting a union to prevent automated operations. The fact was that no one really lost his or her job. The good computer operators became analysts, and the bad ones became tape operators.

There will be only five supercomputer utility-like companies in the future.

Again, I will rely on empirical data. I have been buying automobiles for as long as I have been grinding IT, and all one has to do is look at the automotive industry's history as a template to falsify this myth. Some clever person will always be in a back room somewhere with an idea for doing it better, faster, cheaper, and cleaner. In all likelihood, there will probably be a smaller number of mega-centers, but it is most likely that they will be joined by a massive eco-grid of small-to-medium players interconnecting various cloud services.

The Facts

Since cloud computing is in a definite hype cycle, everyone is trying to catch the wave (myself included). Therefore, a lot of things you will see will have cloud annotations. Why not? When something is not clearly defined and mostly misunderstood, it becomes one of God's great gifts to marketers. I remember that, in the early days of IBM SOA talk, IBM was calling everything Tivoli an SOA. So I did a presentation at a Tivoli conference called "Explaining the 'S' in SOA and BSM." Unfortunately, one of IBM's lead SOA architects, not Tivoli and not a marketer, was in my presentation and tore me a new one. I was playing their game, and I forgot that it was "Their Game." Therefore, in this article I will try to minimize the hype and lay down some markers on the current variations of all things considered clouds.

Level 0

As flour is to a cookie, virtualization is to a cloud. People are always asking me (their first mistake) what the difference is between clouds and the "Grid" hype of the 1990s. My pat answer is "virtualization."

Virtualization is the secret sauce of a cloud. Like I said earlier, I am by no means an expert on cloud computing, but every cloud system that I have researched includes some form of a hypervisor. IMHO, virtualization is the differentiator between the old "Grid" computing and the new "Cloud" computing. Therefore, my "Level 0" definition for cloud providers is anyone who is piggybacking, intentionally or unintentionally, on cloud computing by means of virtualization. The first company that comes to mind is Rackspace, which recently announced that it is going to add virtual server hosting to its service offering. In fact, its new offering will allow a company to move its current in-house VMware servers to a Rackspace glass house.

A number of small players are producing some rain in this space. A quick search on Google will yield monthly plans as low as $7 per month for XEN VPS hosting. It's only a matter of time before cloned Amazon EC2 providers start pronouncing themselves "Cloud Computing" because they host XEN services in their own glass house. These services will all be terrific offerings and will probably reduce costs, but they will not quite be clouds, leaving them, alas, at "Level 0."

Level 1

My definition of "Level 1" cloud players are what I call niche players. "Level 1″ actually has several sub-categories.

Service Providers

Level 1 service provider offerings are usually on-ramp implementations relying on Level 2 or Level 3 backbone providers. For example, a company called RightScale un-mangles Amazon's EC2 and S3 APIs and provides a dashboard and front-end hosting service for Amazon's Web Services (AWS) offering (i.e., EC2 and S3). AWS is what I consider a "Level 2" offering, which I will discuss later in this article.

... ... ...

Pure Play Application Specific

This is where I will admit it gets a little "cloudy." Seriously, companies such as Box.Net and EMC's latest implementation with Mozy are appearing as SaaS storage plays and piling on the cloud wagon. I am almost certain that companies like SalesForce.com will be confused with or will legitimately become cloud plays. Probably the best example of a "Level 1 Pure Play" is EnterpriseDB's latest announcement of running its implementation of PostgreSQL on Amazon's EC2. There are also a few rumors of services that are trying to run MySQL on EC2, but most experts agree that this is a challenge on the EC2/S3 architecture. It will be interesting to see how Sun's cloud formations flow in regard to its recent acquisition of MySQL.

Pure Play Technology

Whenever you hear the terms MapReduce, Hadoop, and Google File System in regard to cloud computing, they primarily refer to "Cloud Storage" and the processing of large data sets. Cloud Storage relies on an array of virtual servers and programming techniques based on parallel computing. If things like "S(P) = P - α(P - 1)" get you excited, then I suggest that you have a party here. Otherwise, I am not going anywhere near there. I will, however, try to take a crack at explaining MapReduce, Hadoop, and the Google File System. It is no wonder that the boys at Google started all of this back in 2004 with a paper describing a programming model called MapReduce. MapReduce is used for processing and generating large data sets across a number of distributed systems. In simplistic terms, MapReduce is made up of two functions: one maps Key/Value pairs, and another reduces and generates output values for the key. The original Google paper, "MapReduce: Simplified Data Processing on Large Clusters," uses a simple example of GREP-style matching of URLs and outputting URL counts. Those Google boys and girls have come a long way since 2004. Certainly, it is much more complicated than I have described. The real value in MapReduce is its ability to break up the code into many small distributed computations.
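
As a concrete (and deliberately toy) illustration of the map/reduce split just described, here is a single-process Python sketch of the URL-counting example from the Google paper. The function names and sample log below are illustrative, not Google's; a real framework such as Hadoop would shard the input across machines, shuffle the intermediate pairs by key, and run the map and reduce steps in parallel.

    from collections import defaultdict

    def map_phase(line):
        """Emit (key, value) pairs: here, (url, 1) for every URL seen in a log line."""
        for token in line.split():
            if token.startswith("http"):
                yield (token, 1)

    def reduce_phase(key, values):
        """Collapse all values for one key into a single output value."""
        return (key, sum(values))

    def mapreduce(lines):
        grouped = defaultdict(list)
        for line in lines:                        # "map" step (distributable)
            for key, value in map_phase(line):
                grouped[key].append(value)        # "shuffle": group intermediate pairs by key
        return [reduce_phase(k, v) for k, v in grouped.items()]   # "reduce" step

    log = [
        "GET http://example.com/a 200",
        "GET http://example.com/a 200",
        "GET http://example.com/b 404",
    ]
    print(mapreduce(log))   # [('http://example.com/a', 2), ('http://example.com/b', 1)]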

Next in this little historical adventure, a gentleman named Doug Cutting implemented MapReduce in the Apache Lucene project, which later evolved into what is now commonly known as Hadoop. Hadoop is an open source Java-based framework that implements MapReduce using a special file system called the Hadoop Distributed File System (HDFS). The relationship between HDFS and the Google File System (GFS) is not exactly clear, but I do know that HDFS is open and that it is based on the concepts of GFS, which is proprietary and most likely very specific to Google's voracious appetite for crunching data. The bottom line is that a technology like Hadoop and all its sub-components allows IT operations to process millions of bytes of data per day (only kidding, I couldn't resist a quick Dr. Evil joke here: "Dr. Evil: I demand the sum… OF 1 MILLION DOLLARS"). Actually, what I meant to say is quintillions of bytes per day.

Most of the experts with whom I have talked say that Hadoop is really only a technology that companies like Google and Yahoo can use. I found, however, a very recent blog on how a RackSpace customer is using Hadoop to offer special services to its customers by processing massive amounts of mail server logs to reduce the wall time of service analytics. Now you're talking my language.

Level 2

Level 2 cloud providers are basically the backbone providers of the cloud providers. Amazon's AWS Elastic Compute Cloud (EC2) and Simple Storage Service (S3) are basically the leaders in this space at this time. My definition of a "Level 2" provider is a backbone hosting service that runs virtual images in a cloud of distributed computers. This delivery model supports one to thousands of virtual images on a vast array of typically commodity-based hardware. The key differentiator of a "Level 2" provider vs. a "Level 3" is that the "Level 2" cloud is made up of distinct single images and that they are not treated as a holistic grid the way a "Level 3" provider's are (more on this later). If I want to implement a load-balancer front end with multiple Apache servers and a MySQL server on EC2, I have to provide all the nasty glue to make that happen. The only difference between running on Amazon's EC2 and running one's own data center is the hardware. Mind you, that is a big difference, but, even with EC2, I still might need an army of system administrators to configure file system mounts, network configurations, and security parameters, among other things.

Amazon's EC2 is based on XEN images, and a customer of EC2 gets to create or pick from a template list of already created XEN images. There is a really nice Firefox extension for starting and stopping images at will. Still, if you want to do fancy things like on-demand or autonomic computing type stuff, you will have to use the AWS APIs or use a "Level 1" provider to do it for you. I currently run this web site on an EC2 cloud. I have no idea what the hardware is and basically only know that it is physically located somewhere in Oklahoma. At least that's what one of the SEO tools says. If I were to restart it, it might wind up in some other city; who cares? Clouds are convectious.

The biggest problem with Amazon's EC2 is that the disk storage is volatile, which means that, if the image goes offline, all of the data that were not part of the original XEN image will be lost. For example, this blog article will disappear if my image goes down. Of course, I take backups. One might say, "Hey, that is what S3 is for." Good luck. S3 is only for the most nimbus of folks. S3 is only a web services application to put and get buckets of raw, unformatted data. S3 is NOT a file system, and, even though some reasonable applications can make it look like a file system, it is still not a file system. For example, the tool Jungle Disk can be set up to mount an S3 bucket so it looks like a mounted file system. Under the covers, however, it is continually copying data to temporary space that looks like a mounted file system. We have found most (not all) of the open tools around S3 to be not-ready-for-production-type tools. Also, remember that EC2 and S3 are still listed as Beta applications. I list at the end of this article a number of good articles about the drawbacks of using EC2/S3 as a production RDBMS data store. Recently, an interesting point was made to me that a lot of how EC2/S3 works is really based on Amazon's legacy. Before Amazon offered EC2/S3 as a commercial service, the same infrastructure was more than likely used as its core e-tailer infrastructure. Although EC2/S3 might seem like an odd way to provide this kind of service, I am certain that it rocks as an infrastructure for selling books and CDs.
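
For readers who want to see what "put and get buckets of raw, unformatted data" looks like in code, the following is a minimal sketch using the modern boto3 SDK (which did not exist when the post above was written); the bucket and key names are placeholders. Note that there is no seek, append, or in-place update: objects are written and read whole, which is exactly why S3 is not a file system.

    import boto3

    s3 = boto3.client("s3")   # credentials are picked up from the environment/config

    # "put": store an object under a key in a bucket
    s3.put_object(Bucket="example-bucket", Key="blog/post-42.html",
                  Body=b"<html>...</html>")

    # "get": fetch the whole object back
    obj = s3.get_object(Bucket="example-bucket", Key="blog/post-42.html")
    data = obj["Body"].read()

    # "delete": remove the object
    s3.delete_object(Bucket="example-bucket", Key="blog/post-42.html")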

Another player in the "Level 2" game is Mosso. Mosso is a customer of Rackspace, and it has added some secret sauce to VMware to provide an EC2 look-alike. The good news is that its storage is permanent and that there is no S3 foolishness. It will be interesting to see if Mosso can compete with a proprietary hypervisor (VMware) vs. an open source hypervisor like XEN, which is used by EC2.

... ... ...

[May 15, 2008] Keystones and Rivets Cloud Computing

John Willis seeks to 'demystify' clouds and has received some interesting comments. James Urquhart is an advocate of cloud computing and thinks that, as with any disruptive change, some people are in denial about The Cloud. He has responded to some criticism of his opinions. Bob Lewis, one of Urquhart's "deniers," has written a few posts on the subject and offers a space for discussion of Nick Carr's arguments.

In order to discuss some of the issues surrounding The Cloud concept, I think it is important to place it in historical context. Looking at the Cloud's forerunners, and the problems they encountered, gives us the reference points to guide us through the challenges it needs to overcome before it is adopted.

In the past computers were clustered together to form a single larger computer. This was a technique common to the industry, and used by many IT departments. The technique allowed you to configure computers to talk with each other using specially designed protocols to balance the computational load across the machines. As a user, you didn't care about which CPU ran your program, and the cluster management software ensured that the "best" CPU at that time was used to run the code.

In the early 1990s Ian Foster and Carl Kesselman came up with a new concept of "The Grid". The analogy used was of the electricity grid, where users could plug into the grid and use a metered utility service. If companies don't have their own power stations, but rather access a third-party electricity supply, why can't the same apply to computing resources? Plug into a grid of computers and pay for what you use.

Grid computing expands the techniques of clustering where multiple independent clusters act like a grid due to their nature of not being located in a single domain.

A key to efficient cluster management was engineering where the data was held, known as "data residency". The computers in the cluster were usually physically connected to the disks holding the data, meaning that the CPUs could quickly perform I/O to fetch, process and output the data.

One of the hurdles that had to be jumped with the move from clustering to grid was data residency. Because of the distributed nature of the Grid the computational nodes could be situated anywhere in the world. It was fine having all that CPU power available, but the data on which the CPU performed its operations could be thousands of miles away, causing a delay (latency) between data fetch and execution. CPUs need to be fed and watered with different volumes of data depending on the tasks they are processing. Running a data intensive process with disparate data sources can create a bottleneck in the I/O, causing the CPU to run inefficiently, and affecting economic viability.

Storage management, security provisioning and data movement became the nuts to be cracked in order for grid to succeed. A toolkit, called Globus, was created to solve these issues, but the infrastructure hardware available still has not progressed to a level where true grid computing can be wholly achieved.

But, more important than these technical limitations, was the lack of business buy in. The nature of Grid/Cloud computing means a business has to migrate its applications and data to a third party solution. This creates huge barriers to the uptake.

In 2002 I had many long conversations with the European grid specialist for the leading vendor of grid solutions. He was tasked with gaining traction for the grid concept with the large financial institutions and, although his company had the computational resource needed to process the transactions from many banks, his company could not convince them to make the change.

Each financial institution needed to know that the grid company understood their business, not just the portfolio of applications they ran and the infrastructure they ran upon. This was critical to them. They needed to know that whoever supported their systems knew exactly what effect any change could potentially have on their shareholders.

The other bridge that had to be crossed was that of data security and confidentiality. For many businesses their data is the most sensitive, business critical thing they possess. To hand this over to a third party was simply not going to happen. Banks were happy to outsource part of their services, but wanted to be in control of the hardware and software - basically using the outsourcer as an agency for staff.

Traditionally, banks do not like to take risks. In recent years, as the market sector has consolidated and they have had to become more competitive, they have experimented outwith their usual lending practice, only to be bitten by sub-prime lending. Would they really risk moving to a totally outsourced IT solution under today's technological conditions?

Taking grid further into the service offering is "The Cloud". This takes the concepts of grid computing and wraps them up in a service offered by data centres. The most high-profile of the new "cloud" services is Amazon's S3 (Simple Storage Service) third-party storage solution. Amazon's solution provides developers with a web service to store data. Any amount of data can be read, written or deleted on a pay-per-use basis.

EMC plans to offer a rival data service. EMC's solution creates a global network of data centres, each with massive storage capabilities. They take the approach that no-one can afford to place all their data in one place, so data is distributed around the globe. Their cloud will monitor data usage, and it automatically shunts data around to load-balance data requests and internet traffic, self-tuning to react automatically to surges in demand.

However, the recent problems at Amazon S3, which suffered a "massive" outage at the end of last week, have only served to highlight the risks involved with adopting third-party solutions.

So is The Cloud a reality? In my opinion we're not yet there with the technology nor the economics required to make it all hang together.

In 2003 the late Jim Gray published a paper on Distributed Computing Economics:

Computing economics are changing. Today there is rough price parity between (1) one database access, (2) ten bytes of network traffic, (3) 100,000 instructions, (4) 10 bytes of disk storage, and (5) a megabyte of disk bandwidth. This has implications for how one structures Internet-scale distributed computing: one puts computing as close to the data as possible in order to avoid expensive network traffic.

The recurrent theme of this analysis is that "On Demand" computing is only economical for very cpu-intensive (100,000 instructions per byte or a cpu-day-per gigabyte of network traffic) applications. Pre-provisioned computing is likely to be more economical for most applications - especially data-intensive ones.

If telecom prices drop faster than Moore's law, the analysis fails. If telecom prices drop slower than Moore's law, the analysis becomes stronger.
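
A rough back-of-the-envelope reading of Gray's break-even test, using the ratio quoted above (about 100,000 instructions per byte of network traffic, i.e. roughly a CPU-day per gigabyte), might look like the sketch below; the two sample workloads are invented for illustration only.

    BREAK_EVEN_INSTRUCTIONS_PER_BYTE = 100_000   # ratio taken from the quoted 2003 analysis

    def worth_shipping(total_instructions, bytes_moved_over_wan):
        """Return the instructions-per-byte ratio and whether it clears Gray's break-even point."""
        ratio = total_instructions / bytes_moved_over_wan
        return ratio, ratio >= BREAK_EVEN_INSTRUCTIONS_PER_BYTE

    # Image rendering: a tiny scene description in, a huge amount of computation
    print(worth_shipping(total_instructions=5e15, bytes_moved_over_wan=10e6))
    # -> (500000000.0, True): CPU-heavy enough to justify moving the data to remote CPUs

    # Log scanning: one pass over a large file, a few hundred instructions per byte
    print(worth_shipping(total_instructions=5e11, bytes_moved_over_wan=1e9))
    # -> (500.0, False): cheaper to process the data where it already lives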

When Jim published this paper the fastest supercomputers were operating at a speed of 36 TFLOPS. A new Blue Gene/Q is planned for 2010-2012, which will operate at 10,000 TFLOPS, outstripping Moore's law by a factor of 10. Telecom prices have fallen and bandwidth has increased, but more slowly than processing power, leaving the economics worse than in 2003.

I'm sure that advances will appear over the coming years to bring us closer, but at the moment there are too many issues and costs with network traffic and data movements to allow it to happen for all but select processor intensive applications, such as image rendering and finite modelling.

There has been talk of a two tier internet where businesses pay for a particular Quality of Service, and this will almost certainly need to happen for The Cloud to become a reality. Internet infrastructure will need to be upgraded, newer faster technologies will need to be created to ensure data clouds speak to supercomputer clouds with the efficiency to keep the CPUs working. This will push the telecoms costs higher rather than bringing them in line with Moore's Law, making the economics less viable.

Then comes the problem of selling to the business. Many routine tasks which are not processor intensive and time critical are the most likely candidates to be migrated to cloud computing, yet these are the least economical to be transferred to that architecture. Recently we've seen the London Stock Exchange fail, undersea data cables cut in the Gulf, espionage in Lithuania and the failure of the most modern and well-known data farm at Amazon.

In such a climate it will require asking the business to take a leap of faith to find solid footing in the cloud for mission critical applications.

And that is never a good way to sell to the business.

[May 15, 2008] The problem with Google Apps Engine by Garett Rogers

If you are considering taking advantage of the new Google App Engine service from Google, I suggest you read this article first. There are some hidden facts that you should be aware of before making your decision to adopt this platform.

First, I'd like to thank Google for providing this service - it really is a great idea, and can be very useful for people or companies making web applications from scratch without needing to worry about infrastructure. It's also a very smart move on Google's part - host the world's applications, make money off their success, even if they aren't the owners of successful applications. Popular applications will likely exceed "free" limits, giving Google the green light to start charging money.

Another advantage for Google is the ease of acquiring companies if they are already using Google's infrastructure - simply make a deposit into their bank account and slap the Google logo on the interface.

But everything that sounds too good to be true usually is, right? In this case, I have to agree. When you choose to use Google App Engine, there are a couple of things you need to think long and hard about. If you go through this list and still think it will work for you, then it probably will. Go for it, it really is a great service after everything is said and done. It's very well thought out, and, as it promises, it will scale with the growth of your business.

Things you need to think about:

Like I said, I am really glad Google has put this service out there. It's a great tool people can and should use if they are comfortable with the risks. If you have any additional things you would like to point out for people who might be considering using Google App Engine, or if you want to debunk anything I have said in this article, please feel free to post them in the Talk Back.

Garett Rogers is employed as a programmer for iQmetrix, which specializes in retail management software for the cellular and electronics industry. See his full profile and disclosure of his industry affiliations.

Comments

RE: The problem with Google Apps Engine
UUMMM google this:

Some Canadian organizations are banning Google's Web applications because the Patriot Act allows the U.S. government to view personal data held by U.S. organizations, which violates Canada's privacy laws. When Lakehead University in Thunder Bay, Ontario, implemented some of Google's tools, it sparked a backlash among professors, who cannot transmit private data over the system.

[May 15, 2008] Internet Evolution - Tom Nolle - Is Cloud Computing a Benefit or Threat

We should care a lot about this, because the "giant cloud" vision of the Internet is probably not an alternate Internet architecture for all to play in, but rather an alternate Internet business model that favors only the giant players. It doesn't take a computer scientist and a VC to create a Website, but it darn sure takes a bunch of both to create a computing cloud. You could host your site and applications in somebody's cloud, but how they work and what it costs you are now under their control. The notion of a cloud-computing-based Internet is the notion of an Internet that's tilting toward the very large players.

[May 14, 2008] Computing Heads for the Clouds

Which companies are at the forefront of cloud computing?

Google's search engine and productivity applications are among the early products of efforts to locate processing power on vast banks of computer servers, rather than on desktop PCs. Microsoft has released online software called Windows Live for photo-sharing, file storage, and other applications served from new data centers. Yahoo has taken similar steps. IBM has devoted 200 researchers to its cloud computing project. And Amazon.com (AMZN) recently broadened access for software developers to its "Elastic Compute Cloud" service, which lets small software companies pay for processing power streamed from Amazon's data centers.

What's the market opportunity for this technology?

While estimates are hard to find, the potential uses are widespread. Rather than serve a relatively small group of highly skilled users, cloud computing aims to make supercomputing available to the masses. Reed, who's moving to Microsoft from the University of North Carolina, says the technology could be used to analyze conversations at meetings, then anticipate what data workers might need to view next, for example. Google, Microsoft, and others are also building online services designed to give consumers greater access to information to help manage their health care.

What are the biggest challenges these companies face?

The technical standards for connecting the various computer systems and pieces of software needed to make cloud computing work still aren't completely defined. That could slow progress on new products. U.S. broadband penetration still lags that of many countries in Europe and Asia, and without high-speed connections-especially wireless ones-cloud computing services won't be widely accessible. And storing large amounts of data about users' identity and preferences is likely to raise new concerns about privacy protection.

Haven't we heard about efforts like this before?

Every decade or so, the computer industry's pendulum swings between a preference for software that's centrally located and programs that instead reside on a user's personal machine. It's always a balancing act, but today's combination of high-speed networks, sophisticated PC graphics processors, and fast, inexpensive servers and disk storage has tilted engineers toward housing more computing in data centers. In the earlier part of this decade, researchers espoused a similar, centralized approach called "grid computing." But cloud computing projects are more powerful and crash-proof than grid systems developed even in recent years.

[May 13, 2008] Holway's HotViews "I think there is a world market for maybe five computers"

Paul Wallis said...

Richard,

As you say it is difficult to make predictions that remain accurate for a reasonable length of time, especially with technology changing so rapidly.

That said, I think we are some way from businesses putting mission-critical applications on "The Cloud".

One of the problems at the moment is economics. The late Jim Gray of Microsoft analysed "On Demand" computing a few years ago, and he pointed out that it is only economical for very CPU intensive operations.

Although telecom prices have fallen and bandwidth has increased, processing power has increased much more rapidly, which means that CPUs in the Cloud will run inefficiently at the moment except for applications like, for example, image rendering.

On my blog I've discussed this and some of the other issues surrounding The Cloud. I've tried to place it in historical context, looking at the Cloud's forerunners and the problems they encountered before being adopted.

You can find the article here.

You may also be interested in reading my thoughts about how "IT exists for one reason".

Your comments or feedback are very welcome.

A future look at data center power - Network World

Cloud computing, one approach Almaden researchers are pursuing, already has manifested itself in the Blue Cloud initiative IBM launched three months ago. Under the Blue Cloud architecture, enterprises can get Internet-like access to processing capacity from a large number of servers, physical and virtual. By not having to add machines locally, enterprises save on the cost of powering up and outfitting new computing facilities. Cloud computing also could help reduce ongoing energy consumption, as enterprises will not need to accommodate capacity they will not use all the time.

This spring IBM will take the concept further, offering BladeCenter servers with POWER and x86 processors, and service management software - a "'Cloud in a Box,' so to speak," says Dennis Quan, senior technical staff member at IBM's Silicon Valley Lab.

Cloud computing will mature in coming years as enterprises increasingly turn to IT to serve their markets, Quan says. Certainly Web 2.0 sites posting user-generated content will proliferate, driving the need for cloud computing. But demand will come from mainstream enterprises, too. "Financial services firms are saying, 'We've run out of space . . . so what can we do?'" he says. "They need to have a compute infrastructure that's scalable."

[May 12, 2008] Cloudy picture for cloud computing - Network World By Neal Weinberg

04/30/2008 Network World

Experts say enterprises are taking a wait-and-see approach

LAS VEGAS -- You can call it cloud computing. You can call it grid computing. You can call it on-demand computing. Just don't call it the next big thing – at least not yet.

Efforts by Web heavyweights such as Amazon and Google to entice companies into tapping into the power of their data centers are being slowed by a number of factors, according to Interop panelists.

Analyst Alistair Croll of BitCurrent said there are specific applications for which grid/cloud computing is perfect. For example, The New York Times recently rented Amazon's grid to create searchable PDFs of newspaper articles going back decades. The Times estimated that the project would have taken 14 years if the Times had used its own servers. Amazon did the entire project in one day, for $240.

But those examples are few and far between, as most companies are still in the "kicking the tires" stage when it comes to grid computing. Reuven Cohen, founder and CTO of Enomaly, said his customers are primarily using grid computing for research and development projects, rather than production applications.

Kirill Sheynkman, head of start-up Elastra, said the early adopters of grid computing are Web 2.0 start-ups who want to get up and running quickly and without a lot of capital expenses, independent software vendors that want to offer their applications in a software-as-a-service model, and enterprises who have selected specific applications for the cloud, such as salesforce automation or human resources.

"Equipment inside the corporate data center isn't going away anytime soon," added Sheynkman. Companies remain reluctant, for a variety of reasons, to trust the cloud for their mission-critical applications. Here are some of those reasons:

1. Data privacy. Many countries have specific laws that say data on citizens of that country must be kept inside that country. That's a problem in the cloud computing model, where the data could reside anywhere and the customer might not have any idea where, in a geographical sense, the data is.
2. Security. Companies are understandably concerned about the security implications of corporate data being housed in the cloud.
3. Licensing. The typical corporate software licensing model doesn't always translate well into the world of cloud computing, where one application might be running on untold numbers of servers.
4. Applications. In order for cloud computing to work, applications need to be written so that they can be broken up and the work divided among multiple servers. Not all applications are written that way, and companies are loath to rewrite their existing applications.
5. Interoperability. For example, Amazon has its EC2 Web service, Google has its cloud computing service for messaging and collaboration, but the two don't interoperate.
6. Compliance. What happens when the auditors want to certify that the company is complying with various regulations, and the application in question is running in the cloud? It's a problem that has yet to be addressed.
7. SLAs. It's one thing to entrust a third party to run your applications, but what happens when performance lags? The vendors offering these services need to offer service-level agreements.
8. Network monitoring. Another question that remains unanswered is how a company instruments its network and its applications in a cloud scenario. What types of network/application monitoring tools are required?

While many of these questions don't have answers yet, the panelists did agree that there is a great deal of interest in grid computing. Conventional wisdom would say that small-to-midsize businesses (SMB) would be most interested in being able to offload applications, but, in fact, it's the larger enterprises that are showing the most interest.

As Google's Rajen Sheth pointed out, when Google started its messaging and collaboration services, it thought SMBs would be the major customers. "Lots of large enterprises are showing interest," he said, "but it will take a while."

[May 11, 2008] Q&A Author Nicholas Carr on the Terrifying Future of Computing

Carr should probably think about a career as a humorist...
12.20.07 | Wired

Wired: What's left for PCs?

Carr: They're turning into network terminals.

Wired: Just like Sun Microsystems' old mantra, "The network is the computer"?

Carr: It's no coincidence that Google CEO Eric Schmidt cut his teeth there. Google is fulfilling the destiny that Sun sketched out.

The IT department is dead, author argues

NetworkWorld.com

Carr is best known for a provocative Harvard Business Review article entitled "Does IT Matter?"

... ... ...

With his new book, Carr is likely to engender even more wrath among CIOs and other IT pros.

"In the long run, the IT department is unlikely to survive, at least not in its familiar form," Carr writes. "It will have little left to do once the bulk of business computing shifts out of private data centers and into the cloud. Business units and even individual employees will be able to control the processing of information directly, without the need for legions of technical people."

Carr's rationale is that utility computing companies will replace corporate IT departments much as electric utilities replaced company-run power plants in the early 1900s.

Carr explains that factory owners originally operated their own power plants. But as electric utilities became more reliable and offered better economies of scale, companies stopped running their own electric generators and instead outsourced that critical function to electric utilities.

Carr predicts that the same shift will happen with utility computing. He admits that utility computing companies need to make improvements in security, reliability and efficiency. But he argues that the Internet, combined with computer hardware and software that has become commoditized, will enable the utility computing model to replace today's client/server model.

"It has always been understood that, in theory, computing power, like electric power, could be provided over a grid from large-scale utilities - and that such centralized dynamos would be able to operate much more efficiently and flexibly than scattered, private data centers," Carr writes.

Carr cites several drivers for the move to utility computing. One is that computers, storage systems, networking gear and most widely used applications have become commodities.

He says even IT professionals are indistinguishable from one company to the next. "Most perform routine maintenance chores - exactly the same tasks that their counterparts in other companies carry out," he says.

Carr points out that most data centers have excess capacity, with utilization ranging from 25% to 50%. Another driver to utility computing is the huge amount of electricity consumed by data centers, which can use 100 times more energy than other commercial office buildings.

"The replication of tens of thousands of independent data centers, all using similar hardware, running similar software, and employing similar kinds of workers, has imposed severe economic penalties on the economy," he writes. "It has led to the overbuilding of IT assets in every sector of the economy, dampening the productivity gains that can spring from computer automation."

Carr embraces Google as the leader in utility computing. He says Google runs the largest and most sophisticated data centers on the planet, and is using them to provide services such as Google Apps that compete directly with traditional client/server software from vendors such as Microsoft.

"If companies can rely on central stations like Google's to fulfill all or most of their computing requirements, they'll be able to slash the money they spend on their own hardware and software - and all the dollars saved are ones that would have gone into the coffers of Microsoft and the other tech giants," Carr says.

Other IT companies that Carr highlights in the book for their innovative approaches to utility computing are: Salesforce.com, which provides CRM software as a service; Amazon, which offers utility computing services called Simple Storage Service (S3) and Elastic Compute Cloud (EC2) with its excess capacity; Savvis, which is a leader in automating the deployment of IT; and 3Tera, which sells a software program called AppLogic that automates the creation and management of complex corporate systems.

... ... ...

Carr offers a grimmer future for IT professionals. He envisions a utility computing era where "managing an entire corporate computing operation would require just one person sitting at a PC and issuing simple commands over the Internet to a distant utility."

He not only refers to the demise of the PC, which he says will be a museum piece in 20 years, but to the demise of the software programmer, whose time has come to an end.

Carr gives several examples of successful Internet companies including YouTube, Craigslist, Skype and Plenty of Fish that run their operations with minimal IT professionals. YouTube had just 60 employees when it was bought by Google in 2006 for $1.65 billion. Craigslist has a staff of 22 to run a Web site with billions of pages of content. Internet telephony vendor Skype supports 53 million customers with only 200 employees. Meanwhile, Internet dating site Plenty of Fish is a one-man shop.

"Given the economic advantages of online firms - advantages that will grow as the maturation of utility computing drives the costs of data processing and communication even lower -traditional firms may have no choice but to refashion their own businesses along similar lines, firing many millions of employees in the process," Carr says.

IT professionals aren't the only ones to suffer demise in Carr's eyes. He saves his most dire predictions for the fate of journalists.

Comments

Blending a few useful points into inflammatory utopia/dystopia

Submitted by JimB (not verified) on Mon, 01/07/2008 - 4:32pm.

Carr's vision is either utopian or dystopian, depending on how you look at it, but either way, it mixes a few likely trends with lots of naive wishful thinking, unsound logic, and sophomoric shock value.

The likely trends include greater commoditization and standardization, SaaS, utility computing, and the like. The IT landscape of today is already very different from the IT landscape of 20 years ago. Anyone who thinks IT won't be any different 20 years from now is being just as naive as Carr.

Yet let's look at his claims and analogies. Consider electricity. It's true that most organizations don't run their own power plants, but some still do. Light switches need no user's guides or special training, yet almost all organizations depend on having professional engineers who design the systems and trained technicians who service them -- and these specialists often don't work for the utility company. Many organizations still depend on in-house facilities people who deal with electrical systems -- as a first tier of response, as the link to whatever's been outsourced, and as the responsible party for seeing that the systems serve the organization's requirements.

Carr cites excess capacity and similarity of work as sure signs that IT departments will disappear. But think about that. (Carr hasn't, but I encourage others to.) That's true of virtually every job function you can name -- HR people, finance people, restaurant staff, teachers, scientific researchers, assembly line workers, soldiers, etc., etc., etc. If Carr's logic is sound, then virtually every job ever created is about to disappear, except at the "utility" companies that will do all the work instead.

Everybody is entitled to their opinions.

Submitted by Anonymous (not verified) on Mon, 01/07/2008 - 5:31pm.

Technology moves so fast that no one company (service provider) alone can keep up with a limited number of resources.
The outsourcing model is what's being practiced right now by large corporations, but very soon they will realize that a poor investment has been made: SLAs not being met, poor customer service, and a lack of resources for specialized technology.

Outsourcing

Submitted by Sergio (not verified) on Mon, 01/07/2008 - 8:34pm.

I don't know why all the fuss around this guy... IT resources have been outsourced for years now. I work at an IBM GDC (Global Delivery Center) and pretty much what we do is what this guy describes as the "future". We hire out computing power and personnel to whoever wants to get rid of its bulky and oversized IT department. Please, somebody tell him!

RE: IBM

Submitted by Anonymous (not verified) on Mon, 01/07/2008 - 8:49pm.

Yea and then you screw everything up. ;-)

Former IBM customer.......

[May 02, 2008] Rough Type Nicholas Carr's Blog Is Office the new Netscape

Another "in the cloud" dreams form the Nicholas Carr ;-). One of the few true statements "[Microsoft] it doesn't see Google Apps, or similar online offerings from other companies, as an immediate threat to its Office franchise". Carr is blatantly wrong about Netscape. IE soon became a superior browser and that was the end of the game for Netscape. Also in order for the following statement to became true "It knows that, should traditional personal-productivity apps become commonplace features of the cloud, supplied free or at a very low price, the economic oxygen will slowly be sucked out of the Office business" you need at least to match the power of Office applications and the power of modern laptops in the cloud. The first is extremely difficult as Office applications are extremely competitive and even for Google to match them is an expensive uphill battle. As for matching power of modern laptops this is simply and a pipe dream. Like one reader commended on Carr's blog "The irony is that compared to Google Microsoft can offer greater choice - apps running on PC's or hosted. Use whichever you want/need based on your situation. I'd like to see Google match that. "

As Microsoft and Yahoo continue with their interminable modern-dress staging of Hamlet - it's longer than Branagh's version! - the transformation of the software business goes on. We have new players with new strategies, or at least interesting new takes on old strategies.

One of the cornerstones of Microsoft's competitive strategy over the years has been to redefine competitors' products as features of its own products. Whenever some upstart PC software company started to get traction with a new application - the Netscape browser is the most famous example - Microsoft would incorporate a version of the application into its Office suite or Windows operating system, eroding the market for the application as a standalone product and starving its rival of economic oxygen (ie, cash). It was an effective strategy as well as a controversial one.

Now, though, the tables may be turning. Google is trying to pull a Microsoft on Microsoft by redefining core personal-productivity applications - calendars, word processing, spreadsheets, etc. - as features embedded in other products. There's a twist, though. Rather than just incorporating the applications as features in its own products, Google is offering them up to other companies, particularly big IT vendors, to incorporate as features in their products.

We saw this strategy at work in the recent announcement that Google Apps would be incorporated into Salesforce.com's web applications (as well as the applications being built by others on the Salesforce platform). And we see it, at least in outline, in the tightening partnership between Google and IT behemoth IBM. Eric Schmidt, Google's CEO, and Sam Palmisano, IBM's CEO, touted the partnership yesterday in a joint appearance at a big IBM event. "IBM is one of the key planks of our strategy; otherwise we couldn't reach enterprise customers," Schmidt said. Dan Farber glosses:

As more companies look for Web-based tools, mashups, and standard applications, such as word processors, Google stands to benefit ... While IBM isn't selling directly for Google in the enterprise, IBM's software division and business partners are integrating Google applications and widgets into custom software solutions based on IBM's development framework. The "business context" is the secret of the Google and IBM collaboration, Schmidt said. Embedding Google Gadgets in business applications, that can work on any device, is a common theme for both Google and IBM.

Carr is too simplistic here. Microsoft Office is an extremely well written and well debugged set of applications, with functionality and a price that are not easy to match. Google will have difficulty attracting laptop users, as the price of the Microsoft Office Home and Student edition is around $100 -- essentially a shareware price (roughly $30 per application). So far Google Apps has been far from a success... - NNB

Google's advantage here doesn't just lie in the fact that it is ahead of Microsoft in deploying Web-based substitutes for Office applications. Microsoft can - and likely will - neutralize much of that early-mover advantage by offering its own Web-based versions of its Office apps. Its slowness in rolling out full-fledged web apps is deliberate; it doesn't see Google Apps, or similar online offerings from other companies, as an immediate threat to its Office franchise, and it wants to avoid, for as long as possible, cannibalizing sales of the highly profitable installed versions of Office.

No, Google's main advantage is simply that it isn't Microsoft. Microsoft is a much bigger threat to most traditional IT vendors than is Google, so they are much more likely to incorporate Google Apps into their own products than to team up with Microsoft for that purpose. (SAP is an exception, as it has worked with Microsoft, through the Duet initiative, to blend Office applications into its enterprise systems. That program, though, lies well outside the cloud.) Undermining the hegemony of Microsoft Office is a shared goal of many IT suppliers, and they are happy to team up to further that goal. As Salesforce CEO Marc Benioff pithily put it in announcing the Google Apps tie-up, "The enemy of my enemy is my friend, so that makes Google my best friend."

Google is throwing money out of the window. You cannot match the power of a modern laptop with "in the cloud" servers, which severely limits the attractiveness of Google applications. Storing your business plan and other sensitive documents on Google servers is another drawback. --NNB

Like Microsoft, Google is patient in pursuing its strategy. (That's what very high levels of profitability will do for you.) It knows that, should traditional personal-productivity apps become commonplace features of the cloud, supplied free or at a very low price, the economic oxygen will slowly be sucked out of the Office business. That doesn't necessarily mean that customers will abandon Microsoft's apps; it just means that Microsoft won't be able to make much money from them anymore. Microsoft may eventually win the battle for online Office applications, but the victory is likely to be a pyrrhic one.

Of course, there are some long-run risks for other IT vendors in promoting Google Apps, particularly for IBM. A shift to cheap Web apps for messaging and collaboration poses a threat to IBM's Notes franchise as well as to Microsoft's Office franchise. "The enemy of my enemy is my friend." If I remember correctly, that's what the US government used to say about Saddam Hussein.

Comments


Nick,

You are making a mistake that many in the tech blogosphere make: getting caught up in PR-driven spin of "David vs. Goliath" when the issue is really "Goliath vs. Himself". As Steve Gillmor has remarked (click my name for gratuitous blog post), the real issue is how Microsoft, as the owner of the enterprise productivity space, defines the future of office productivity. It's their market to lose, and this is not likely. After 10 years, IBM has negligible penetration in this space. Google teaming with an also-ran is just two also-rans running a 3-legged race to nowhere.

And as for your statement that, "Google is patient in pursuing its strategy. (That's what very high levels of profitability will do for you.)" -- I'm guessing the $40 billion takeover of Yahoo aims at reducing that profitability more than a bit...

Posted by: Sprague Dawley at May 2, 2008 06:14 PM

There's one thing missing from the cloud versions of the office suite, from Google Apps & IBM's versions of OpenOffice.org (Lotus Symphony) & it has to do with -- yes, that magic word -- interoperability.

Yes, OpenOffice.org & its derivatives can open & save simple .doc files, but the 3 - 7 percent of files which are complex, or are affected by VBA scripts, or are part of even simple business processes make the Windows (XP) | Office platform impervious to alien apps & formats & intolerant of participation with non-Microsoft software within the process.

... ... ...

Posted by: Sam at May 2, 2008 06:56 PM

... It's relatively trivial for Microsoft to create an online version of Word or Excel at least as good if not better than Google Apps. If Office is sufficiently threatened - which I don't think it will be - then they can choose to cannibalize Office themselves. I think a more likely scenario is that Microsoft will offer online versions of Word and Excel as companions to the PC versions - much like they offer Outlook Web Access as a companion to Outlook. Most companies and PC users choose to use the PC version of Outlook when they can but like to have the option to use Outlook Web Access when they have need to. Google is SOL in this respect. With Google it's their hosted/run by Google or nothing. The irony is that compared to Google Microsoft can offer greater choice - apps running on PC's or hosted. Use whichever you want/need based on your situation. I'd like to see Google match that.

Posted by: markashton at May 2, 2008 11:47 PM

I could be wrong, but I think most of Corporate America, which is the source of the cash cow profits at MS, doesn't really care about the operating system. It's Office, and more specifically, interoperability of Office documents, that drives the world. Back in the late 80s, WordPerfect owned word processing, even more than Lotus 123 did.

... ... ...

I'm not sure that the idea of letting Google store all your business plans and contracts is an attractive idea...

Posted by: Patrick Farrell at May 2, 2008 11:57 PM

I don't think Microsoft's core capabilities have ever been well understood. In the late 80s/early 90s conventional wisdom held that Windows would never succeed in the enterprise, or in education, or for consumers, or for games. Good reasons were proffered in each case. And yet Microsoft slowly transformed the consumer experience and at the same time built the marketing expertise to capture the enterprise.

For all its achievements, Google just doesn't have that capability, and probably never will. It's boring, for one thing. And the deal with IBM reminds me of foxes guarding the chicken coop.

For Microsoft, the Office franchise is not really just about the product, but also about reliability and assurance, which are what IT managers look for. Playing experiments with IBM consulting ware and Google's latest are not really high on the list for corporate executives.

Both companies (Microsoft and Google) will do some interesting things, but I don't think, for Google, it will be in the enterprise space.

Posted by: Tony Healy at May 4, 2008 12:06 AM

I read cost analysis studies that show only a saving of about 10% using subscription based web apps over per seat licenses. I think the real issue is the need for local customization.
... ... ...

Posted by: Linuxguru1968 at May 9, 2008 02:19 PM

[May 8, 2008] Microsoft's Mesh Promise On The Horizon - Microsoft Blog by J. Nicholas Hoover

Mar 10, 2008 | InformationWeek

It's becoming clear that synchronizing information across devices and services is set to become a critical part of Microsoft (NSDQ: MSFT)'s Web strategy, and Ray Ozzie made that point even clearer last week.

In his keynote address at Microsoft's Mix conference for Web developers, Microsoft's chief software architect said the word "mesh" 14 times and some variant of "synchronization" three more times.

"Just imagine the possibilities enabled by centralized configuration and personalization and remote control of all your devices from just about anywhere," Ozzie said. "Just imagine the convenience of unified data management, the transparent synchronization of files, folders, documents, and media. The bi-directional synchronization of arbitrary feeds of all kinds across your devices and the Web, a kind of universal file synch."

... ... ...

Microsoft briefed developers on the Sync Framework and FeedSync, two of the company's early synchronization technologies, as well as something new currently called Astoria Offline, which is not even yet available for testing. Another product labeled "Astoria Server" appeared on a few PowerPoint slides, but it wasn't even mentioned. Astoria was the code-name for a project that seems to be morphing into the key data storage and modeling technology for Microsoft's data services strategy.

During the session, Microsoft program manager Neil Padgett said the company would release the final version of the Sync Framework in the third quarter and put out a test release of a mobile version of the framework at the same time.

FeedSync, the Sync Framework and Astoria Offline all look to be technologies that allow developers to create and implement bits of code and applications that can sync information, though Padgett also said Microsoft will offer some code for common scenarios like unifying contact information, and hinted that it also would craft some user experiences for how sync services look and feel to end users.

[May 7, 2008] Prefabricated data centers offer modular expansion alternatives

Data center managers got a reprieve from having to expand their facility footprint in recent years as advances in the physical density of servers made it possible to stuff far more computing power into the same space. But as Michael A. Bell, research vice president for Gartner, Inc. noted last year, today a standard rack may be supporting loads up to 30,000 watts rather than the 3,000 watts the engineers of your data center probably expected. That in turn puts a major strain on your power and cooling capacity.

... ... ...

American Power Conversion Corp. (APC) has experimented with a rapid deployment data center model since late 2004, when it unveiled a prototype called the InfraStruXure Express. This self-contained facility is actually a 53-foot long trailer pulled by a big rig. It has an onboard 135 kW generator, makes its own chilled water, and offers a satellite data uplink. Inside the trailer is a small network operations center. Painted differently, it could pass as a mobile home for an espionage support crew.

"The InfraStruXure Express is the next evolution of a packaged system in the InfraStruXure category," says APC's Russel Senesac, Director of InfraStruXure Systems. "It came from the research we did in the market showing customers wanted the ability to quickly deploy [data centers], whether it's a case of disaster recovery or data centers running out of capacity. All the components are rack based-the UPS, the cooling systems-and so we could do a high-density application inside a 53-foot container."

The InfraStruXure Express is flexible-it can hook up to land-based data communications and "shore power" as well. It's unlikely that the rolling data center will leave the prototype stage in its current form; rather, it was outfitted to the gills to get reaction from potential customers for possible development of a future line of products. One problem facing APC: many of those interested in the InfraStruXure Express (which includes not only corporations but federal agencies like FEMA and the Department of Homeland Security) need heavily customized versions, a business model APC is not keen to rush into.

Sun Microsystems seems more likely to bring a mass-produced data center on wheels to market first with its Project Blackbox. The offering, also still officially in prototype stage but scheduled to be delivered to its first customers by March, is fundamentally different from InfraStruXure Express. For one thing, the only mobile aspect to Project Blackbox is that it gets delivered as a standard 20-foot shipping container on the back of a truck (or cargo ship, airplane, or railcar). Once it's dropped off at the customer site, it needs to be hooked up to external power and data lines, as well as a chilled water supply and return. Eight 19-inch racks are cleverly packaged inside, without room for much anything else other than fans and heat exchangers. Sun won't yet reveal pricing.

[May 6, 2008] Sun Unveils Data Center in a Box

The product, dubbed Project Black Box, is housed in a standard metal shipping container -- 20 feet long, eight feet wide and eight feet tall. A fully loaded system can house about 250 single unit rack servers.

Painted black with a lime green Sun logo, the system can consist of up to seven tightly packed racks of 35 server computers based on either Sun's Niagara Sparc processor or an Opteron chip from Advanced Micro Devices. The system includes sensors to detect tampering or movement and features a large red button to shut it down in an emergency. Once plugged in, it requires just five minutes to be ready to run applications. Sun has applied for five patents on the design of the system, including a water-cooling technique that focuses chilled air directly on hot spots within individual computing servers. The system, which Sun refers to as "cyclonic cooling," makes it possible to create a data center that is five times as space-efficient as traditional data centers, and 10 percent to 15 percent more power-efficient, Mr. Schwartz said.

[May 5, 2008] SAP's SaaS Effort Falls Short Of Design

(Information Week Via Acquire Media NewsEdge)

SAP's decision to cut the amount it will invest this year in its midmarket ERP on-demand software service, Business ByDesign, points to obstacles the company has hit in trying to extend the SaaS model to ERP. It also raises concerns about the slowing economy's impact on the new software delivery model.

SAP is cutting its investment in Business ByDesign this year by about $156 million, company executives say, from the estimated $300 million or so it had planned to spend. SAP still intends to invest between $117 million and $195 million, with $62 million of that already accounted for in the first quarter.

At the same time, SAP has scaled back its revenue projections for Business ByDesign, estimating it will take until 2011 or 2012 to hit $1 billion in annual sales, rather than reaching that goal in 2010, as the company had hoped. The service has only 150 customers, and SAP said it expects to sign "significantly less" than 1,000 this year-the subscriber base it had projected when it launched the service last September.

And while SAP said last fall the service would be widely available this year, it's still limiting it to six countries: China, France, Germany, the United Kingdom, the United States, and, later this year, India.

PERFORMANCE PROBLEM

SAP says the retrenchment is a deliberate go-slow strategy. "There are a lot of things we feel we need to fine-tune before going into full volume," says Hans-Peter Klaey, president of SAP's small- and midsize-business division. Unfortunately, one of those is Business ByDesign's performance over the Internet, which Klaey admits needs improvement.

Another is cost. Some customers want to adopt the Business ByDesign suite piecemeal rather than all at once, while others are looking to integrate it into their existing infrastructures and require more services and customization than SAP had anticipated, says Stuart Williams, an analyst at Technology Business Research. The result is "a more expensive and less scalable business than designed," says Williams.

Pressure SAP faces to increase profit margins may play a part in its decision to scale back the on-demand service. SAP's first-quarter results, reported last week, fell short of Wall Street estimates. Net income for the quarter, ended March 31, was $376.6 million, down 22% from a year ago; revenue came in at $3.83 billion, up 14%, but still short of what Wall Street had hoped for. In a conference call with financial analysts, co-CEO Henning Kagermann cited several reasons for the subpar performance, including the slowing U.S. economy, the decline of the dollar against the euro, and the expense of integrating recently acquired Business Objects.

Kagermann said SAP is sorting through the complications of how to host ERP as a service and make a reasonable profit with it. "It's very important to adopt our plan every step of the way to ensure highest quality and maximum profitability," he said on the conference call. "Now, we are much smarter, because we have some milestones behind us." The revised strategy is a more realistic one, agrees analyst Williams, noting it "will take [SAP] more time and experience to build sales momentum" for its ERP service offering.

Difficulties in expanding the SaaS model beyond one-off applications for CRM and security may not be limited to SAP. Microsoft, NetSuite, Oracle, and Workday offer ERP suites in the SaaS model, and while all have seen growth, none is having blazing success with online ERP.

http://informationweek.com/

The Problems and Challenges with Software as a Service Gear Diary

Posted on 09 April 2008 by Christopher Spera

I write for a couple of different sites. Including this one, I write or have written for CompuServe/AOL, Pocket PC Thoughts, Lockergnome, The Gadgeteer, CMPNet's File Mine, WUGNET: The Windows User's Group Network, and Gear Diary. Honestly, life is all sunshine and daisies when the site is up and your backup app is working. However, when things go south, and you find out that your data is gone due to some sort of corruption or backup problem, life can get very complicated.

Gear Diary had a problem like this not too long ago, if you remember. Judie, the site owner, ran around like a nut trying to work with the hosting company to get things turned around. Some things went well, but others did not, and we had to rebuild the site from scratch.

Gear Diary is a WordPress-enabled site, so many team members use the online WYSIWYG editor to create and edit content. It saves drafts, allows you to upload (and even watermark) graphics/pictures, and is, for all intents and purposes, an online word processor, much like Google Docs. When the site went belly up, most of the content headed south of the border as well. Most team members had not saved a local copy of their work…which got me thinking…

One of the biggest and hottest trends I've been hearing a lot about lately is software as a service, a la Google Docs, Office Live, etc. If you take the Gear Diary site issue as a point of reference, and apply software as a service (which is basically what WordPress is acting as), you get an interesting and fairly destructive situation. WordPress doesn't offer any kind of method of saving its documents locally, or in a format that can be read (or edited) by any other local application. Despite the fact that WordPress creates HTML documents, all data stays on the server.

If you bump into a server issue, i.e. you go down, your data gets lost. It happened to Gear Diary. It can happen to any user that uses a software as a service app. What bothers me more is that unless there's a specific viewer or offline editing tool for the document type, the data is useless. Further, if the app doesn't allow you to save data locally, an offline viewer isn't going to do much good anyway.

Many users here have gotten into the habit of copying their content out of WordPress and saving it as a text or HTML file. That at least gets the data out and saved to your local hard drive. However, it doesn't address disaster recovery on the client side (which was one of the big draws, aside from cost savings and the lack of deployment problems…all you need in most cases is a compatible browser…).

Interestingly enough, Google may be planning changes to their online suite that would allow users to do just this: save data locally. As I noted above, the ability to save data to the server is nice, but if you're on a laptop and not connected to the 'Net, you might be out of luck if you've got work to do.

If Google Docs users have downloaded Google Gears, they should be able to edit a copy of a locally stored or cached version of the data, when they open a browser, and "navigate" to docs.google.com. Users will be able to transfer the updated data back to the server when the computer goes back on line, which is huge; but I don't necessarily want to rely on data that's still in my Internet cache. The browser needs to save the file locally…and it would be nice to have it saved in an industry standard format.
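For readers who want to keep their own insurance copy regardless of what the service offers, here is a minimal Java sketch of the "save a local copy" habit discussed above; the post URL and the output file name are placeholders, not real Gear Diary or Google Docs endpoints.

    import java.io.InputStream;
    import java.net.URL;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    // A hedged sketch: fetch the published HTML of a post and save it to local disk.
    public class LocalBackup {
        public static void main(String[] args) throws Exception {
            URL post = new URL("https://example.com/2008/04/my-draft-post/"); // placeholder URL
            Path target = Paths.get("my-draft-post-backup.html");             // local backup file

            try (InputStream in = post.openStream()) {
                // Overwrite any previous backup with the latest copy.
                Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
            }
            System.out.println("Saved local copy to " + target.toAbsolutePath());
        }
    }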

I'd love to hear everyone's opinion on this. Why don't you join us in the discussion area and give us your thoughts? I just have one request…I'd like us to NOT get into a debate over software as a service vs. a client side app, if possible. I'd like to hear everyone's thoughts and ideas on getting past the problems and challenges I've outlined: saving data locally and to a server, local file viewing and editing and disaster recovery; but if we MUST debate the entire issue, that's cool too!

Distributed Computing Fallacy #9 - Kevin Burton's NEW FeedBlog

Distributed Computing Fallacy #9

September 23, 2007 in clustering, linux, open source

We know there are eight fallacies of distributed computing - but I think there's one more.

A physical machine with multiple cores and multiple disks should not be treated as one single node.

Two main reasons.

First RAID is dying. You're not going to get linear IO scalability by striping your disks. If you have one software component accessing the disk as a logical unit you're going to get much better overall system performance.

Second. Locking. Multicore mutex locks are evil. While the locking primitives themselves are getting faster, if the CPU is overloaded and a long critical section is acquired, your other processes are going to have to wait to move forward. Running one system per core fixes this problem.

We're seeing machines with eight cores and 32G of memory. If we were to buy eight disks for these boxes it's really like buying 8 machines with 4G each and one disk.

This partially goes into the horizontal vs vertical scale discussion. Is it better to buy one $10k box or 10 $1k boxes? I think it's neither. Buy 4 $2.5k boxes. The new multicore stuff is super cheap.

Update: Steve Jenson sent me an email stating that this could be rephrased as: avoid considering physical units of computation as logical units.
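As a rough, hypothetical illustration of treating an eight-core, eight-disk box as eight nodes rather than one, the Java launcher below starts one worker process per core and hands each worker its own data directory (one per physical disk); the Worker class and the /data/diskN mount points are assumed names, not anything from Burton's post.

    import java.util.ArrayList;
    import java.util.List;

    // A hedged sketch of "one node per core, one disk per node": instead of one big
    // process striping over all disks, start one worker JVM per core, each bound to
    // its own data directory.
    public class PerCoreLauncher {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            List<Process> workers = new ArrayList<>();

            for (int i = 0; i < cores; i++) {
                // Each worker gets its own heap slice and its own (assumed) disk mount,
                // so cores don't contend on shared locks or a shared I/O queue.
                ProcessBuilder pb = new ProcessBuilder(
                        "java", "-Xmx4g", "Worker", "--data-dir=/data/disk" + i);
                pb.inheritIO();                 // share stdout/stderr with the launcher
                workers.add(pb.start());
            }

            for (Process p : workers) {
                p.waitFor();                    // keep the launcher alive until workers exit
            }
        }
    }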

Internet Time Blog - A Sustainable Edge

Four Domains of Human Action

What do we mean by edge? We will be focusing on the edges of four different domains of human action - social, enterprise, market and learning.

· The social domain involves the complex relationships between how we define our individual identities and the forms of social participation that we pursue to shape these identities.
· The enterprise domain looks at how we organize to create economic value and how we define the boundaries of these economic entities.
· The market domain explores how we compete and collaborate on a global scale to create, deliver and capture economic value.
· The learning domain seeks to describe how we learn, with particular emphasis on the interaction between individual learning and group learning.

Complex dynamic loops shape the evolution of each domain and the interdependencies across domains. Many analysts have described elements of each of these domains, but no one has sought to explore systematically how these domains interact with each other. We believe that the biggest opportunities will arise where the edges of these four domains interact and generate tensions that need to be resolved. It is this intersection that defines the first dimension of our research agenda.

To effectively pursue this research agenda, we will need to incorporate two other dimensions of investigation as well.

Four Global Forces
On the second dimension of our research agenda, we need to better understand four long-term global forces and how they interact with the four domains described earlier:

· Public policy - especially the broad movement to remove barriers to entry and barriers to competition
· Technological innovation at three levels:
- the continuing improvement in price/performance in digital hardware building blocks and new techniques for designing, building and delivering software
- the changing architectures for organizing these hardware and software building blocks
- the movement into new arenas of these components and architectures (e.g., the mobile Internet, smart objects, bioinformatics and telematics)
· Demographic - especially the changing age demographics around the globe
· Cultural - especially the emergence of global youth cultures, the growth of the creative class and the growing importance of religion in cultures around the world.

Dr. Dobb's Errant Architectures April 1, 2003 by Martin Fowler

When we let objects wander, we all pay the performance price. Here's how to avoid distributed dystopia's overhead of remote procedure calls and ignore middleware's siren song.

Objects have been around for a while, and sometimes it seems that ever since they were created, folks have wanted to distribute them. However, distribution of objects, or indeed of anything else, has a lot more pitfalls than many people realize, especially when they're under the influence of vendors' cozy brochures. This article is about some of these hard lessons-lessons I've seen many of my clients learn the hard way.

Architect's Dream, Developer's Nightmare
Distributing an application by putting different components on different nodes sounds like a good idea, but the performance cost is steep.

A Mysterious Allure

There's a recurring presentation I used to see two or three times a year during design reviews. Proudly, the system architect of a new OO system lays out his plan for a new distributed object system-let's pretend it's some kind of ordering system. He shows me a design that looks rather like "Architect's Dream, Developer's Nightmare" with separate remote objects for customers, orders, products and deliveries. Each one is a separate component that can be placed in a separate processing node.

I ask, "Why do you do this?"

"Performance, of course," the architect replies, looking at me a little oddly. "We can run each component on a separate box. If one component gets too busy, we add extra boxes for it so we can load-balance our application." The look is now curious, as if he wonders if I really know anything about real distributed object stuff at all.

Meanwhile, I'm faced with an interesting dilemma. Do I just say out and out that this design sucks like an inverted hurricane and get shown the door immediately? Or do I slowly try to show my client the light? The latter is more remunerative, but much tougher, since the client is usually very pleased with his architecture, and it takes a lot to give up on a fond dream.

So, assuming you haven't shown my article the door, I suppose you'll want to know why this distributed architecture sucks. After all, many tool vendors will tell you that the whole point of distributed objects is that you can take a bunch of objects and position them as you like on processing nodes. Also, their powerful middleware provides transparency. Transparency allows objects to call each other within or between a process without having to know if the callee is in the same process, in another process or on another machine.

Transparency is valuable, but while many things can be made transparent in distributed objects, performance isn't usually one of them. Although our prototypical architect was distributing objects the way he was for performance reasons, in fact, his design will usually cripple performance, make the system much harder to build and deploy-or both.

Remote and Local Interfaces

The primary reason that the distribution by class model doesn't work has to do with a fundamental fact of computers. A procedure call within a process is extremely fast. A procedure call between two separate processes is orders of magnitude slower. Make that a process running on another machine, and you can add another order of magnitude or two, depending on the network topography involved. As a result, the interface for an object to be used remotely must be different from that for an object used locally within the same process.

A local interface is best as a fine-grained interface. Thus, if I have an address class, a good interface will have separate methods for getting the city, getting the state, setting the city, setting the state and so forth. A fine-grained interface is good because it follows the general OO principle of lots of little pieces that can be combined and overridden in various ways to extend the design into the future.

A fine-grained interface doesn't work well when it's remote. When method calls are slow, you want to obtain or update the city, state and zip in one call rather than three. The resulting interface is coarse-grained, designed not for flexibility and extendibility but for minimizing calls. Here you'll see an interface along the lines of get-address details and update-address details. It's much more awkward to program to, but for performance, you need to have it.
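To make the contrast concrete, here is a minimal Java sketch of the two interface styles Fowler describes; the Address, AddressDetails and RemoteAddressService names are hypothetical, not taken from the article.

    // Fine-grained interface: natural for local, in-process use, but chatty over a network.
    interface Address {
        String getCity();
        String getState();
        String getZip();
        void setCity(String city);
        void setState(String state);
        void setZip(String zip);
    }

    // Simple carrier object for the coarse-grained calls.
    class AddressDetails implements java.io.Serializable {
        String city;
        String state;
        String zip;
    }

    // Coarse-grained interface: one remote round trip reads or updates everything at once.
    interface RemoteAddressService {
        AddressDetails getAddressDetails(long customerId);
        void updateAddressDetails(long customerId, AddressDetails details);
    }

The fine-grained version is pleasant to program against locally; the coarse-grained version exists purely to keep the number of remote round trips down.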

Of course, what vendors will tell you is that there's no overhead to using their middleware for remote and local calls. If it's a local call, it's done with the speed of a local call. If it's a remote call, it's done more slowly. Thus, you pay the price of a remote call only when you need one. This much is, to some extent, true, but it doesn't avoid the essential point that any object that may be used remotely should have a coarse-grained interface, while every object that isn't used remotely should have a fine-grained interface. Whenever two objects communicate, you have to choose which to use. If the object could ever be in separate processes, you have to use the coarse-grained interface and pay the cost of the harder programming model. Obviously, it only makes sense to pay that cost when you need to, and so you need to minimize the number of interprocess collaborations.

For these reasons, you can't just take a group of classes that you design in the world of a single process, throw CORBA or some such at them and come up with a distributed model. Distribution design is more than that. If you base your distribution strategy on classes, you'll end up with a system that does a lot of remote calls and thus needs awkward, coarse-grained interfaces. In the end, even with coarse-grained interfaces on every remotable class, you'll still end up with too many remote calls and a system that's awkward to modify as a bonus.


A Better Way
Clustering involves putting several copies of the same application on different nodes. If you must distribute, this approach eliminates the latency problems.

Laying Down the Law
Hence, we get to my First Law of Distributed Object Design: Don't distribute your objects!

How, then, do you effectively use multiple processors? In most cases, the way to go is clustering (see "A Better Way"). Put all the classes into a single process and then run multiple copies of that process on the various nodes. That way, each process uses local calls to get the job done and thus does things faster. You can also use fine-grained interfaces for all the classes within the process and thus get better maintainability with a simpler programming model.

Where You Have to Distribute
So you want to minimize distribution boundaries and utilize your nodes through clustering as much as possible. The rub is that there are limits to that approach-that is, places where you need to separate the processes. If you're sensible, you'll fight like a cornered rat to eliminate as many of them as you can, but you won't eradicate them all.

The overriding theme, in OO expert Colleen Roe's memorable phrase, is to be "parsimonious with object distribution." Sell your favorite grandma first if you possibly can.

Working with the Distribution Boundary
Remote Façade and Data Transfer Object are concepts to make remote architectures work.

As you design your system, you need to limit your distribution boundaries as much as possible, but where you have them, you need to take them into account. Every remote call travels on the cyber equivalent of a horse and carriage. All sorts of places in the system will change shape to minimize remote calls. That's pretty much the expected price. However, you can still design within a single process using fine-grained objects. The key is to use them internally and place coarse-grained objects at the distribution boundaries, whose sole role is to provide a remote interface to the fine-grained objects. The coarse-grained objects don't really do anything, but they act as a façade for the fine-grained objects. This façade is there only for distribution purposes-hence the name Remote Façade.

Using a Remote Façade helps minimize the difficulties that the coarse-grained interface introduces. This way, only the objects that really need a remote service get the coarse-grained method, and it's obvious to the developers that they are paying that cost. Transparency may have its virtues, but you don't want to be transparent about a potential remote call.

By keeping the coarse-grained interfaces as mere façades, however, you allow people to use the fine-grained objects whenever they know they're running in the same process. This makes the whole distribution policy much more explicit.

Hand in hand with Remote Façade is Data Transfer Object. Not only do you need coarse-grained methods, you also need to transfer coarse-grained objects. When you ask for an address, you need to send that information in one block. You usually can't send the domain object itself, because it's tied in a web of fine-grained local inter-object references. So you take all the data that the client needs and bundle it in a particular object for the transfer-hence the term. (Many people in the enterprise Java community use the term Value Object for Data Transfer Object, but this causes a clash with other meanings of the term Value Object.) Data Transfer Objects appear on both sides of the wire, so it's important that it not reference anything that isn't shared over the wire. This boils down to the fact that Data Transfer Objects usually reference only other Data Transfer Objects and fundamental objects such as strings.

-M. Fowler
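A minimal Java sketch of the Remote Façade / Data Transfer Object combination described above; the Customer domain types, the repository and the field names are assumptions made for illustration, not code from the article.

    import java.io.Serializable;
    import java.util.List;

    // Data Transfer Object: a flat, serializable bundle of exactly the data one remote
    // call needs. It references only strings and other fundamental values.
    class CustomerDTO implements Serializable {
        String name;
        String city;
        String state;
        List<String> openOrderNumbers;
    }

    // The fine-grained domain objects that live inside the server process (assumed shapes).
    interface Customer {
        String getName();
        String getCity();
        String getState();
        List<String> getOpenOrderNumbers();
    }

    interface CustomerRepository {
        Customer find(long id);
    }

    // Remote Facade: a coarse-grained boundary object whose only job is to translate
    // one remote call into many fine-grained local calls and bundle the result.
    class CustomerFacade {
        private final CustomerRepository repository;

        CustomerFacade(CustomerRepository repository) {
            this.repository = repository;
        }

        // One remote call instead of a chatty sequence of getters across the wire.
        CustomerDTO getCustomerDetails(long customerId) {
            Customer c = repository.find(customerId);   // local, fine-grained calls
            CustomerDTO dto = new CustomerDTO();
            dto.name = c.getName();
            dto.city = c.getCity();
            dto.state = c.getState();
            dto.openOrderNumbers = c.getOpenOrderNumbers();
            return dto;                                 // the DTO is what crosses the wire
        }
    }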

Interfaces for Distribution
Use Web services only when a more direct approach isn't possible.

Traditionally, the interfaces for distributed components have been based on remote procedure calls, either with global procedures or as methods on objects. In the last couple of years, however, we've begun to see interfaces based on XML over HTTP. SOAP is probably going to be the most common form of this interface, but many people have experimented with it for some years.

XML-based HTTP communication is handy for several reasons. It easily allows a lot of data to be sent, in structured form, in a single round-trip. Since remote calls need to be minimized, that's a good thing. The fact that XML is a common format with parsers available in many platforms allows systems built on very different platforms to communicate, as does the fact that HTTP is pretty universal these days. The fact that XML is textual makes it easy to see what's going across the wire. HTTP is also easy to get through firewalls when security and political reasons often make it difficult to open up other ports.

Even so, an object-oriented interface of classes and methods has value, too. Moving all the transferred data into XML structures and strings can add a considerable burden to the remote call. Certainly, applications have seen a significant performance improvement by replacing an XML-based interface with a remote call. If both sides of the wire use the same binary mechanism, an XML interface doesn't buy you much other than a jazzier set of acronyms. If you have two systems built with the same platform, you're better off using the remote call mechanism built into that platform. Web services become handy when you want different platforms to talk to each other. My attitude is to use XML Web services only when a more direct approach isn't possible.

Of course, you can have the best of both worlds by layering an HTTP interface over an object-oriented interface. All calls to the Web server are translated by it into calls on an underlying object-oriented interface. To an extent, this gives you the best of both worlds, but it does add complexity, since you'll need both the Web server and the machinery for a remote OO interface. Therefore, you should do this only if you need an HTTP as well as a remote OO API, or if the facilities of the remote OO API for security and transaction handling make it easier to deal with these issues than using non-remote objects.
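As a rough illustration of layering an HTTP interface over an object-oriented one, the sketch below uses the JDK's built-in com.sun.net.httpserver server to translate a GET request into a call on an underlying façade; AddressService, AddressDetails, the stub implementation and the URL layout are all assumptions, not part of any product discussed here.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // A hedged sketch: every HTTP call is translated into a call on an underlying
    // object-oriented interface, and the reply is returned as one XML bundle.
    public class AddressHttpEndpoint {

        interface AddressService {                      // the underlying OO interface (assumed)
            AddressDetails getAddressDetails(long customerId);
        }

        static class AddressDetails {
            String city, state, zip;
        }

        public static void main(String[] args) throws Exception {
            AddressService service = id -> {            // stub implementation for the sketch
                AddressDetails d = new AddressDetails();
                d.city = "Springfield"; d.state = "IL"; d.zip = "62701";
                return d;
            };

            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/address", (HttpExchange exchange) -> {
                long id = Long.parseLong(exchange.getRequestURI().getQuery().replace("id=", ""));
                AddressDetails d = service.getAddressDetails(id);   // HTTP call becomes an OO call
                String xml = "<address><city>" + d.city + "</city><state>" + d.state
                        + "</state><zip>" + d.zip + "</zip></address>";
                byte[] body = xml.getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();                              // e.g. GET http://localhost:8080/address?id=42
        }
    }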

In this discussion, I've assumed a synchronous, RPC-based interface. However, although that's what I've described, I actually don't think it's always the best way of handling a distributed system. Increasingly, my preference is for a message-based approach that's inherently asynchronous. In particular, I think they're the best use of Web services, even though most of the examples published so far are synchronous.

For patterns on asynchronous messaging take a look at www.enterpriseIntegrationPatterns.com.

-M. Fowler



Martin Fowler is Chief Scientist at ThoughtWorks and a frequent speaker at Software Development conferences. This article is adapted from Patterns of Enterprise Application Architecture, Chapter 7 (Addison-Wesley, 2003). Reprinted with permission.

[May 3, 2008] Carr refines IT does not matter argument

May 12, 2006 | KiteBlue

At the recent Effective IT conference in London, industry sceptic Nicholas Carr kicked off proceedings with a modified version of the attack on IT in his book, IT does not matter. Jyoti Banerjee assesses whether the new version stacks up in the light of current IT experience.

It was the Nobel Prize-winning economist from MIT, Robert Solow, who quipped that you could see computers everywhere except in the productivity statistics. In writing his book, Nicholas Carr was simply following in Solow's distinguished footsteps by attacking the value business has got from the massive investments it has made in IT. His argument was that IT does not provide competitive advantage to a business as there is little or no differentiation in the bulk of the technologies used by modern businesses.

Those for and against the argument have had a field day in the IT press, with both sides claiming victory through case studies and stats that prove their point and disprove the opposition. With Carr given the opportunity to rehearse his arguments in front of an audience of IT professionals in London, it was clear that the heat has not gone out of the debate, though one wishes for a little more light all around.

To differentiate or not

Let's explore the debate a little further. One could ask whether it matters that 70-80% of all IT is undifferentiated. After all, the implication could well be that at least 20-30% of all IT is strongly differentiated. Although the two statements are two sides of the same coin, they actually present quite different arguments.

Take the example of the car industry. Its products are largely undifferentiated (they have four wheels, four brakes, a steering wheel, a roof, an engine, etc.), but the industry has created huge brand differentiation by focusing on the bits that are different (front-wheel versus rear-wheel versus all-wheel drive, for example, or 50 mpg versus 35 mpg, and so on). Although 70-80% of two cars may be undifferentiated in terms of the raw materials, there is no question that the buyer can distinguish a Ford from a BMW, a Hyundai from a Honda. Is there really a difference between those cars that would justify their different brand strengths or company profitability? How is it that a company like BMW can go to the same suppliers as everybody else and yet deliver unmistakably different brand performance in the marketplace? Maybe the differentiation is all done in the few percent of components that are actually different from one car to the next.

To me that would make sense in computing terms, as well. We might all use the same computers (undifferentiated) but the few that do smart things with their computers, or build smart processes around them (differentiation), could perform vastly better than the others. So to me, the case of differentiated or undifferentiated infrastructure, as presented by Carr, is not one that makes me say anything but "so what."

Hagel and Brown, in their book The Only Sustainable Edge, offer an interesting argument that the secret to competitive advantage is the relentless building of distinctive capability, both within an organisation and in the networks it operates in. The implication of offering distinctive internal capabilities is that organisations have to choose what they are going to excel in, as it is not possible for any single organisation to excel in everything. By choosing to focus on what it does best, an organisation is then obliged to seek external partners who can provide world-class capability in the areas where it is not distinctive. Clearly, the outsourcing impetus comes from those organisations that audit what they do and decide that in a number of areas they cannot compete with those that offer world-class capability. Instead, they join together in networks with those who can plug their gaps.

In effect, the Hagel / Brown argument would support Carr's position that a company should focus on what it is good at, and leave the rest to others who are better equipped to deal with those issues. Outsourcing started with infrastructural issues, such as IT and facilities. It has since progressed to cover horizontal activities such as accounting and payroll. Today, companies outsource things that a few years ago would have been regarded as core to their operating processes. Why should IT be any different? Why should IT professionals hang on to processes or skills within the organisation when their own competences are best employed elsewhere? In this sense, Carr is absolutely right: why should IT see itself as something special when it probably isn't, and when it could be handed over to a partner more competent at delivery?

Where I hesitate to hang my hat on the Carr coat-rack is in the area of utility computing. In becoming an advocate of utility computing, Carr is making it difficult for others to buy into his argument. The reality is that utility computing is still too new and too immature to be the mechanism by which enterprises can exploit quality world-class IT infrastructure. While many of the products already exist to enable utility computing, the two big gaps right now are in hardware and in process management.

Hardware

There are just not enough server farms around right now to allow utility computing to fly, despite huge attempts by all and sundry to build them fast. This is obviously a big enough gap that Microsoft has decided to spend a large part of its $35 billion cash pile on server farms in every location around the world where it can find a big enough source of electricity. As these server farms come online, the hardware argument against utility computing will go away. Till then, there are not enough utility computing providers with the world-class capability to deliver the sort of infrastructure that tens of thousands of enterprises will need.

Processes

Of course, any new product can introduce change, even revolutionary change. More importantly, it takes time to deliver widespread change. It takes time to configure processes, enterprises, and now networks of enterprises together in such a way that the resultant meld of processes delivers competitive advantage of the sort that shows up in an economist's productivity statistics. It took about thirty years before the impact of electricity could be measured across an economy because it took that long to figure out the best way to re-configure businesses in such a way that they could exploit electric machines, electric processes, etc.

We are seeing the same kind of reconfiguration taking place around digital processes. Eventually, that reconfiguration may well encompass utility computing as well. Till then, I can live with largely undifferentiated IT infrastructure if it leaves us even a little room for innovation, because that small margin of differentiation is often enough for people, smart people, to build competitive advantage.

That's what the entire discussion about IT and its impact on business boils down to: People. Preferably, smart people. Now there's an idea with legs….

Deutsch's Fallacies, 10 Years After @ JAVA DEVELOPER'S JOURNAL By: Ingrid Van Den Hoogen

In the fall of 1991, when mobile computing involved a hand truck and an extension cord, the idea of an everything-connected world was a leap of faith to some and a really crazy idea to most. But Sun's engineers were already working on notebook computers, and Peter Deutsch, one of Sun's original "Fellows," was heading up a task force to advise Sun on its mobile strategy.

Deutsch just called 'em like he saw 'em. When he got to Sun, he began to examine the notions engineers held about network computing, some of which were downright foolish.

Coming off a stint as chief scientist at ParcPlace Systems, Deutsch was looking to hang out, cogitate, and bask in Sun's intense engineering culture. He was a key designer and implementer of the Interlisp-D system and a significant contributor to the design of the Cedar/Mesa language and the Smalltalk-80 programming environment. But he hadn't gotten into networking. It might have been some sort of intellectual hazing ritual that made Deutsch co-chair of a mobile computing task force. Or it might have been brilliance.

Bill Joy and Dave Lyon had already formulated a list of flawed assumptions about distributed computing that were guaranteed to cause problems down the road: the network is reliable; latency is zero; bandwidth is infinite; and the network is secure. James Gosling, Sun Fellow, Chief Technology Officer for Java Developer Platforms, and inventor of Java, had actually codified these four, calling them "The Fallacies of Networked Computing."

"It's a sort of funny thing," he says, "that in the large-scale world, networking didn't really exist in 1995. But since Sun was founded in 1982, networking has been at the core of what we do. We cut ourselves on all these problems pretty early on."

What Deutsch saw with fresh eyes was that, despite Gosling's warning, as engineers - inside and outside Sun - designed and built network infrastructure, they kept making the same mistakes, based largely on the same basic yet false assumptions about the nature of the network.

"The more I looked around at networking inside and outside Sun," Deutsch says, "the more I thought I could see instances where making these assumptions got people into trouble." For example, Deutsch could see Gosling's Fallacies coming into play as Sun moved its operations from downtown to its glamorous new campus near the San Francisco Bay.

"There was a lot of thrashing around about the topology of the network for the corporate intranet, where routers should be, etc.," Deutsch recalls. "Things broke all the time. My recollection is that it was watching all that thrashing around that led me to numbers five and six - that there's a single administrator and that the topology won't change." Number seven, that transport cost is zero, coalesced as Sun discussed creating a wide area network to connect the Mountain View campus with a new lab on the East coast.

When Deutsch wrote the list, by this time seven items strong, into a slide presentation, "It was no big deal," he says. Neither was there a roar of acclamation. Rob Gingell, Sun vice president, chief engineer, and Fellow, remembers it as an mmm-hmm moment rather than an ah-ha moment.

But putting it down on paper, codifying it, made all the difference. "The list Peter wrote down was a very pithy summary of the pitfalls people typically fall into," Gosling says. "We had many conversations about it. There would be a presentation where somebody would be proposing some design, and somebody would point out, 'You know, that kind of depends on the network being reliable. And that's false.'"

Gosling added the Eighth Fallacy - the assumption that the network is homogeneous - in 1997 or so. "It reflected a mismatch between our perception and others' perceptions of the network. That the network is homogeneous is never a mistake we've made. But it was clear that lots of people on the outside had a tendency to fall into this."
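
For a flavor of what designing against the first two fallacies looks like in code (an illustrative sketch, not Sun's; the timeout, retry count and back-off values are arbitrary), a remote read can be written so that it assumes neither a reliable network nor zero latency:

```python
# A remote call that assumes the network is unreliable and latency is non-zero:
# bounded timeout, limited retries, exponential back-off. Values are arbitrary.
import time
import urllib.request


def fetch_url(url: str, timeout_s: float = 2.0, retries: int = 3) -> bytes:
    delay = 0.5
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return resp.read()
        except OSError as exc:      # URLError, timeouts, resets all derive from OSError
            last_error = exc
            time.sleep(delay)       # back off before the next attempt
            delay *= 2
    raise RuntimeError(f"remote call failed after {retries} attempts") from last_error
```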

If the Fallacies pervade networking, Sun was perhaps ideally placed to identify and surmount them, says Gosling. "We have a very strong engineering culture," he says. "A lot of our work is about building solid reliable systems."

Beginning with Java, Sun has explicitly grappled with the Fallacies. Java's "write once, run anywhere" approach celebrates heterogeneity, according to Gosling. While nobody exactly tacked the Fallacies up on the wall when Jini was designed, says Gingell, "If you look at Jini, it's apparent that what it's trying to do is confront some of the Fallacies, because the Jini team and Peter shared a view of those problems." Jini lets diverse devices discover and interact with each other without any administrator at all; it also can deal with multiple administrators acting divergently.

Indeed. "Almost everything in Jini is about dealing with that list," Gosling says. "It's all about the dynamic, spontaneous reconfiguration of networks in the face of a dynamic environment. Things getting plugged in, things getting broken."

The Liberty Project is another example of Sun engineering's acknowledgment of the single administrator fallacy, according to Juan Carlos Soto, product marketing group manager for Project JXTA and the jxta.org open source community manager. "Liberty provides a federated identity mechanism that makes it convenient for businesses to interact while still respecting privacy, because there's no single point of control," he says.

JXTA goes perhaps furthest in turning all of the Fallacies into truths. It creates an ad hoc virtual private network among peer devices that aren't always at stable addresses, Soto says, so it solves the problems of multiple administrators, changing topology, and heterogeneous networks. Moreover, because JXTA makes it easy to create multiple instances of a service, and peers cooperate to move messages through the network, it creates great resiliency that counteracts the inherent unreliability of networks. It's designed to be efficient and conserve bandwidth as much as possible, reducing the impact of Fallacies Two and Three, and it accepts every authentication method for flexible and tight security.

"P2P is turning the computer into both a server and a client," Soto says. "It's way beyond a single administrator and a homogenous network. It abstracts a lot of those issues away."

The Sun days are a long time ago for Deutsch, who now, as president of Aladdin Enterprises, consults for technology and venture capital companies. He's a bit bemused that his short list of big goofs has become institutionalized as Deutsch's Fallacies. "If you had told me when I was at Sun that 10 years later, this one page of maxims was going to be one of the things I was best known for," he says, "I would have been floored."

Although networking has changed plenty since 1991, and some of the Fallacies, such as betting on a secure network, are more obvious, they are all still as applicable, Gingell says, and they continue to be driven into the hearts and minds of Sun's engineers. "Cultural reeducation is more important than any one product," he says. "If all our engineers understand networking at a very large scale... the products will take care of themselves."

Published Jan. 8, 2004

The future of IT: It's not all bad news, Nicholas Carr says - Network World

Two weak books later, the author has some doubts ;-)
But whether a service provider will really be cheaper than the local model remains to be seen. Rip-offs in outsourcing are too common to ignore, and I do not see why they should be less common in the service-provider world. --NNB

Bret: In your book, Does IT Matter?, your premise is that IT (narrowly defined as the plumbing and pathways through which data is passed and/or processed) no longer provides a competitive advantage and that IT will become a commodity provided by service providers. We are seeing the beginnings of this in my company. We have outsourced device monitoring and we are planning to contract out device installation and wiring in our new data center. That said, over what period of time would you expect to see utility computing becoming the norm in industry?

Nick Carr: I think the shift will occur over the next decade or two. The speed will depend on who you are. Consumers are well on their way to relying on Web apps. Smaller companies will also likely move quickly to the utility model, as it allows them to avoid the big capital and labor costs of internal IT. Big companies will be the slowest to move - they'll pursue a hybrid model for many years.

JoeB: Who will be the first to adopt such a business model? Who will experience the most difficulty?

Nick Carr: On the business side, it will be small companies with fairly routine IT requirements. Large companies who face tight regulations on data and privacy - like those in the healthcare industry - face some of the biggest barriers.

Barmijo: Nick, many reviews of your book take issue with the idea of utility computing. They note that the big vendors don't offer anything approaching your vision. You, however, focus on Internet operators like Google and Amazon. Do you believe this shift has to start in the Internet, and if so why?

Nick Carr: Like other disruptive innovations, utility computing is beginning with entrepreneurial vendors and smaller customers. But we can already see big vendors - Microsoft, SAP, IBM, HP etc - moving toward the utility model. I think they see which way the wind's blowing.

JeffB: Although Microsoft, Google and other major computer companies are building massive data center facilities in specific areas, do you believe there will also be a need for distributed server farms in high population areas throughout U.S. cities?

Nick Carr: I think we'll see a very broad distribution of server farms, and some will be in or near cities. Microsoft, for example, is building its largest utility data center - and it may be the largest data center ever - just outside of Chicago. One of the big constraints on locating centers in urban areas is the electric grid's ability to handle the power requirements.

Tagamichi: While it makes sense to aggregate computing power, there is a big difference between computers and electrical utilities in that the devices that need electrical power are dumb in many ways. Take a typical small appliance - it just connects, powers on, and you use it. Computing resources require more technical know-how to manage and operate. Compare appliances with even the most basic use of a computer for Web browsing and e-mail. When something goes wrong, the user will not know if it is something as simple as a loose cable connection. You would still need IT to fix that. How do you account for such needs in your vision of the future?

Nick Carr: That's a good point. I'm not suggesting that companies won't have to hire, or at least contract for, technical support people - just that the requirements will likely be reduced over time. One trend that's just beginning is the virtualization of the desktop, which will reduce the maintenance requirements. What companies may end up doing, for knowledge workers at least, is let employees buy their own machines, and then serve up the "business desktop" virtually - as a separate, secure window on the screen.

Jwff: I agree with moving work that doesn't add value to outside the IT organization. Once that systems maintenance work is removed, the focus of IT departments can be more appropriately turned to activities that add value, such as mining end-user data for business value. So isn't a move to consume commodity IT services from the cloud merely an evolutionary (not revolutionary) way to move away from the 70% sustainment effort that dominates many IT shops today?

Nick Carr: Absolutely. Most companies' IT budgets and labor requirements are dominated by the need to build and run private systems. As more and more of those requirements are taken over by utility suppliers, the nature of the IT departments will shift toward information management and analysis activities. A big question, though, is whether those skills will need to stay in an "IT department" or whether they'll flow naturally into the business.
Mbaum: The IT department is dead. Long live IT. How then will corporations determine which "cloud services" to adopt, which providers to select, and how to provide oversight in determining compliance, effectiveness, and productivity?

Nick Carr: The IT department is far from dead yet - don't believe everything you read in Network World :-) - and will play the central role in managing the shift to the utility model and the coordination between Web-based services and those supplied locally.

TechnoSteve: A lot of companies may be turned off by utility computing because they do not want their data in the hands of a third party. How do this and other privacy concerns play into the future you're describing?

Nick Carr: That's a great question - and one I hear often. The control and security of corporate information is, of course, an extremely important issue for companies. But, as we've long seen with payroll data and more recently seen with customer relationship data, companies are willing to put sensitive information in the hands of trusted suppliers. Indeed, I think there's a good case to be made that outside utility suppliers, whose entire business hinges on their ability to keep data protected, may ultimately prove more secure than the current, fragmented model of data processing, which as we know has many vulnerabilities. So while I think data security is a crucial concern (and the onus is on the suppliers to prove their trustworthiness), I don't think it's ultimately a deal-breaker. You can be sure we will also see continued rapid advances in encryption and other technologies that will enhance the security of data in multi-tenant systems.

Brian: In "The Big Switch," you talk about the idea of computing power as a utility. Doesn't this model rely on a neutral Internet? Could non-neutrality force people back to a client-server model if backbone providers (such as AT&T which has announced its intention to do so) start shaping traffic to degrade certain competitive SaaS applications?

Nick Carr: That could be a concern, but my guess is that if we move away from net neutrality it will be to give preference and priority to business and commercial information (over more personal information flows, like P2P trading). The net has become so important to the economy that I doubt that business and government will allow service to degrade for crucial business functions.

AroundOmaha: Do you see this affecting the model for how offices are structured? By that I mean, will we see companies with virtual employees all over the country and even the planet? If the application isn't tied to any particular place, it would seem to free the workers up also.

Nick Carr: To some degree I think we'll see that - and are in fact already seeing greater mobility. But we also know that people like to work in close proximity to their colleagues - conversations in hallways will always be important - so I don't think we'll see the end of good-sized physical offices.

Vcleniuk: Even if IT departments and jobs move from corporations into the "cloud," this cloud will still run on something, somewhere, right? Won't IT jobs move into those areas? Wouldn't there still be a need for a few large data centers, or even several smaller data centers in various countries?

Nick Carr: Yes, I think we'll see many of the technical jobs shift from the users to the suppliers. At the same time, though, we're also seeing a great deal of progress in the automation of IT work, through virtualization, for instance. So we can anticipate that certain kinds of IT jobs will probably diminish. It's also important to recognize that as the 'Net becomes our universal medium, software capabilities will increasingly be built into a whole range of consumer products - which will increase demand for talented computing professionals. It's not all bad news.

Jim: I have not read "The Big Switch" yet, but from the excerpts you seem to be on the right track. I am an old-timer IT director and I can see the writing on the wall. Question, what would you suggest a smaller organization IT person should be doing to prepare for the "switch"? I am personally thinking about retirement, but I have many colleagues who are part of the conventional IT world who are much younger.

Nick Carr: Retirement's probably a good option. (Just kidding.) I think IT professionals, particularly younger ones, need to take a close look at the trends in IT - utility computing, virtualization, consolidation, automation - and anticipate how these trends will reshape the labor force. You want to position yourself to be in the employment areas that have the best job prospects, which may mean seeking out additional training or different experiences.

Undecided: As the shift occurs, what would be some ways IT professionals can shift as well, re-tool/re-train, where should they focus their efforts?

Nick Carr: I touched on this in the previous answer. To be more concrete, you probably should be wary of setting a career path focused on administrative and maintenance jobs that can be automated and will thus require far less labor in the future. And you should also look at the new big server farms and how they operate - skills related to large-scale parallel processing, grid computing and hardware virtualization will likely be in very high demand.

Paul: Do you see a market for consultants that specialize in helping small and medium businesses move to utility computing?

Nick Carr: I do - even though I would hope that in the long run the greater simplicity of the utility model will mean that companies should be able to operate with fewer consultants and outside advisors. For the foreseeable future, consultants and other vendors focused on smaller and mid-sized business will likely have greater service opportunities related to the transition to Web apps and other utility services.

AroundOmaha: In this model how does the future look for corporate application developers? How will this affect them and what steps should they be taking for their future?

Nick Carr: The need to keep existing custom apps running, and to write new ones, is going to be around for quite a while. Software-as-a-service is not the only software model for utility computing. Running custom apps on utility grids - like that being pioneered by Amazon Web Services - is another important facet to the shift. In the long run, the demand for internal app developers will probably diminish, as the SaaS market expands and matures.

ITAntiMatter: I should have become a plumber. :-)

Moderator-Keith: Nick, ITAntiMatter is joking, but that brings up a good point. Can you give the audience some good news regarding their careers and jobs?

Nick Carr: I need to be honest: We're going to see a great deal of disruption in the IT job market over the coming years. But the good news is that IT is becoming more, not less, important to the global economy, and that means if you have the right set of skills you're likely to have rich opportunities - not just as an employee but as a potential entrepreneur, too.

littleone: When the power goes out, the systems go down, taking the business down with them. Do you think, in most cases, that any company is willing to risk its entire system on continual power outages? How do you think the upcoming utility computing companies are going to account for that when they can only generate power for what they control?

Nick Carr: It's ironic that an old utility - electric power - is proving to be one of the weak links in building a new utility. What I think we're seeing from utility giants like Google is an extremely sophisticated approach to managing power and to ensuring redundancy in their systems. But you're right that in the short run, at least, the grid is very vulnerable to disruptions - from power shortages to botnet attacks.

MJ: How does this shift fit with end user devices? There are massive amounts of computing power now showing up in all sorts of end devices. Are we just revisiting an old question -- kind of the mainframe-versus-distributed-computing debate repackaged?

Nick Carr: Great question. It's wrong to think of the utility model as a 100% centralization model. In fact, one of the things that fosters the efficient use of centralized computing and data-storage resources is the availability of powerful processors and data stores in local devices. The central services can make use of those local assets. Think of how World of Warcraft works - using code and data on both central servers and local devices. That's an important part of deploying Web-based services effectively.

JeffB: The fiber to the curb initiatives of Verizon and AT&T will provide bandwidth capable of true utility computing - do you agree that communication speed is necessary for the true revolution?

Nick Carr: Yes. It's the fiber-optic net that makes this all possible. As we see network capacity, particularly at the last mile, continue to expand, the applications will proliferate. People have been talking about utility computing for 50 years, but it isn't until recently that we've had the high-capacity transmission network that is making it a reality.

Moderator-Keith: A theme to many of the questions we're receiving is security. How does this shift affect security issues, both the security of data (private credit-card numbers) and privacy and compliance issues -- and will botnet attacks ever go away?

Nick Carr: As for botnet attacks and other malicious behavior, I think they're likely to intensify rather than diminish. (The bad guys can also reap the benefits of the utility model.) So there's not going to be any end to the cat-and-mouse game. But I think the technologies for protecting data, such as encryption, are also going to advance rapidly - and that the utility system will ultimately prove more secure than the existing system of fragmented but networked systems.

Barmijo: A lot of folks are focusing on the first half of your book (and frankly on Does IT Matter), but the second part of "The Big Switch" seems almost like a cautionary note about the dangers of sharing too much data. I think we can already see the start of this with Google's selective enforcement of their own version of English grammar and spelling in Adwords and also in the way Google presents ads that link to content in e-mail. Do you believe we will have to regulate who can own and process some of these huge data stores the way we used to regulate how many newspapers or TV stations a company could own?

Nick Carr: Much hinges on how much control and power ends up being consolidated in the hands of big companies like Google. I have no doubt that Google is sincere in trying to be a good corporate citizen, but as you point out it also has commercial and even ideological interests in how it presents and filters information. And that doesn't even touch on the potential for using the vast stores of personal and behavioral data to actively manipulate people. So my guess is that if Google and others continue to consolidate power and control, they will face governmental regulation - perhaps most heavily in countries outside the U.S., which will not want their economic and cultural infrastructure to be controlled by big U.S. companies.

[Apr 27, 2008] Wired Campus: Nicholas Carr Was Right - IBM's Project Kittyhawk Unifies the Internet - Chronicle.com

Nicholas Carr, a writer notorious in the technology world for his brazen views about the future of the IT industry, writes about an IBM white paper describing Project Kittyhawk.

The white paper, uncovered by The Register, a Britain-based technology Web site, describes the project as an earnest attempt to build a "global-scale shared computer capable of hosting the entire Internet as an application."

Comments

We already had computing as a public utility in the IBM 360 generation and that produced a revenge of the little people in the form of PCs which upended IBM big time. If we get computing as a public utility again we will get another reaction from the little people breaking free of the dumbing down, the mass audience stupidity of editor and viewer, the crass worship of leaders and celebrities, that constitute the vilest parts of American and most other cultures.

- Richard Tabor Greene Feb 8, 12:16 AM

[Apr 27, 2008] Q&A with Nicholas Carr, author of The Big Switch - by Kevin Ryan

Mar 18, 2008 | Search Engine Watch

Just as cheap power delivered over a universal grid revolutionized the processing of physical materials, cheap computing delivered over a universal grid is revolutionizing the processing of informational or intellectual goods. That's the premise of The Big Switch: Rewiring the World, from Edison to Google, the latest book by Nicholas Carr, former executive editor of the Harvard Business Review.

Carr believes that ubiquitous computing is leading to an upheaval in traditional media businesses, which will spread to other sectors of the economy as more products and processes are digitized. He'll share his thoughts on why we're entering a new and even more disruptive era of computerization today, when he delivers the opening keynote at Search Engine Strategies New York.

Kevin Ryan, VP of global content for SES, recently interviewed Carr about the forces driving the adoption of cloud computing, the dangers of Big Brother, and the ethics of search engine optimization.

Kevin Ryan: Your 2004 book, Does IT Matter? Information Technology and the Corrosion of Competitive Advantage, described the plug-in IT world. Now we are looking at a plug-in computing world. What factors do you think will force this change?

Nicholas Carr: I think it's a combination of technical and economic factors. On the technology side, advances in networking, virtualization, parallel processing, and software are combining to make it possible to deliver rich applications over the Net in a way that is often superior to running similar applications on local machines.

Shared software makes collaboration much easier – for instance, as we see in social networks and other Web 2.0 sites. Consolidating apps in central data centers also provides big economic benefits through scale economies in hardware, software, labor, and other resources like electricity. So that combination of ongoing technological advances and growing economic benefits is going to continue to push computing functions out into the so-called cloud.

KR: The central data and application central warehousing issue is interesting, and we have already seen some applications go virtual from Google. Do you think consumers will adopt this en masse, or are we looking at a slow adoption process over a period of many years?

NC: What we've seen with the Internet from the very start is that people draw little distinction between applications installed on their hard drives and applications running over the Net – they just go with whichever option provides the greatest convenience, the best features, and the lowest cost.

As web apps continue to improve, in terms of features and speed, I think we'll see consumers accelerate their shift from private to public apps, and they'll store more and more of their data online. The shift will proceed over many years, but if you look at how young people in particular are using their computers, you could argue that they're already mainly operating in the cloud. The idea of buying, installing, and maintaining packaged software is becoming foreign to them.

[Apr 26, 2008] Q&A: Author Nicholas Carr on the Terrifying Future of Computing

12.20.07

Carr: Most people are already there. Young people in particular spend way more time using so-called cloud apps - MySpace, Flickr, Gmail - than running old-fashioned programs on their hard drives. What's amazing is that this shift from private to public software has happened without us even noticing it.

Wired: What happened to privacy worries?

Carr: People say they're nervous about storing personal info online, but they do it all the time, sacrificing privacy to save time and money. Companies are no different. The two most popular Web-based business applications right now are for managing payroll and customer accounts - some of the most sensitive information companies have.

Wired: What's left for PCs?

Carr: They're turning into network terminals.

Wired: Just like Sun Microsystems' old mantra, "The network is the computer"?

Carr: It's no coincidence that Google CEO Eric Schmidt cut his teeth there. Google is fulfilling the destiny that Sun sketched out.

Wired: But a single global system?

Carr: I used to think we'd end up with something dynamic and heterogeneous - many companies loosely joined. But we're already seeing a great deal of consolidation by companies like Google and Microsoft. We'll probably see some kind of oligopoly, with standards that allow the movement of data among the utilities similar to the way current moves through the electric grid.

Comments

Posted by: RichPletcher


You know .... it never fails to amaze me that, for as smart as some people are, they are capable of uttering such total gibberish. This smacks of the Larry Ellison proclamation of 1996 that the only thing you need is the "internet appliance"

Posted by: ka1axy


The only problem with PC as network terminal is the same one that's been dogging that paradigm since the Sun X-terminal concept...link bandwidth and latency. Neither then nor now, will users be willing to put up with slow refreshes. I'll keep my co...


Posted by: eliatic


"Nicholas Carr is high tech's Captain Buzzkill - the go-to guy for bad news." Buzzkill. Now there's a word that sums up the western tendency to see things as either/or dualities. As echoed in the word manic. Only, there's a choice, a reality, that l...

[Apr 26, 2008] AT&T Denies Resetting P2P Connections

Posted by Soulskill on Saturday April 26, @08:14AM
from the it-was-the-one-armed-isp dept.

betaville points out comments AT&T filed with the FCC in which they denied throttling traffic by resetting P2P file-sharing connections. Earlier this week, a study published by the Vuze team found AT&T to have the 25th highest (13th highest if extra Comcast networks are excluded) median reset rate among the sampled networks. In the past, AT&T has defended Comcast's throttling practices, and said it wants to monitor its network traffic for IP violations. "AT&T vice president of Internet and network systems research Charles Kalmanek, in a letter addressed to Vuze CEO Gilles BianRosa, said that peer-to-peer resets can arise from numerous local network events, including outages, attacks, reconfigurations or overall trends in Internet usage. 'AT&T does not use "false reset messages" to manage its network,' Kalmanek said in the letter. Kalmanek noted that Vuze's analysis said the test 'cannot conclude definitively that any particular network operator is engaging in artificial or false [reset] packet behavior.'"

Re:Confirmed? (Score:5, Informative)

by budgenator (254554) on Saturday April 26, @09:19AM (#23206838) Journal

No, and Vuze was quite up-front about the study: they basically measured the number of RST messages and divided it by the number of network connections. The numbers weren't intended to be accurate but rather to give an indication of relative trends.
For example,
37 users on Telecom Italia France using ASN 12876 experienced a median of 2.53% RST messages;
27 users on AT&T WorldNet Services using ASN 6478 experienced 13.97% RST messages;
24 users on AT&T WorldNet Services using ASN 7018 experienced 5.35% RST messages;
40 users on Comcast Cable using ASN 33668 experienced 23.72% RST messages.
One thing you have to remember is that forged RST packets are a man-in-the-middle attack: the Vuze plugin running over an AT&T connection doesn't know if the RST came from AT&T at ASN 6478, AT&T at ASN 7018, Comcast, or Telecom Italia France.
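
The calculation described in this comment is simple enough to sketch. The figures below are invented placeholders, not Vuze's measurements; only the shape of the computation (per-user reset rate, then the median per ASN) matters.

```python
# Vuze-style summary: per user, reset rate = RST packets / observed connections,
# then the median of those rates across users on the same ASN.
# The sample tuples below are invented for illustration only.
from collections import defaultdict
from statistics import median

samples = [            # (asn, rst_count, connection_count) per sampled user
    ("AS7018", 107, 2000),
    ("AS7018", 90, 1800),
    ("AS7018", 130, 2500),
    ("AS6478", 280, 2000),
    ("AS6478", 260, 1900),
]

rates_by_asn = defaultdict(list)
for asn, rst, conns in samples:
    rates_by_asn[asn].append(rst / conns)

for asn, rates in sorted(rates_by_asn.items()):
    print(f"{asn}: median reset rate {median(rates):.2%} across {len(rates)} users")
```

As the comment notes, nothing in such a summary says where along the path the RST packets were actually injected.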

[Apr 26, 2008] IT Commandment: Leave the ideology to someone else - Rational rants - ZDNet.com, by Paul Murphy

February 21st, 2006 | Managing L'unix

About two o'clock in the morning, I heard Bukowski's publisher talking about the New Formalists, a group of poets that wanted to take poetry back to the strict forms, such as sonnets and metered verse, allegedly because they were offended by the likes of Bukowski's rude honesty in free verse.

... ... ...

"As the spirit wanes, the form appears," Bukowski had written...

Our IT Commandments:
  1. Thou shalt not outsource mission critical functions
  2. Thou shalt not pretend
  3. Thou shalt honor and empower thy (Unix) sysadmins
  4. Thou shalt leave the ideology to someone else
  5. Thou shalt not condemn departments doing their own IT
  6. Thou shalt put thy users first, above all else
  7. Thou shalt give something back to the community
  8. Thou shalt not use nonsecure protocols on thy network
  9. Thou shalt free thy content
  10. Thou shalt not ignore security risks when choosing platforms
  11. Thou shalt not fear change
  12. Thou shalt document all thy works
  13. Thou shalt loosely couple

The Big Switch – It Sucks.

The Big Switch by Nicholas Carr is a new book about how the IT department does not matter. What the book really represents is a hunk of wasted paper chock-full of inaccuracies without any real backing. To be honest, I've only read the first chapter and cannot read any further.

The book starts out with a story in the prologue about how the author met with some people in Boston who showed him that IT could be a utility. It asserted that this was his idea from some previous book he wrote. The story itself was boring and meaningless. It didn't grab my attention nor was there any good information to keep me reading.

Hoping that the prologue was simply not representative of the rest of the book, I continued reading to see where this was going. The author asserts that there is a new technology that would allow the entire IT department to be outsourced. I don't understand how this is a new technology: companies have been outsourcing this department for over ten years now. Carr does not explain himself in any fashion to prove his theory.

The next problem I saw was the order in which content was presented. For example, the author mentions placing an order on Amazon's site over a dial-up connection. The very next paragraph talks about the spread of broadband and how it has changed everything. Amazon did not become popular until broadband was in use. Most people did not order anything online until after they had broadband; people would go into work and order items online because they had broadband at work.

The final straw for me was when the author mentioned Napster. The book states that Napster was created by a college dropout and that it started Web 2.0. Napster was started by Shawn Fanning while he was attending college; he dropped out later, after the success of the software, which in any case belonged to the first dotcom bubble. The technology might have shown people how to create the services we use today, but it did not start Web 2.0.

The author clearly does not know technology and should not be writing about it. I've only read the first chapter of this book and will not read any more; it's worthless to me. The first section of this book is boring and filled with inaccuracies; it reads like a bad business blog trying to write about technology – so bad it's almost humorous.

The Big Switch by Nicholas Carr

The Critical Blind Spots

I'll finish up with the critical blind spots I mentioned. So long as you are aware of these, the book is a safe and useful read:

Not that I have better treatments of these problem areas to offer, but for now, I'll satisfy myself with pointing out that they are problem areas.

Cloud computing - in your dreams Storage Bits ZDNet.com

A particularly odd bit of goofiness has hit the infosphere: cloud/utility computing mania. Nick Carr has written a book. IBM has announced, for the umpteenth time, a variation on utility computing, now called cloud computing. Somebody at Sun is claiming they'll get rid of all their data centers by 2015.

R-i-i-i-ght.

You know the flying car in your garage?
The syllogism is:

  1. Google-style web-scale computing is really cheap
  2. Networks are cheap and getting cheaper fast
  3. Therefore we're going to use really cheap computing over really cheap networks Real Soon Now

Can you spot the fallacies?

Fallacy #1: Google is Magick
The world's largest Internet advertising agency does have the cheapest compute cycles and storage (see my StorageMojo article Killing With Kindness: Death By Big Iron for a comparison of Yahoo and Google's computing costs). But they do nothing that the average enterprise data center couldn't do if active cluster storage were productized.

Google built their infrastructure because they couldn't buy it. They couldn't buy it because no one had built it. But all Google did was package up ideas that academics had been working on, sometimes for decades. Google even hired many of the researchers to build the production systems. Happy multi-millionaire academics!

Blame vendor marketing myopia for missing that opportunity. But their eyes are wide open now. If your enterprise wants cluster computes or storage you can buy it. From Dell.

Fallacy #2: Networks are cheap
Or they will be Real Soon Now.

10 Mbit Ethernet from Intel, DEC and Xerox came out in 1983. A mere 25 years later we have 1000x Ethernet - 10 GigE - starting down the cost curve.

About the same time a first generation 5 MB Seagate disk cost $800. Today a 200,000x disk - 1 TB - costs 300 vastly cheaper dollars.

Also in 1983 the "hot box" - a VAX 11-780 - with a 5 MHz 32-bit processor and a honking 13.3 MByte/sec internal bus cost $150,000. Today a 64-bit, 3 GHz quad-core server - with specs too fabulous to compare - is $1300. Call it 1,000,000x.
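
For readers who want to check the factors quoted here, the arithmetic is just ratios of the post's own round numbers (a sketch, not fresh measurements):

```python
# Ratios behind the post's comparison (using its own round figures).
ethernet_factor = 10_000 / 10         # 10 Mbit/s (1983) -> 10 Gbit/s
disk_capacity_factor = 1_000_000 / 5  # 5 MB -> 1 TB, expressed in MB
disk_price_factor = 800 / 300         # $800 then -> $300 now, per drive

print(f"Ethernet bandwidth: {ethernet_factor:,.0f}x")        # ~1,000x
print(f"Disk capacity:      {disk_capacity_factor:,.0f}x")   # ~200,000x
print(f"Disk price:         {disk_price_factor:.1f}x cheaper per drive")
```

The gap between a roughly 1,000x network improvement and a 200,000x (or, for servers, the author's "call it 1,000,000x") improvement in what sits at each end of the wire is the whole argument for keeping data close to the processor.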

Networks are the bottleneck, not the solution. Hey, Cisco! Get the lead out!

What's really going on?
There are - currently - economies of scale, which Google is exploiting and MSN and Yahoo! aren't. So the latter two are going out of business.

But when you look at the cost of going across the network compared to the rest of infrastructure you realize that local - what we used to call distributed - computing is the only way to go.

Ergo, cloud computing will remain in the clouds and real computing will remain local. Where you can kick the hardware and savor fan hum and blue LEDs.

Sure, some low data rate apps - like searching - can move to the web. But if you want a lot of data and you want it now, keep your processor close and your data closer.

Comments

  1. "Cloud Computing" is nothing but a high tech perpetual motion machine
    I see that Donnieboy is all fired up in sales to give us the latest BS about how wonderful central control is. He tells us that it is cheaper to let someone else do it for us and save the money. Problem is, someone else has to do it, pass the costs on to us, and charge us extra for the profit margin. Anyone can run their own data center, and retail customers are starting to do so now with home servers. We also give up all control when we let someone else do it: lost data, slow speeds, ransom, loss of service, theft and lower productivity are what make distributed systems quite attractive. We got away from this octopus 20 years ago because distributed computing led to the PC instead of IBM's lock-step total control. Can't wait for this IBM road kill to finally die off so we can move on to the next phase, instead of hearing all this distraction about ancient technology, the ultimate rental-ware with a price to match.
  2. You left out the most important reason.

    Companies providing a service must make a significant profit, or it's a self-indulgence. Because of its inflated stock price and advertising revenues, Google can allow some employees to pursue their hobbies.

    When cloud computing is considered in future, the topic will be why people in the press allowed themselves to be persuaded that internet computing was significant. The chapter heading will be: Deluded hopes to reduce Microsoft dominance

  3. Steve Jones said,
    on January 30th, 2008 at 2:27 am

    I'm always a bit concerned when cost curves for different product groups are combined, as the result depends on which factors you choose. Yes, the cost per MB of disk storage has fallen several orders of magnitude faster than network costs (although comparing a 10GbE switched network with a 10 Mbps shared-bus topology isn't quite right; there's far more than 1,000x the data-carrying capacity in a 10GbE switch than in what was a very expensive coax LAN).

    However, if we look at a different performance metric - IOPS and MBps in the hard disk market - then you get a very different picture. Today's 15K hard drive can do maybe 180 random IOs per second and perhaps 80 MBps. Those are respectively around 5 and 100 times better than what was available 25 years ago. Given that many applications are limited by storage performance, this is a real issue.

    So when you make your comparison, be careful over the choice of metrics; you get wholly different answers depending on what you choose. Many of the reasons are governed by physics and geometry. Disk storage capacity increases with the square of linear bit density (as, famously, does semiconductor capacity). However, sequential read speed goes up only linearly with bit density, constrained by the physical limits of moving parts. Basic network speed for a single serial link is constrained by all sorts of things, including transistor switching speeds (which have increased by maybe 1,000x in those 25 years). That's another of those linear relationships.

    So square relationships (storage capacity, RAM, aggregate switching capacity, etc.) proceed at a different rate from linear relationships (like sequential read speed), with mechanical ones far behind both; a short arithmetic sketch after these comments makes the gap concrete.

  4. Gavin said,

    on January 29th, 2008 at 2:13 am

    Also none of this is really new even conceptually. The grid folks have been pushing the 'compute-anywhere' vision for years.

    The issues that limit grid aren't being solved by calling it 'cloud'. Data is valuable - companies need to control where it is, who gets access. Algorithms being executed can also be extremely valuable and are trade secrets in many industries. Moving data is slow and expensive and the trend isn't for this to improve (data sets are doubling per year in many industries). However CPU cycles are cheap and getting cheaper.

    Google is a very special case with many advantages when it comes to 'cloud' type computing - however even they are secretive about algorithms used, and I doubt they'd be interested in letting their computes be shared by just anyone.

  5. nyrinwi said,

    on January 28th, 2008 at 11:36 am

    And what about the security implications of "cloud computing?"

    Security has been an afterthought in so much of technology history, it's embarrassing. We've got computer viruses on picture frames now, for god's sake! When we can make appliances that are virus-free, then I'll trust my data to the clouds.

    Cloud computing? I can't get the visions of Skynet and the terminator out of my head.
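
Here is the arithmetic sketch referred to in comment 3 above, using the commenter's rough improvement factors together with the main post's 5 MB to 1 TB capacity comparison (all assumed round numbers, not measurements); it shows why the choice of metric flips the picture.

```python
# Square-law vs. linear (and mechanical) improvement, per the figures above.
capacity_factor = 200_000   # ~5 MB per drive then vs ~1 TB now (main post)
iops_factor = 5             # random IOPS improvement over ~25 years (comment 3)
seq_read_factor = 100       # sequential throughput improvement (comment 3)

# Random-access performance per unit of stored data has collapsed:
print(f"IOPS per byte stored is now ~1/{capacity_factor // iops_factor:,} "
      f"of what it was")
print(f"Sequential MB/s per byte stored is now ~1/{capacity_factor // seq_read_factor:,} "
      f"of what it was")
```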

Bill St. Arnaud: The IT department is dead - Nicholas Carr


[Another provocative book by Nicholas Carr. In general I agree with him, except that I think his vision is too narrowly limited to utility computing. Web 2.0, SOA, and IaaS (Infrastructure as a web service) will be additional services offered by clouds operated by Google, Amazon and so forth. And as I have often stated in this blog, I believe elimination or reduction of CO2 emissions may also be the critical enabler, as opposed to the putative business-case benefits of virtualization and web services. The classic example is cyber-infrastructure applications, where take-up by university researchers has been disappointing (and is one of the themes at the upcoming MardiGras conference on distributed applications).

http://www.mardigrasconference.org/

Carr uses the analogy of how companies moved from owning and operating their own generating equipment to purchasing power off the grid. This was only possible once a robust and wide-scale transmission grid was deployed by utilities. However, this is the component that is missing in Carr's vision with respect to utility computing and virtualization - we do not yet have the communications networks, especially in the last mile, to realize this vision.

And finally, while the traditional IT techie may disappear, I can see a new class of skills being required of "orchestration engineers" to help users and researchers build solutions, APNs and workflows from the variety of virtual services available from companies like Google, Amazon, IBM etc. Some excerpts from the Network World article
-- BSA]

Dell's Kevin Rollins vs. business guru Nicholas Carr - February 01, 2006 by Richard McGill Murphy

"When you take those standard components, which are now low cost, the question becomes, Which technology do you implement? and then, What you do with it? So IT does matter. But now what counts most is the execution and implementation of all the standard pieces. And you can do that poorly. You can buy the wrong pieces, or you can buy the right pieces and do it well."
January 31, 2006 | money.cnn.com

(FORTUNE Small Business) - Ever since tech pundit Nicholas Carr published a provocative Harvard Business Review article titled "IT Doesn't Matter," business leaders have been debating Carr's thesis that information technology is becoming a commodity input like water or electricity. Although tech vendors tend to bridle at this assertion, nearly all firms run Office applications on Intel and Windows-based machines. Hardware is now so cheap that just about everyone can afford it. Your competitors have access to the same online information you do. It's not clear that ubiquitous, standardized technology can help any particular firm stand out. In short, where's the competitive advantage? For two very different takes on those weighty issues, we checked in with Carr (http://www.nicholasgcarr.com), a sought-after writer and speaker on technology issues, and Kevin Rollins, CEO of Dell.

CARR: Over the years, modern computing has become more accessible, more standardized, more homogenized. It is getting cheaper and cheaper all the time. And that just means that it's harder for companies to sustain an advantage by being a technology leader. As these trends play out, it becomes easier and easier for other companies to catch up. So as we move to this next stage, we're going to see even smaller companies being able to catch up. This, I think, will further neutralize any advantage that IT provides.

ROLLINS: That's absolutely wrong. If that's true, then why don't all companies perform the same way? They've all got access to this standard technology! When you take those standard components, which are now low cost, the question becomes, Which technology do you implement? and then, What do you do with it? So IT does matter. But now what counts most is the execution and implementation of all the standard pieces. And you can do that poorly. You can buy the wrong pieces, or you can buy the right pieces and do it well.

You could make a similar argument for your telephone system. Clearly, telephones matter--you can't run a business without them ...

ROLLINS: No, you can't, but telephones are all exactly the same, essentially.

And what matters is whom you call and what you say to them?

ROLLINS: I think that's probably true. But IT is a little different still, because you can take the components and put them together a little differently. You can use a website or not. You can use standard servers, or you can use proprietary technologies, which is very expensive. And you can also buy your computer system from a reseller, or you can buy directly from a company like Dell. So [it matters] how you put these things together and whether you use the full benefits of standardized products or not.

CARR: Companies can outperform their rivals for all sorts of reasons. So variations in performance among firms with access to the same technology can probably be traced to factors other than technology, such as superior products or better customer service or an outstanding reputation. It's a fallacy to assume that IT is the only source of differentiation. But I agree with Mr. Rollins that companies can gain an advantage by managing information technology better than others. It's important to remember, though, that that's a management advantage, not a technology advantage.

[Mar 21, 2008] IT Department Dead? Hardly. Why Nicholas Carr is (mostly) wrong about SAAS - Network Performance Blog, by Brian Boyko

January 08, 2008 | www.networkperformancedaily.com

...SAAS is a wonderful development, and apps like SalesForce are, to the people that use them, godsends. However, unique company problems require unique solutions - SAAS services are looking to appeal to the largest common denominator. For that reason alone, IT will always have a place in the enterprise.

Additionally, if you want to connect to the network, which you most certainly will have to do to access your SAAS applications, you need network engineers to build and maintain the network - even if it's just for Internet connectivity. And what about application performance?

Google or other SAAS providers will not design your WAN to deliver large backups during off-peak hours, won't get your VoIP service to work with your data applications without clogging the lines, and won't help maintain your company's computer security. (Heck, if nothing else, when a key Ethernet cable gets unplugged, you need at least a sysadmin to find out which cable was unplugged and to physically run down there and plug it back in.)

Relying solely on SAAS is problematic at best. You're at the mercy of another company's quality control - and if the site goes down, so does your business. Your company's data - important and confidential data - resides on another company's servers. Finally, what about capacity planning?

That last one is crucial. You are usually not privy to the capacity of third parties. Larger SAAS services like SalesForce probably scale well and overprovision. But if Carr's thesis - that eventually most enterprise software will be SAAS - holds true, there will be some applications that are further down the long tail and service a much more limited number of customers.

if:book: Nicholas Carr on the amorality of Web 2.0

Carr is skeptical that the collectivist paradigms of the web can lead to the creation of high-quality, authoritative work (encyclopedias, journalism etc.). Forced to choose, he'd take the professionals over the amateurs. But put this way it's a Hobson's choice. Flawed as it is, Wikipedia is in its infancy and is probably not going away. Whereas the future of Britannica is less sure. And it's not just amateurs that are participating in new forms of discourse (take as an example the new law faculty blog at U. Chicago). Anyway, here's Carr:

The Internet is changing the economics of creative work - or, to put it more broadly, the economics of culture - and it's doing it in a way that may well restrict rather than expand our choices. Wikipedia might be a pale shadow of the Britannica, but because it's created by amateurs rather than professionals, it's free. And free trumps quality all the time. So what happens to those poor saps who write encyclopedias for a living? They wither and die. The same thing happens when blogs and other free on-line content go up against old-fashioned newspapers and magazines. Of course the mainstream media sees the blogosphere as a competitor. It is a competitor. And, given the economics of the competition, it may well turn out to be a superior competitor. The layoffs we've recently seen at major newspapers may just be the beginning, and those layoffs should be cause not for self-satisfied snickering but for despair. Implicit in the ecstatic visions of Web 2.0 is the hegemony of the amateur. I for one can't imagine anything more frightening.

He then has a nice follow-up in which he republishes a letter from an administrator at Wikipedia, which responds to the above.

Encyclopedia Britannica is an amazing work. It's of consistent high quality, it's one of the great books in the English language and it's doomed. Brilliant but pricey has difficulty competing economically with free and apparently adequate....

...So if we want a good encyclopedia in ten years, it's going to have to be a good Wikipedia. So those who care about getting a good encyclopedia are going to have to work out how to make Wikipedia better, or there won't be anything.

CEM9616

This article argues that the principal forces driving the new economics of information technology are:

(1) it is steadily increasing in value,

(2) academic demand for information technology and computing power is virtually unlimited;

(3) the per unit price of information technology is declining rapidly; and

(4) the total cost of owning and maintaining these systems is steadily rising. In other words, the potential benefits are truly revolutionary and the demand is insatiable -- but the falling prices mislead many to expect cost savings that will never materialize.

These forces, combined with the breathtaking rate of change inherent in IT, produce a unique economic environment that seems to breed financial paradoxes. The new economics are formidable. Shortening life cycles will force fundamental changes in how institutions manage these assets; the increasing value of IT and the pressure to spend more on it will make the financial crisis facing many institutions worse; and the ability of new technologies to transcend time and distance will intensify competition among institutions. Information technology will represent the single biggest opportunity to either enhance or damage an institution's competitive standing. Academic, technology, and financial leaders will have to come together as never before to address these issues.

New Statesman - Minds over matter

Terry Eagleton's The Function of Criticism (first published in 1984) is perhaps the best, and certainly the most accessible, of the 12. Where Eagleton explains why criticism is so essential, Raymond Williams, one of the founders of cultural studies, shows in Culture and Materialism how cultural criticism should be carried out. But not all the texts are as readable as these. Louis Althusser's love letter to Marx, For Marx (1965), would test the patience of even the most ardent admirer of dialectical materialism. None the less, it is a decisive book - and it is probably easier to read this than to plough through the work of the master himself. Despite its age, it retains a degree of contemporary relevance. Similarly, Signs Taken for Wonders, Franco Moretti's superb take on the sociology of literary forms, is as enlightening to read now as it was when first published in 1983. For good measure, we have two of the most influential postmodern philosophers of the 20th century, Jacques Derrida and Jean Baudrillard, vying with each other in intellectual acumen and impenetrability in The Politics of Friendship (1994) and The System of Objects (1968).

Taken together, the works selected by Verso embody the creation and development of a dissenting tradition that set out to question and subvert the established order. Yet while this was once the principal strength of these thinkers, it has become something of an Achilles heel. A collective reading exposes all that has gone wrong with radical thought in the 20th century. Traditions, and intellectual traditions in particular, rapidly ossify and degenerate into obscurantism. They have to be constantly refreshed, renovated and reinvented. It is time that radical thought broke out of its confining structures. It is time to put Adorno's anxieties about mass culture and media to rest; to move forward from Baudrillard's and Derrida's postmodern relativism to some notion of viable social truth; and for criticism to stop messing about with signs and signifiers, and instead confront the increasing tendency of power towards absolutism.

Radical thought, as exemplified by this list, suffers from three fundamental problems. First, jargon. These thinkers have developed a rarefied terminology that they employ to talk among themselves to the exclusion of the majority - on whose behalf they presume to speak. This tendency has been directly responsible for the intellectual decline of the left. Obnoxious right-wing thinkers such as Francis Fukuyama and Michael Ignatieff command worldwide audiences because their message, whatever its political connotations, is clear and accessible. By contrast, jargon-ridden radicals are confined to the backwaters of academia, quoted and promoted only by like-minded academics in an ever-decreasing circle of influence.

The Sokal hoax shows just how ridiculous the situation has become. In 1994 Alan Sokal, a physicist, submitted a paper to the highly respected journal Social Text. He used modish jargon to talk nonsense - for instance, he claimed that gravity was culturally determined - and his bibliography comprised a Who's Who of radical scientists, even though the content of his paper bore little relationship to their work. The editors of Social Text believed the jargon and published the paper.

The second problem is theory. The function of theory is to predict and to provide a framework for analysis. In radical circles, theory started out as an expression of dissent, a mechanism for exposing and predicting the uses and abuses of power. A system of thought that tells you what to expect and how to react can itself become a substitute for genuine thought and analysis, however. At its worst, theory becomes little more than a tool of tyranny. Paul Virilio's The Information Bomb (2000) provides a good example. Virilio's analysis of information technology and the relationship between science, automation and war is knee-deep in theory but perilously short on insight, offering hardly any advance on Jerry Ravetz's 1971 classic Scientific Knowledge and its Social Problems.

The third problem that radical thought faces is irrelevance - which brings me to Slavoj Zizek. Much of what thinkers of this sort have to say has little relevance to the non-western world: that is to say, the vast majority of humanity. While they pay much lip-service to the "voiceless" and the "marginalised", they rarely consider the perspectives of these groups. Zizek, whom Eagleton describes as "the most formidably brilliant exponent of psychoanalysis, indeed of cultural theory in general, to have emerged in Europe in some decades", is a good illustration of this. In The Metastases of Enjoyment (1994), perhaps his best book, he argues brilliantly that even though Nazi racism focused on subordinating most other non-Aryan races and nationalities, its main concern was the annihilation of the Jews. This focus on annihilation, he says, is now a characteristic of all racism: the logic of anti-Semitism has been universalised. In which case, why doesn't Zizek draw any parallels between anti-Semitism and Islamophobia? Why is he silent about the racism inherent in Serbian nationalism? Instead, he offers an absurdly down-and-dirty reading of western popular culture, women, sexuality and violence - the kind of thing all too familiar from cultural studies courses.

The work of radical thinkers has played an invaluable role in dethroning western civilisation and decentring its "natural and superior" narratives. But yesterday's dissent easily becomes today's tyranny. We urgently need new ideas and new tools for reconstructing thought. That cannot be done by preserving and worshipping the ashes of the fire ignited by yesterday's radicals. We need to transmit its flame.

Ziauddin Sardar's What Do Muslims Believe? is out in paperback from Granta Books

Does Nick Carr matter? - CNET News.com

No he doesn't, but thanks for asking :-)
By strategy+business
Special to CNET News.com
August 21, 2004, 6:00 AM PDT

When the Harvard Business Review published "IT Doesn't Matter" in May 2003, the point was to start an argument, or, as they say in the more genteel world of academia, a debate.

The provocative title of the article, as well as its timing--it was published at the tail end of a long slump in technology spending--ensured that a dustup would ensue. The resulting debate has been impassioned and often revealing. And it's still going on.

The essay was written by Nicholas G. Carr, then editor at large of HBR and now a consultant and author. The central theme: There is nothing all that special about information technology.

Carr declared that information technology is inevitably going the way of the railroads, the telegraph and electricity, which all became, in economic terms, just ordinary factors of production, or "commodity inputs." "From a strategic standpoint, they became invisible; they no longer mattered," Carr wrote. "That is exactly what is happening to information technology today."

The reaction was swift. Within weeks, Carr was branded a heretic by many technologists, consultants and--especially--computer industry executives. Intel's Craig Barrett, Microsoft's Steve Ballmer, IBM's Sam Palmisano and others felt compelled to weigh in with varying degrees of fervor to reassure corporate customers. Their message: Don't listen to this guy. Keep the faith in IT's power to deliver productivity gains, cost savings and competitive advantage.

And the reaction continued. HBR got so many responses that it set aside a portion of its Web site to accommodate them, and Carr kept the controversy bubbling on his own Web site. He became a traveling celebrity of sorts, defending his stance in forums across the country, from the Harvard Club in New York City to the Moscone Convention Center in San Francisco, where he traded verbal jabs with Sun Microsystems' Scott McNealy. The article became fodder for countless columns in newspapers, business magazines and trade journals.

When "IT Doesn't Matter" was published in HBR, I thought Carr had delivered an important, thought-provoking reconsideration of the role of IT in the economy and inside companies. Now that his analysis has been expanded to book length, I still do. This time, his ideas are packaged with a less incendiary title, "Does IT Matter? Information Technology and the Corrosion of Competitive Advantage." His message is unchanged, though more fleshed out and nuanced.

Carr's thinking, in my view, is flawed -- at times seriously flawed -- but not necessarily in ways that undermine his essential thesis. So let's first examine what his fundamental point is, and what it is not.

The title of the original article was misleading. Carr is not arguing that information technology doesn't matter. Of course it does. Among other things, IT improves productivity by reducing communications, search and transaction costs, and by automating all sorts of tasks previously done by humans.

But Carr asserts that as IT matures, spreads and becomes more standardized, the strategic advantage any individual company can gain from technology diminishes. Paradoxically, the more the economy gains from technology, the narrower the window of opportunity for the competitive advantage of individual companies.

This was the pattern for railroads, electricity and highways, which all became utilities. In the IT world, Carr sees evidence of mature standardization all around him. The strategic implication, according to Carr, is clear. "Today, most IT-based competitive advantages simply vanish too quickly to be meaningful," he writes.

Thus, IT strategy for most companies should become a game of defense. The shrewd executive, Carr writes, will, in most cases, keep his or her company focused on the trailing, rather than the leading, edge of technology. He offers four guidelines for IT strategy: "Spend less; follow, don't lead; innovate when risks are low; and focus more on vulnerabilities than opportunities."

In Carr's view, there are two kinds of technologies: "proprietary technologies" and "infrastructural technologies." The first yields competitive gain, whereas the second is just plumbing, at least from a strategic standpoint. Technologies shift from proprietary to infrastructure as they mature. When a technology is young, companies can gain a big strategic advantage, and Carr deftly describes how companies like Macy's, Sears Roebuck and Woolworths exploited the new economics of retailing made possible by rapid, long-distance shipments by rail, and how a new breed of national high-volume manufacturers like American Tobacco, Heinz, Kodak, Pillsbury and Procter & Gamble sprang up by gaining advantage from modern transportation, the telegraph and electricity.

Once a technology moves into the infrastructure category, however, corporate opportunity wanes. In IT these days, Carr sees just about everything being folded into the infrastructure, including the Internet, Web services, Linux and Windows. Carr is particularly insightful on the subject of enterprise software, such as SAP's enterprise resource planning offerings and Siebel's customer relationship management programs. As he does throughout the book, he succinctly draws an analogy between technologies of the present and those of the past. In this case, enterprise software is depicted as the modern version of machine tools.

Before the 20th century, machine tools were gadgets made by each factory to suit its own requirements. But then machine-tool vendors emerged. Their economies of scale brought standardization and lower costs to the machine-tool industry. Innovation continued, but it was the vendors who developed and distributed those innovations for all manufacturers--and thus no competitive advantage accrued to any individual manufacturer. Carr sees a similar "vendorization" in enterprise software, where core business processes like supply chain management and customer relationship management are handled by standard software packages. The result is a straitjacket of standardization, leaving little room for a company to distinguish itself. Small wonder, Carr writes, that in the late 1990s enterprise systems came to be called "companies in a box."

Even the companies that seem to be IT-based success stories--notably Dell and Wal-Mart--are not, Carr tells us. Yes, Wal-Mart was a leader in using advanced computing and private networks to link sales, inventory and supply information. But Wal-Mart's real edge today, Carr says, is the scale of its operation, which enables it to strong-arm suppliers and zealously pursue efficiencies everywhere. And Dell, he contends, has an edge over rivals because of its direct marketing and build-to-order strategy. "It's true that IT has buttressed Dell's advantage, but it is by no means the source of that advantage," Carr writes.

More generally, Carr observes, strategic advantage derives not from technology itself but from "broad and tightly integrated combinations of processes, capabilities, and, yes, technologies." Translation: How you use technology, not the technology itself, is the crucial variable. "Indeed," Carr writes in his preface, "as the strategic value of the technology fades, the skill with which it is used on a day-to-day basis may well become even more important to a company's success."

It has the ring of innocuous truism, but wait a moment: Does that statement really apply to a utility-like infrastructure technology? Does the skill with which we use electricity, commuter rail service or the telephone have anything to do with corporate success or failure? No one seeks insights from research companies like Gartner or advice from consultants, now including Carr, on how to use real infrastructure technologies. This suggests that information technology may be a bit different after all.

The main difference between computing and the industrial technologies Carr cites is that the stored-program computer is a "universal" tool, which can be programmed to do all manner of tasks. The general-purpose nature of computing--especially software, a medium without material constraints--makes it more like biology than like railroads or electricity. It has the ability to evolve and take new forms. Speech recognition, natural language processing and self-healing systems are just three of the next evolutionary steps on the computing horizon.

Carr might dismiss such comments as romanticized nonsense--and he certainly could be right. Yet understanding the nature of the technology is crucial to determining whether computing is truly graying or, more likely, whether some parts of the industry are maturing while new forms emerge further up the food chain. Are we seeing old age, or merely the end of one stage in a continuing cycle of renewal?

Carr notes that the technology bubble of the 1990s resembled the booms and busts of railway and telegraph investment, which marked the passing of youthful exuberance in those industries. In the computer industry, however, there already had been two previous boom-and-bust cycles--in the late 1960s, when mainframe time-sharing services appeared to be the computing utilities of their day, and in the mid-1980s, when legions of PC companies were founded and soon perished. Again, the pattern seems to be cyclical and evolutionary, as innovations accumulate and eventually cross a threshold, opening doors to broader market opportunities.

Let's take one potential example, Web services. The nerdy term refers to the software protocols that could allow a new stage of automation as data and applications become able to communicate with one another over the Internet. More broadly, Web services are seen as the building blocks of a new "services-based architecture" for computing. Carr briskly brushes Web services into his "vendorization" bucket. He writes, "Here, too, however, the technical innovations are coming from vendors, not users." The vendors, IBM, Microsoft, Sun Microsystems and others, are working jointly only on the alphabet soup of software protocols: XML, UDDI, WSDL and so on.

Yet when technologists talk of a services-based architecture, they are speaking of a new computing platform that they see as the next big evolutionary step in decentralizing the means and tools of innovation--much as the minicomputer was a new platform that decentralized computing away from the mainframe, and then the PC put power in many more users' hands. Computer scientists regard the Web as a "dumb" medium, in a sense. It is, to be sure, a truly remarkable low-cost communications tool for search, discovery and transactions, but the Web is mostly raw infrastructure, because it is not very programmable. Web services hold the promise of making the Internet a programmable computing platform, which is where differentiation and potentially strategic advantage lie.

I cite this as only one example of where Carr's desire to fit everything neatly into his thesis leads him astray. There are others. He mentions Linux, and its adoption by Internet pacesetters such as Google and Amazon, as proof that commodity technology is plenty good enough for almost any need. Linux, the open-source operating system, does allow those companies to build vast computing power plants on low-cost hardware from the PC industry. But the other great appeal of Linux--and open-source software in general--is that it also frees those companies from the vendors. The rocket scientists at Google and Amazon can tweak the software and change it without seeking permission from Microsoft or Sun Microsystems or anyone else. Today, Google is both a brand name and verb. But technological differentiation has been the bedrock of its comparative advantage. It is the better mousetrap in Internet search. As an example, Google undermines, rather than supports, Carr's point.

His thesis is often the same kind of straitjacket of standardization that packaged software, as he says, is for companies. Carr approvingly cites studies showing a random relationship between total IT spending and corporate profits. But these merely demonstrate that aggregate technology spending is not the only, nor even the crucial variable in determining corporate profitability. That is hardly surprising. Again, it is how companies use the technology--integrating the tools with people and processes--that counts the most. And Carr can be quite selective in citing the work of others. He points to research from Paul Strassmann, an industry consultant, that supports his case while gliding over the fact that Strassmann was a prominent critic of Carr's original HBR article.

Still, these can all be seen as quibbles. They do not necessarily shake the accuracy of Carr's central point--that the period of sustainable advantage a company can derive from technology is diminishing. But is that really surprising? Everything, it seems, moves faster than it did 10, 20 or 30 years ago, including technology. To say that the advantages technology gives a business are more fleeting than they once were is not to say those advantages aren't worth pursuing. Dawn Lepore, vice chairman in charge of technology at Charles Schwab, estimates that a lead in new IT-based financial products lasts from one to 18 months. "You still get competitive advantage from IT, but there is no silver bullet," she observes.

Carr's book is a thoughtful, if at times overstated, critique of faith-based investment in technology, and it makes a real contribution to the field of technology strategy. But Carr understates the strategic importance of defense. The old adage in baseball is that defense and pitching win championships; in basketball it is defense and rebounding. In business, if you don't make the defensive technology investments to keep up with the productivity and efficiency gains of your industry peers, you simply lose.

The drift toward more-standardized technology that Carr describes also points to a different kind of pursuit of strategic advantage. It may not be IT-based, but it is certainly dependent on technology. This is what Irving Wladawsky-Berger, a strategy executive at IBM, calls the "post-technology era." The technology still matters, but the steady advances in chips, storage and software mean that the focus is less on the technology itself than on what people and companies can do with it.

The trend is already evident in companies and in universities. The elite business schools and computer science programs are increasingly emphasizing multidisciplinary approaches, educating students not only to be fluent in technology, but also in how to apply it. In companies, the same is true. The value is not in the bits and bytes, but up a few levels in the minds of the skilled businesspeople using the tools. Large chunks of the technology may be commoditizing, but how you use it isn't. That is where competitive advantage resides.

To read more articles like this one, visit http://www.strategy-business.com/.

Copyright © 2004 Booz Allen Hamilton Inc.

Reprinted with permission from strategy+business, a quarterly management magazine published by Booz Allen Hamilton.

Nicholas Carr - IT Doesn't Matter - Summary by Yann Gourvennec

IT doesn't matter or does IT, really?

If there is one article worth reading at the moment, it must be Nicholas Carr's "IT Doesn't Matter," which was published by the Harvard Business Review in May 2003. Its sharp criticism of today's IT worship is devastating, and it strongly echoes the sweeping changes that are currently imposing themselves on the IT world, especially in Europe.

Yet, whereas it is desirable to interpret this article as proof that we are undergoing one more paradigm shift (an explanation now backed by Carr's historical perspective), at the same time we must also echo a few criticisms of Carr's theory, thereby giving hope and vision to those working in the IT arena. Jean Mounet, President of Syntec (the French IT industry association), reminded us in a recent article that one in three graduates from French engineering schools is recruited by the IT industry in France. One can therefore imagine what such an industry represents in the lives of so many people and for the future of our modern economies.

Many in the industry have interpreted this article as a kind of threat to the efforts made to convince businesses that they should invest more in IT. However, French economist and IT industry expert Michel Volle has developed some very interesting counter-arguments to Carr's theory, which are available in French at Volle.com. I also wish to draw your attention to the string of articles entitled "Does IT Matter?" available from HBR.

A summary of Carr's "IT Doesn't matter" article By Yann Gourvennec

I. Ubiquitous computing reinforces the triviality of IT

IT has deeply transformed today's business world, and all businesses use information technology on a large scale. As a consequence, capital expenditure devoted to IT has increased dramatically over the years and is still tremendous in spite of the current economic situation. Besides, IT tools are no longer reserved for low-level employees; they are used intensively by top managers, who openly value the supposed competitive edge they can derive from their usage. Behind all that lies the thought that the pervasiveness of IT usage has led to its becoming more strategic.

On the contrary, Nicholas Carr shows us that IT has in fact become the latest item in a list of commodities that helped shape business and industries as we know them. Being a commodity, IT also becomes transparent to its users.

II. Proprietary vs infrastructural technologies

Proprietary technologies may generate a competitive advantage for their owners, provided their investors' rights are adequately protected. Conversely, Nicholas Carr proves that infrastructural technologies are more productive when they are shared, although owning them may prove more cost-effective at the beginning of their existence. Once standards are in place, that type of infrastructural technology is more effective when shared.

Nicholas Carr uses the striking examples of electric power production and the railroads to prove his point, showing that no company would benefit today from purchasing and maintaining its own railway network.

Also, one of the major pitfalls that managers fall into is the belief that the competitive advantages brought by infrastructural innovations will last forever. At the end of the buildout phase of a new infrastructural technology, new standards will emerge, competition will rise dramatically and prices will fall. Even the usage of the new technology will become standardised. Therefore, the advantage of infrastructural technologies shifts from the micro to the macro-economic level: once they become pervasive, only countries and regions benefit from their presence, while individual companies all compete on the same level.

Likewise, infrastructural technologies are often subject to overinvestment, which causes sweeping economic trouble. What we witnessed with the 'Internet Bubble' happened in a similar fashion with the overinvestment in railroads in the 1860s. The analogy suggests that there is a risk of deflation settling on our 21st-century economies as it did back then. Carr would like the analogy to end here, but in his mind the risk cannot be overlooked.

III. Information Technology: this new commodity

Despite appearances, IT is truly an infrastructural technology and, according to Nicholas Carr, it is particularly prone to commoditisation due to the following characteristics:

Throughout the buildout of the IT infrastructure, a myriad of companies have been able to derive significant competitive advantages from IT. Some have been able to establish a durable competitive edge (e.g. Dell Computers, Wal-Mart, ...) whereas others have only been able to generate a temporary advantage. But the ability to generate a competitive advantage from IT is becoming very rare nowadays, as is always the case with infrastructural technologies according to Mr Carr.

Whereas it is not possible to predict the end of the buildout of an infrastructural technology, there are many signs that the ramp-up of IT infrastructure is nearing its completion:

The incentive for customisation will now be marginal and reserved for a few niche vendors offering highly specialised software.

IV. What should companies do?

According to Mr Carr, the more an infrastructure becomes pervasive, the more it emphasises risk as opposed to generating competitive advantages. As soon as an infrastructure is shared and open, its non-availability is more crucial than its intrinsic value. As a consequence, all organisations should focus on trying to avoid the risk of the non-availability of this infrastructure, according to Mr Carr. Yet, very few have analysed the threats that could paralyse their whole businesses.

IT managers, according to Mr Carr, should focus on:

  1. Spending less: this is made necessary by the fact that IT is no longer considered strategic and because overspending is the biggest threat to companies. Apart from the requirement to look for cheaper alternatives, IT managers also need to cut out waste, mainly with regard to personal computing, which is mostly used for standard tasks that do not require much computing power. Should vendors balk at reducing costs, Mr Carr suggests that IT managers resort to open-source software packages and bare-bones network computers,
  2. Following vs innovating: it should no longer be necessary to be on the cutting edge of technology, most requirements being fulfilled by existing software and equipment,
  3. Focusing on risks: IT is mostly judged on what does not work, as opposed to its vanishing competitive advantage.

Mr Carr goes on with a study of the 25 companies with the highest economic returns and shows that they are spending far less on IT than the average. He therefore encourages managers to focus on costs and get back to basics, however boring it may prove.

Follow up to Nicholas Carr's article:

The IT department is dead, author argues - Network World, by Carolyn Duffy Marsan

01/07/08 | Network World,

New Nicholas Carr book predicts utility computing will replace internal IT shops. The IT department is dead, and it is a shift to utility computing that will kill this corporate career path. So predicts Nicholas Carr in his new book, The Big Switch: Rewiring the World from Edison to Google.

Carr is best known for a provocative Harvard Business Review article entitled "IT Doesn't Matter." Published in 2003, the article asserted that IT investments didn't provide companies with strategic advantages, because when one company adopted a new technology, its competitors did the same.

The Harvard Business Review article made Carr the sworn enemy of hardware and software vendors including Microsoft, Intel and HP, as well as of CIOs and other IT professionals.

With his new book, Carr is likely to engender even more wrath among CIOs and other IT pros.

"In the long run, the IT department is unlikely to survive, at least not in its familiar form," Carr writes. "It will have little left to do once the bulk of business computing shifts out of private data centers and into the cloud. Business units and even individual employees will be able to control the processing of information directly, without the need for legions of technical people."

Carr's rationale is that utility computing companies will replace corporate IT departments much as electric utilities replaced company-run power plants in the early 1900s.

Carr explains that factory owners originally operated their own power plants. But as electric utilities became more reliable and offered better economies of scale, companies stopped running their own electric generators and instead outsourced that critical function to electric utilities.

Carr predicts that the same shift will happen with utility computing. He admits that utility computing companies need to make improvements in security, reliability and efficiency. But he argues that the Internet, combined with computer hardware and software that has become commoditized, will enable the utility computing model to replace today's client/server model.

"It has always been understood that, in theory, computing power, like electric power, could be provided over a grid from large-scale utilities - and that such centralized dynamos would be able to operate much more efficiently and flexibly than scattered, private data centers," Carr writes.

Carr cites several drivers for the move to utility computing. One is that computers, storage systems, networking gear and most widely used applications have become commodities.

He says even IT professionals are indistinguishable from one company to the next. "Most perform routine maintenance chores - exactly the same tasks that their counterparts in other companies carry out," he says.

Carr points out that most data centers have excess capacity, with utilization ranging from 25% to 50%. Another driver to utility computing is the huge amount of electricity consumed by data centers, which can use 100 times more energy than other commercial office buildings.

"The replication of tens of thousands of independent data centers, all using similar hardware, running similar software, and employing similar kinds of workers, has imposed severe economic penalties on the economy," he writes. "It has led to the overbuilding of IT assets in every sector of the economy, dampening the productivity gains that can spring from computer automation."

Carr embraces Google as the leader in utility computing. He says Google runs the largest and most sophisticated data centers on the planet, and is using them to provide services such as Google Apps that compete directly with traditional client/server software from vendors such as Microsoft.

"If companies can rely on central stations like Google's to fulfill all or most of their computing requirements, they'll be able to slash the money they spend on their own hardware and software - and all the dollars saved are ones that would have gone into the coffers of Microsoft and the other tech giants," Carr says.

Other IT companies that Carr highlights in the book for their innovative approaches to utility computing are: Salesforce.com, which provides CRM software as a service; Amazon, which offers utility computing services called Simple Storage Service (S3) and Elastic Compute Cloud (EC2) built on its excess capacity; Savvis, which is a leader in automating the deployment of IT; and 3Tera, which sells a software program called AppLogic that automates the creation and management of complex corporate systems.

Carr points out that many leading software and hardware companies - Microsoft, Oracle, SAP, IBM, HP, Sun and EMC - are adapting their client/server products to the utility age.

"Some of the old-line companies will succeed in making the switch to the new model of computing; others will fail," Carr writes. "But all of them would be wise to study the examples of General Electric and Westinghouse. A hundred years ago, both these companies were making a lot of money selling electricity-production components and systems to individual companies. That business disappeared as big utilities took over electricity supply. But GE and Westinghouse were able to reinvent themselves."

Carr offers a grimmer future for IT professionals. He envisions a utility computing era where "managing an entire corporate computing operation would require just one person sitting at a PC and issuing simple commands over the Internet to a distant utility."

He not only refers to the demise of the PC, which he says will be a museum piece in 20 years, but to the demise of the software programmer, whose time has come to an end.

Carr gives several examples of successful Internet companies including YouTube, Craigslist, Skype and Plenty of Fish that run their operations with minimal IT professionals. YouTube had just 60 employees when it was bought by Google in 2006 for $1.65 billion. Craigslist has a staff of 22 to run a Web site with billions of pages of content. Internet telephony vendor Skype supports 53 million customers with only 200 employees. Meanwhile, Internet dating site Plenty of Fish is a one-man shop.

"Given the economic advantages of online firms - advantages that will grow as the maturation of utility computing drives the costs of data processing and communication even lower -traditional firms may have no choice but to refashion their own businesses along similar lines, firing many millions of employees in the process," Carr says.

IT professionals aren't the only ones to suffer demise in Carr's eyes. He saves his most dire predictions for the fate of journalists.

"As user-generated content continues to be commercialized, it seems likely that the largest threat posed by social production won't be to big corporations but to individual professionals - to the journalists, editors, photographers, researchers, analysts, librarians and other information workers who can be replaced by . . . people not on the payroll."

Carr's argument about the future of utility computing is logical and well written. He offers a solid comparison between the evolution of electrical utilities in the early 1900s and the development of utility computing that's happening today.

Carr's later chapters - about the future of artificial intelligence and the many downsides of the Internet - seem less integral to his utility computing argument. And his discussion of Google's vision of a direct link between the brain and the Internet seems far-fetched.

Nonetheless, The Big Switch is a recommended read for any up-and-coming IT professional looking to make a career out of providing computing services to corporations. If Carr's predictions come true, strong technical skills will still be valued by service providers.

All contents copyright 1995-2008 Network World, Inc. http://www.networkworld.com

Pretending to know the area while in reality having only a limited understanding of it, Carr managed to provide exactly the kind of dangerous advice to CEOs that comes from people who either had no idea of what they didn't know, or who pretended to know what they didn't.

Amazon.com Customer Reviews: Does IT Matter? Information Technology and the Corrosion of Competitive Advantage

A look in the mirror, May 29, 2004
By Mike Tarrani "www.tarrani.com" (Deltona, FL USA)
Carr does not diminish the value of technology in this book, but instead shows how it is improperly acquired and managed by most IT departments. As such, this book's central message is an indictment of how [all too common] IT mismanagement erodes business profits, shareholder value, and business operational efficiency.

The key question is, "is it as bad as Carr reports?" I can share my experience as a consultant who has worked with some of the largest US and global corporations by answering "unfortunately, yes". I've seen the symptoms Carr cites in one engagement after another. It is not the fault of technology. I've witnessed the implementation of technical solutions that should have added value to business operations, yet were so mismanaged by IT that the solutions never came close to the projected ROI that justified their acquisition and implementation. Indeed, this book is similar to a collection of anti-patterns - common bad practices - which, sadly, reflect a typical IT department.

Although this book is short on solutions (which accounts for the lower rating I gave it), it does provide a conceptual framework from which to derive solutions. For example, much of what IT does can be classified as commodities - services such as desktop support, development (especially for web services), system administration, etc. These activities do not represent intellectual capital within IT in the same manner as architecture, business systems analysis, and service level management, all of which require an in-depth knowledge of technology and business requirements, and are not commodities.

Who will get the most value from this book? CIOs and IT managers who recognize there is a disconnect between IT and business operations, and who have the courage to look in the mirror that Carr provides, will benefit. Executives at the COO, CFO and CEO level, or members of corporate governance, will also benefit because they will be able to spot common problems in their own organizations that are so clearly reported in this book. The gap Carr leaves between symptoms and problems, and solutions to those, can be filled in part by other books that are more solution-oriented (I recommend "RoadMap: How to Understand, Diagnose, and Fix Your Organization," ASIN 0964163527, and "Connecting the Dots: Aligning Projects with Objectives in Unpredictable Times," ASIN 1578518776).

IT departments focus myopically on ..., August 1, 2005

By Michael Davis "www.byvation.com" (Arlington, TX)
I fully concur that Information Technology (IT) is too technology focused. As a former IT Manager at MetLife and Allstate Insurance, I can attest to the fact that the IT department focuses myopically on solving (and sometimes even creating more) computer problems. Therefore, as Carr illustrates, technology offers no competitive advantage - only the prospect of avoiding competitive disadvantage.

Without differentiation, organizations will do well to cut IT off from the rest of the organization and outsource it as a commodity. Carr's premise is that IT is undergoing commoditization and may cease to provide a sustainable competitive advantage. Hardware is already a commodity. Software is quickly becoming commodity-like. What Carr illustrates is a clear and compelling reason to step up our efforts to use technology strategically to support the core business. After all, technology exists to serve the business, not the other way around.

Nothing new & no clear prescription, June 1, 2004

By Linda Zarate "IT Ops Consultant" (Azusa, CA United States)
If possible I'd award 3.5 stars. There is value in the book, which has two main levels:
(1) makes a case for the deplorable state of IT as a business enabler
(2) claims IT is now viewed as a commodity

In (1) what Carr has to say is true and has been for as long as I've been in the profession, which is over 25 years. Carr's contentions parallel my experience on this level. When I started out we "MIS" professionals were the priests and priestesses who worked our magic in glass rooms. We were merely arrogant then. Life was simpler and some vendors worked closely with us. IBM, which is my main background, had a reputation for never letting their customers fail. That is not to say that their recommendations and solutions always translated into business value for their customers, but rarely did they result in disasters either. As time went on though MIS became IS, then IT. Systems grew more complex, proprietary systems gave way to interoperability, then open systems, and new vendors started arriving in droves. Innovation fanned the flames of complexity, and IT remained arrogant, but began focusing so much on the technology (and trying to keep up with it) that they lost sight of business needs. Methods devolved into chaos and the chasm between IT and the business widened to the point where IT was in some cases counter-productive to business needs.

The second level, where Carr claims that IT is now viewed as a commodity, is where this book gets interesting. In many areas that has been going on for over a decade. Mass storage systems are measured in pennies per MB, powerful desktop systems are priced in the same range as consumer entertainment electronics, and certain classes of applications software are bargains. Further, the open source movement is changing the dynamics as I write this review, which may one day render much software a commodity.

What Carr does not do is give clear advice about how to deal with problems and new dynamics that have roots in the distant past. The status quo is clearly and accurately documented in this book, but the prescription is vague. That is where this book falls short.

The main value of this book is as stated by others: use it as a mirror if you work in IT. Perhaps a few brave souls with leadership skills will start changes in the profession that will have a ripple effect. Especially if those brave souls work for large enterprises and get sufficient press for their successes. At the very least the CxOs who may stumble across this book may see their own IT organization in the descriptions Carr gives and decide to do something about it. I would also hope that those who work on the vendor and product development side take a few hints from this book and craft their technology innovations into actionable solutions that will get IT back on track. It can be done, especially if the ingenuity those vendors exhibit in technology innovation can be extended to bundled products that also contain innovative solutions and consulting.

One final comment: a colleague distinguished between intellectual capital and task-oriented work performed by IT. Obviously the task-oriented work is a commodity, and with some analysis can be identified and outsourced. That is a step towards getting IT back on track and delivering value to business. There are other such ideas hiding in this book if one reads it with an open and inquiring mind.

Useless except as a catalyst to get you to do your own thinking, July 5, 2005
By Anonymous Reader (USA)
This book, as Nicholas Carr has claimed about IT, "doesn't matter". As one reviewer stated, Carr is a good writer but should have kept his assertion to a short article.

Carr claims that IT (hardware and software technologies) is becoming a commodity and therefore that by itself it does not provide competitive advantage. This is eye-opening and insightful only if one believes all the claims of the dot-com era (some of which are still turning out to be true after all) and if one does not understand that the economy is getting more competitive all the time. So what? Isn't everything becoming commoditized? What is left after the Information Age and outsourcing of everything? Some say it is the Creative Age, in which creativity and innovation are what confer true advantage - human mental processes, some of which have to do with using or applying technology differently.

Carr readily admits good USE of IT does confer an advantage - but again, isn't this true with any input or tool? It is management and innovative use of the input rather than the input itself that confers some advantage.

One needs a much more sophisticated hands-on understanding of IT besides the superficial observation that hardware and software technologies are becoming commodities available to all -- besides, this argument is only true in a 30,000 foot view of the world.

When one looks closer, in most cases the "free" open source software that is theoretically available to all is not truly available to all because the expertise needed to use it is very limited. Can all organizations use Linux, Perl, MySQL, etc. equally well? If not, are they really "available to all", or only to those who can actually use them? That everyone can "buy" them does not equate with them being "commodity inputs" -- they are just "technologies" not actual "INPUTS" if they are bought and not used. These questions are intertwined and more complex than they at first seem. For better or worse, one needs an experiential, not an academic or theoretical understanding, of IT in order to arrive at an answer.

In the last chapter, Carr backs off somewhat, saying it is too early to tell the impact of IT - but if it is too early to tell the impact, how can he already conclude it doesn't matter? I suppose that is why he modified his title from the article title of "IT Doesn't Matter" to the book title of "Does IT Matter?". This question seems to be unanswered despite agreement that many information technologies (just as other technologies, products, inputs, processes, and so on) become commodities very quickly, and at an ever increasing rate.

Bottom line: you do not need to bother reading the book. If you wish to understand Carr's argument, read his original article.

As with so many popular "management books", Peter Drucker had already summed up what a manager should know and think about in a more concise way -- for example, that it is the "I" in "IT", not the "T", that matters. Organizations need INFORMATION not TECHNOLOGY and in particular INFORMATION about the OUTSIDE. For better guidance on strategy and IT, see Drucker's Management Challenges for the 21st Century.

Yes, technology does matter, May 16, 2004
By Gaetan Lion
Carr makes a thought provoking but flawed case that technology does not matter all that much. According to him, competitive advantages of companies do not depend on technology. He does point out that many segments of information technology are mature, and have become low return commodities. He also points out that companies are better off not buying the overpriced state of the art technology, but instead should wait till prices come down. Thus, Carr makes many relevant and somewhat self-evident points. Nevertheless, his overall case that technology does not matter falls apart.

Contrary to what Carr suggests, the technological race is endless. There is no finish line. We are in a 24/7 society hooked on perpetual improvement. Whatever gizmo you come up with, people are going to imitate it and better it very quickly. Thus, no one can rest on their laurels for long. By the same token, you can't afford to run a business that is not up to date technologically. Technology is both the right of entry and the key to success for almost any business you can think of.

Also, innovation is the U.S.'s raison d'etre. You can figure that everything that can be commoditized is going to be either offshored to China or outsourced to India by American companies themselves, as pressured by American stockholders. If the U.S. stops innovating in proprietary technology, our labor force will not remain internationally competitive. We have to add value to our products. We have to constantly innovate and create new markets. If we do, as we have done so far, we will remain the most advanced and productive society. If we don't, we will fall behind just as many high-cost Western European countries already have.

If you are interested in this subject, here are a couple of books I recommend: "Rational Exuberance" by Michael Mandel. He makes a convincing case that technology does matter, and that the U.S. remains the undisputed leader in innovation, and more importantly in implementing innovation. Another good book is Roger Alcaly's "The New Economy." This is an excellent analysis of the history and prospects of technological innovation. Both these authors get that technology is crucial to the present and future of the U.S. Carr does not [get it].

Don't waste your time, June 24, 2004
By A Customer
I started this book with an open mind and read it in about 2 days. It is an easy read but delivers little. The Cliff Notes version, if there was one, could be summarized in 2 or 3 paragraphs. Many of the author's predictions are based on silly analogies. In the book he compares electricity to information technology. He mentions a few electricity-related job titles that are no longer part of corporate America but fails to mention that there are still plenty of Electrical Engineers, Electricians, Electrical Contractors, etc. employed in our economy. He takes a small segment of technology and predicts its commoditization. Big deal! Technology is ever-evolving and his predictions are not that revolutionary. What makes this book ridiculous is his prediction that all of information technology will eventually be a commodity. This book is an obvious attempt to create a controversy to sell books. Don't fall for it. Save your money and look elsewhere.

Urge all your competitors to read this book!, May 28, 2004

By A Customer

Two Harvard professors summarized Carr's ideas ... the most dangerous advice to CEOs has come from people who either had no idea of what they didn't know, or from those who pretended to know what they didn't. What you want is for your competitors to read this book, hoping they will buy into Carr's misconceptions and dangerous recommendations... so rate it 5 stars for them, while your company pursues IT that does matter.

If any of your technology-challenged board members are reading this book, be sure to point them to Don Tapscott's May article in CIO magazine so they will quickly understand Carr's Blueprint for Failure, and to Smith and Fingar's book, IT Doesn't Matter--Business Processes Do, for a complete critical analysis of Carr's superficial premises and misguided recommendations. You may also want to google: Does IT Matter, An HBR Debate, whence the opening comments of this review came. Meanwhile, be sure all your competitors know how wonderful and meaningful this book is. ;-)

Dr. Martin Bushton, former CEO, consumer products company

Opinion: No End in Sight, by Frank Hayes

May 16, 2005 (Computerworld) The last time we heard from Nicholas Carr, in 2003, he was pitching the idea that IT doesn't matter. Now he's back with an article in the spring MIT Sloan Management Review called "The End of Corporate Computing." Carr seems to have learned something in two years: You don't get high-dollar consulting gigs by telling potential clients that their products and job functions don't matter. So now he's taking a 100-year view, saying the end of corporate computing could take a lonnnng time. He's also getting behind vendor pitches for grid, on-demand and utility computing.

Trouble is, he still doesn't understand much about IT.


In "End," Carr compares IT to electrical generation 100 years ago. He lovingly details how individual companies once generated 60% of all electricity in the U.S. and how that changed when Sam Insull created Chicago's Commonwealth Edison, the first big electric utility. Insull used economies of scale to drive down costs, worked out metering and pricing, then rolled out sophisticated marketing to convince manufacturers to shut down their generators and buy juice from him.

IT, Carr says, can be outsourced in much the same way. Corporate IT is scattered and wasteful, with miserably low capacity utilization. Centralizing IT is an irresistible trend, and supercentralizing it in outside utilities is inevitable. We're just waiting for a new Sam Insull to create the vision and define the utility computing industry.

Well ... no. High-capacity utilization is important when a production resource is expensive. Thanks to Moore's Law, computing gets so much cheaper so fast that economies of scale are trivial. That's why spreadsheets run on PCs, not mainframes.

And centralization isn't so much a trend as a cycle. Users decide central IT's prices are too high, so they buy their own servers or Web sites or network gear. Then the cost of managing decentralized IT gets too high, so it's recentralized into the data center. Then the cycle starts again. Takes about 10 years to go around. Watch, and you'll see it.

And utility computing has its own Sam Insull -- Ross Perot, who realized in 1962 that he could sell computing instead of computers and left IBM to found EDS. (The idea wasn't even new then; ADP had been a payroll data-processing utility for five years.) Utility computing is mature. And it works. But it hasn't replaced corporate computing the way Commonwealth Edison replaced private generators.

Why not? Because corporate computing is no longer about big data-processing generators. Hasn't been for years. IT shops still process data, but the real action comes from business people who use computers to communicate, to monitor current business processes and to simulate new business scenarios.
Users are the ones who experiment and create business innovation. So the most important place to put computing, and control of that computing, is in users' hands. Everything else -- networks, data, back-end applications -- is there to support those users. They do corporate computing. We in IT just help.
And if we replace their flexible, too-cheap-to-meter computing with thin clients and a fixed-cost, fixed-services utility, as Carr recommends? IT gains manageability, centralization and higher utilization. Business users lose the ability to innovate.

Yeah, that would sure align IT with business needs, wouldn't it? Will Carr ever understand corporate computing? Probably not. He's got a vested interest in his Industrial Age utility model and the end of IT -- his best shot at the big time.

But corporate IT's interests had better remain with the users -- whose scattered, wasteful computing is the best generator of business advantage we've got.

Frank Hayes, Computerworld's senior news columnist, has covered IT for more than 20 years. Contact him at [email protected].

My response can be found at my blog.

Rough Type (Nicholas Carr's Blog): IT doesn't matter, part 1

IT doesn't matter

In 1968, a young Intel engineer named Ted Hoff found a way to put the circuits necessary for computer processing onto a tiny piece of silicon. His invention of the microprocessor spurred a series of technological breakthroughs – desktop computers, local and wide area networks, enterprise software, and the Internet – that have transformed the business world. Today, no one would dispute that information technology has become the backbone of commerce. It underpins the operations of individual companies, ties together far-flung supply chains, and, increasingly, links businesses to the customers they serve. Hardly a dollar or a euro changes hands anymore without the aid of computer systems.

As IT's power and presence have expanded, companies have come to view it as a resource ever more critical to their success, a fact clearly reflected in their spending habits. In 1965, according to a study by the U.S. Department of Commerce's Bureau of Economic Analysis, less than 5% of the capital expenditures of American companies went to information technology. After the introduction of the personal computer in the early 1980s, that percentage rose to 15%. By the early 1990s, it had reached more than 30%, and by the end of the decade it had hit nearly 50%. Even with the recent sluggishness in technology spending, businesses around the world continue to spend well over $2 trillion a year on IT.

But the veneration of IT goes much deeper than dollars. It is evident as well in the shifting attitudes of top managers. Twenty years ago, most executives looked down on computers as proletarian tools – glorified typewriters and calculators – best relegated to low-level employees like secretaries, analysts, and technicians. It was the rare executive who would let his fingers touch a keyboard, much less incorporate information technology into his strategic thinking. Today, that has changed completely. Chief executives now routinely talk about the strategic value of information technology, about how they can use IT to gain a competitive edge, about the "digitization" of their business models. Most have appointed chief information officers to their senior management teams, and many have hired strategy consulting firms to provide fresh ideas on how to leverage their IT investments for differentiation and advantage.

Behind the change in thinking lies a simple assumption: that as IT's potency and ubiquity have increased, so too has its strategic value. It's a reasonable assumption, even an intuitive one. But it's mistaken. What makes a resource truly strategic – what gives it the capacity to be the basis for a sustained competitive advantage – is not ubiquity but scarcity. You only gain an edge over rivals by having or doing something that they can't have or do. By now, the core functions of IT – data storage, data processing, and data transport – have become available and affordable to all. Their very power and presence have begun to transform them from potentially strategic resources into commodity factors of production. They are becoming costs of doing business that must be paid by all but provide distinction to none.

IT is best seen as the latest in a series of broadly adopted technologies that have reshaped industry over the past two centuries – from the steam engine and the railroad to the telegraph and the telephone to the electric generator and the internal combustion engine. For a brief period, as they were being built into the infrastructure of commerce, all these technologies opened opportunities for forward-looking companies to gain real advantages. But as their availability increased and their cost decreased – as they became ubiquitous – they became commodity inputs. From a strategic standpoint, they became invisible; they no longer mattered. That is exactly what is happening to information technology today, and the implications for corporate IT management are profound.

Nicholas Carr: IT still doesn't matter (Tech News on ZDNet)

Nicholas Carr, a perennial thorn in the side of the IT industry and author of the 2003 Harvard Business Review article "IT doesn't matter," looks set on stirring fresh controversy in the industry, telling companies to stop spending on technology.

Last week, Carr told an audience in London that companies have been misled to believe buying technology can make them more productive.

He said: "Smaller firms are more productive than large firms and yet they have less technology." And though he conceded it would be naďve to assume that represents the grounds for a hard and fast rule, he added it should at least "lead anybody to question the importance of IT."

Carr said: "Companies should spend less on IT." "But when I say spend less I don't mean use less or get less," he added, saying companies need to reduce their vulnerabilities when buying IT - such as over-spending or buying into expensive projects that ultimately fail to deliver.

He said: "Successful IT management comes down to successful management and not just those who are more innovative or take more chances."

As such he said companies should resist the urge to buy new technologies and should discard any notion of the cutting edge, saying there are few companies likely to see competitive advantage by being an early adopter.

He added: "The vast majority of companies should be IT followers not IT leaders. The innovator is going to pay a lot more than those who follow in the innovator's wake."

Though one might expect Microsoft to wholly reject Carr's ideas, Bob McDowell, VP of information worker business value at Microsoft, admitted the industry and business customers haven't always done themselves any favors.

He said: "There was over-hype in the 90s and there was overspend." And he added that the IT industry is "still paying the price now".

James Governor, analyst at Red Monk, said Carr's comments are welcome in an industry that is short on "comedians". But he said there is also a greater relevance to some of Carr's words. Governor said: "Frankly we should all be shifting uncomfortably in our seats," adding Carr's words should ring true with many people who may rather forget past over-spend or poor buying decisions.

While he said it's unlikely any CIO "will stand up at the end of the year and say 'please reduce my budget'", Governor said businesses should be savvier.

He said: "A" href

Ted Leung points me to TechDirt and an editorial by Hal Varian who rebuts Nicholas Carr's thesis titled "IT Doesn't Matter".

Hal Varian makes an observation that we all too often forget: "Profit comes from scarcity". He then argues that we all have to agree with Carr's main thesis: it is not information technology itself that matters, but how you use it. Although that's quite an obvious statement, it is surprising how many MIS departments fall into this trap. This fact points to the scarcity that prevails in the industry: most MIS departments don't know how to use IT. I personally have been in three companies in the last year, and I've got to admit, none of them seemed to understand how to exploit IT.

Varian's main arguments focus on the higher value activity of component integration:

In my view, companies cannot afford to ignore information technology, or relegate it to the back burner. Commoditizing it does not necessarily mean innovation slows. If anything, it could accelerate as more and more innovators experiment and tinker with those cheap, ubiquitous information technology commodities.

How you integrate functionality so that it's useful for the corporation is today the high-margin business of IT; just ask IBM. IBM's fastest-growing revenue stream has been its Global Services division (i.e., consulting). However, if you think about it, isn't consulting just another commodity? In short, I think Varian's arguments are pretty weak.

However, I can think of two better arguments against Carr's thesis. The first is that commoditization of IT hasn't truly happened yet. After all, why do we still have applications that are built in a stovepipe manner? That is, despite all the componentization of various technologies, IT continues to build monolithic applications. These monolithic applications continue to be extremely inflexible and lack the agility to integrate with other corporate functions in a rapid way. This is the crux of my argument in a piece I wrote last June.

The second argument comes from a law I recently stumbled on: Christensen's Law, "the conservation of attractive profits". Christensen's theory is about the migration of value over time: high margins move up and down the value chain. Value doesn't always move up the food chain; it can go the other way too. What has become a commodity may become a profit center in the future.

In summary, today's high-margin business will be based on the manageability of our software. However, that doesn't mean it will always stay that way; it will be a commodity some day. When that happens, the higher margins will belong to the specialists in component development. Look at it this way: if complete systems become a commodity, parties will attempt to derive differentiation from their various components. It's just the natural flow of things.

[May 12, 2003] Business Technology: IT Doesn't Matter - InformationWeek, by Bob Evans

May 12, 2003 | InformationWeek

Whichever ancient sage first divined that the gods do not always smile upon us surely knew of which he spoketh. I need to cut my shaggy grass, but it keeps raining; my beloved but bungling Pittsburgh Pirates have lost six in a row; the drain trap below my shower is leaking; and on top of all that comes word that the Harvard Business Review has decided that IT doesn't matter.

Yep, that's what they say in the table of contents, in the headline, and at the top of each page in the lengthy article: "IT Doesn't Matter" (HBR, May 2003, p. 41). And here I was, fussing about leaks and losing streaks when IT was just stopping mattering. Can it be so?

The article is thoughtful and sweeping and quite interesting to read. I'd heartily recommend it. But that doesn't make it either accurate in its conclusions or even properly focused, and that's the problem I have with it. Written by HBR editor-at-large Nicholas Carr, the article is intent on proving the thesis that because IT has become widespread, then it must perforce become a commodity, as happened to other one-time breakthrough and industry-jarring innovations, such as steam engines and railroads, telephones and telegraphs, electric generators and internal-combustion engines. And Carr's unshakable belief in that inevitability leads him to a conclusion that's no doubt provocative, which I think was his primary intent, but also profoundly shortsighted and dangerous.

Now, please allow me a brief digression to the Full Disclosure Department: I will certainly admit that given my position with a publication like InformationWeek, I have a vested interest in the ongoing relevance, growth, and vitality of what I believe Carr is referring to as the IT industry. And I can certainly say that my view of the future that lies ahead for you readers could not possibly be in more extreme opposition to what Carr forecasts in the final paragraph of his article: "IT management should, frankly, become boring. The key to success, for the vast majority of companies, is no longer to seek advantage aggressively but to manage costs and risks meticulously." So, yes, I am a bit subjective on this topic, but I also feel that while the jobs of Randy Mott and Ralph Szygenda and Rob Carter and many tens of thousands of others in top IT management positions will be many things, one thing they most assuredly will not be is boring.

Where the article should have gone, I think, is outside the realm of embedded infrastructure and applications and into some attempts to look at what the future might look like. Instead, it assumes that the futures that befell railroads and steam engines will, inexorably and inevitably, be the future of IT. And I think that's astonishingly shortsighted. Only 10 years ago, how many of you had heard of the World Wide Web? And today, we've all heard of Web services--heard too much and seen too little, some would say--but can any of us really imagine what business will be like when the potential of those new technologies begins to be expressed? Or when global and mobile get more stable, and true collaboration becomes less psychology and more process and software, and the recent focus on internal technology becomes redirected on customer-centered possibilities? If we've learned anything in the last several years, it's that the balance of power in the world of business has tipped to the buyer and they will continue to get more demanding, more fickle, more selective, and more willing to spend their bucks elsewhere unless businesses mold their efforts around those customers.

Will that be done with people? With paper? With singing telegrams? I don't think so; the key will continue to be technology.

IT doesn't matter. Or does it?

Bob Evans
Editor in Chief
[email protected]

Tom Huber, CEO of Collegis, took issue with my May 5 column about two IT workers who found child porn on a New York Law School professor's computer. His response can be found at: informationweek.com/938/collegis.htm.

Bill Gates' Web Site - Speech Transcript, CEO Summit 2003

Remarks by Bill Gates, Chairman and Chief Software Architect, Microsoft Corporation
CEO Summit 2003
Redmond, Washington
May 21, 2003

BILL GATES: Good morning. I hope the Tablets are working, but I also hope they won't be too distracting.

We're going to start out with a topic that we've touched on at every CEO conference going back to the very first one in 1987, and this is talking about IT and its role in corporate competitiveness: What are some of the key issues, and what are some of the key opportunities?

Microsoft's view on this has been pretty constant throughout. When it became over-hyped, we were a little concerned about the promises that were being made during those times. At this stage, in a sense, you could say it's almost under-hyped, and a good example of that is that there are various articles that have come out. The Economist said "Paradise Lost." Even IBM, the other largest company in our industry, talks about the post-technological period. The most extreme was probably the Harvard Business Review sort of suggesting that railroads and IT had a certain similarity, and now that the tracks have been laid, there was no competitive advantage to be had from having better IT systems. The New York Times said, "Has technology lost its special status?"

Well, our view on this is that IT long ago moved away from being simply about back-office activities, simply about printing checks and keeping the account books. And, over these last seven years, it's moved to become the tool that determines whether your information workers can do their job effectively. Do they know what's going on with customer satisfaction? Are they engineering new models in a very effective way? Are they finding partners to work with in a strong fashion?

And, although a lot of that is very difficult to measure, it has been a very big challenge for IT departments to step up to these new things. Historically, the IT department knew that its equipment was all in the glass house, and understood how to deal with that. Today, it's cell phones that people are carrying around and downloading information to. It's portable devices, it's spreadsheets that people have on different desktops, and in a sense, the scope of their responsibility, and how much they should invest in making those people more effective, is something that a lot of companies have had a hard time seeing exactly what that level should look like.

I think the good news is that the advances in the technology are strong enough that, literally for the kind of investment people have been making, they can get best-in-class, very exciting advances. And so, we don't need to be even at the peak levels that existed during some of the more ebullient years, and yet, cleverly applied - being on the forefront and getting a lot more out of the very substantial investment that's made in the workers themselves - then that's achievable.

Our industry, of course, benefits from the so-called Moore's Law, the doubling of power every two years. The key investments that drive that forward have not slowed down, despite what's going on with the economy or IT spending. If you ask how fast will the chips be over the next six or seven years, it's that same exponential increase. How large will the disk be that's connected up to these machines that are now typically 20 or 40 gigabytes? Those will continue to grow in size, at the same time that, actually, the cost of those are coming down.

There's been an interesting crossover point that we've been looking forward to for quite some time, and that's the point at which the high-volume, so-called industry standard machines that use Intel or Intel-compatible chips will have the performance of the more expensive, proprietary-type systems, which have been lower volume -- classic mainframes or very high-end UNIX-type systems.

We've passed that in terms of price performance a long, long time ago. In fact, that's essentially the business model that the industry standard offering implicitly has, because the R&D cost is spread across so many units -- millions of servers and over 100 million desktop machines. Whether it's chip R&D or software R&D, it's really a very different world.

The one footnote to that, though, is that if you wanted the absolute highest levels of performance, there were still some things that these devices couldn't do. A few years ago, for all but the applications that had to run on a single system, we reached the crossover. That is, in any application, like a Web front-end where you could use multiple machines and split the task up, the industry standard price performance and absolute performance was in the lead.

More recently, as part of the launch of our new Windows Server 2003 product, we - together with HP - showed through a wide range of industry benchmarks, transaction type benchmarks, that actually, even in the most demanding case where the database or another application has to run on a single system, we have passed that crossover as well.

So, this intersection point that I'm showing here on this timeline - we are now just past that crossover. That's a nice thing, because it means in terms of simplicity, what you have to have - development models, development tools and just plain the price of the equipment involved - now, it's not just some applications that can ride this curve, but it's everything that you're doing.

So, that's a very interesting milestone. In fact, one of our top researchers, Jim Gray, who writes about transaction systems and the great things we're doing, wrote a paper saying that it's a zero-dollar business over time, because as the hardware gets more powerful, the hardware prices go down. It is true mathematically that, for the hardware piece at least, the price approaches zero. It's not at zero yet, but that kind of performance really opens your mind to thinking about using these transactions and using these rich systems -- the trade-offs will be very different than they've ever been.
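
Gray's observation can be put in back-of-the-envelope form: if price/performance keeps doubling on a roughly fixed cadence, the hardware cost of running the same workload decays geometrically toward zero. A minimal sketch in Python, where the two-year doubling period and the starting price are illustrative assumptions rather than figures from the talk:

# Back-of-the-envelope sketch (illustrative assumptions, not figures
# from the speech): if hardware price/performance doubles every
# `doubling_period` years, the cost of a fixed workload halves on the
# same schedule and tends toward zero.
def hardware_cost(initial_cost: float, years: float, doubling_period: float = 2.0) -> float:
    """Cost of running the same workload `years` from now."""
    return initial_cost / (2 ** (years / doubling_period))

if __name__ == "__main__":
    start = 20_000.0  # e.g. a $20,000 server of the kind mentioned later in the talk
    for year in (0, 2, 4, 6, 8, 10):
        print(f"year {year:2d}: ~${hardware_cost(start, year):,.0f}")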

Let's look at the different eras that we've been through, going back really to the Internet explosion. By the time of the Internet explosion, which was '97 - '98, PCs were in place by and large, but they weren't all that well connected together. It was really this period, '98 to 2000, where we saw the explosion of a number of things, and, coupled with this, of course, we saw IT spending in absolute terms, as a percentage of capital spending, achieve record levels.

The Internet came along, and there was a lot of discussion of what that meant, some of it in retrospect overly optimistic, but it did mean information accessibility. It did mean that companies needed to drive to have Web sites to do things very rapidly. It meant that some of the ways business relationships were handled were becoming very different.

It was during this era that the number of devices, PCs and servers really exploded, and it was just necessary that if you had a PC for every information worker, if you had servers for all these tasks, you were going to have these large numbers. E-mail exploded, almost became for most companies the standard way that people share information. The number of vendors of software was very large, and because people were in a rush to get the best pieces put together, they often found themselves with literally dozens of software suppliers. And integrating those things together, understanding how to work across those different things, this was the era where it became common sense that enterprise applications - particularly SAP's R3, but also some of the others - that everyone would move to use those as the foundation piece, instead of writing in-house software to do those things.

And so, there were huge benefits. This was an era where people were moving very rapidly, and certainly well over half of what people expected to get out of these things came to pass.

We've gone through a period that I'd say started March 2001, if you use stock price as a nice demarcation, to when the mentality kind of shifted. We've gone through a period where the view of looking at this, the glass that's half full, and seeing what are the harsh realities of what's really not there, and what should be done better: that's been the dominant theme.

So the Internet bust, you can measure that not only in financial terms, but think about what people said about B2B exchanges, where there'd be these middlemen organizations, and all the business would flow through those things. I think of the several hundred of those that were put together, only two or three have found enough value-add that they still exist today.

That vision of e-commerce is still very important, but it will be done without hubs. It will be done with each business essentially being its own hub, and being able to reach out to its suppliers and customers using software to get all the benefits that the hub was supposed to provide - the ability to find everybody out there, the ability to map their data formats into your data formats - and so there really doesn't have to be any friction, any middlemen, in that.

But those dreams were strong. People have looked at the complexity of IT management, all these different packages. You know, do they know if their systems are up to date? Do they know that they can say there won't be any security problems that come along in the systems? Even e-mail, as wonderful as it is, some of these things like spam are now at pretty unbelievable levels. And it's not just spam against consumer e-mail accounts like Hotmail or AOL. I'm sure many of you have noticed that spam has come into corporate e-mail domains as well. And e-mail can often be a distraction unless the tool is very good, and people are smart about using it. In some ways, the time benefits you get, a lot of that can be thrown back away unless it's done in a very clever way.

And finally, the idea of those enterprise applications, as good as they've been: the panacea that you'd be able to really dive into data in a way that's meaningful to manage, or get down to any level of information and run complex processes like sales analysis and forecasting in the most effective fashion. That dream has not been realized.

And so, many people looking at these harsh realities sometimes say, well, this IT stuff, it's messy. Let's outsource all of this. Let's get somebody else to do it. They can get the benefit of Moore's Law, and we'll just sign a five-year or 10-year contract that drives that outside.

Certainly, for some parts of IT, which are very measurable, repeatable type things, there is validity in that. But we're from the camp that says when it comes to defining new applications and thinking about business processes, IT is so central to the way work gets done and the quality of that work, and there are so many opportunities to do that better, that staying in control of this to have it as part of the overall business strategy is very, very important.

Now, the industry saw these harsh realities not just in these last few months. The issues of reliability, cost of ownership, security - those go back three or four years. And so, what we're seeing in the products that are coming out this year and next year is how the industry has responded by making investments to deal with those things. And this is where I think in some ways people are really underestimating what can be done. It's kind of natural if you overestimate what an industry can deliver, and then that you cycle back to where you underestimate those things. But I think we're on the verge of particular software advances that really address these harsh realities.

For example, e-commerce. Many of you have heard about XML Web services, and although that's a fairly technical thing, it's a fundamental thing, because it's the infrastructure that allows companies to exchange information for buying and selling and collaborating without the two IT departments having to build special applications that only relate to that one particular relationship. It's a general way that no matter whose software you use - whether you use IBM's or Microsoft's or someone else's - as long as that software adheres to these Web services standards, that ability to buy, sell, collaborate is available. And not just in the trivial sense of taking the paper invoice and sticking it in e-mail, but in the deepest sense of how you find those partners, how you do secure exchanges with them, how you track the workflows, so that if something is delayed or something comes in that's not right, the electronic path is the effective way of dealing with those issues.
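
To make the "agree only on the message format" idea concrete, here is a minimal sketch in Python of a buyer's system emitting a plain XML purchase order over HTTP. The endpoint URL, element names, and helper functions are made-up illustrations, not part of any actual Web services standard such as SOAP or WSDL:

import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical sketch: the buyer builds a vendor-neutral XML document
# and POSTs it over HTTP; any software stack on the seller's side can
# parse it, because both sides agree only on the message format.
def build_purchase_order(po_number: str, sku: str, qty: int) -> bytes:
    order = ET.Element("PurchaseOrder", number=po_number)
    item = ET.SubElement(order, "Item", sku=sku)
    ET.SubElement(item, "Quantity").text = str(qty)
    return ET.tostring(order, encoding="utf-8", xml_declaration=True)

def send_order(endpoint: str, payload: bytes) -> int:
    req = urllib.request.Request(
        endpoint, data=payload,
        headers={"Content-Type": "application/xml"}, method="POST")
    with urllib.request.urlopen(req) as resp:  # needs a live endpoint to succeed
        return resp.status

if __name__ == "__main__":
    xml_payload = build_purchase_order("PO-1001", "WIDGET-42", 250)
    print(xml_payload.decode("utf-8"))
    # send_order("https://supplier.example.com/orders", xml_payload)  # endpoint is hypothetical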

As people moved to partially do e-commerce, in some ways it was even more complex, because the straightforward information passed electronically, but all the exceptions resulted in phone calls and faxes and e-mail. The understandings created in parallel on the knowledge-worker side had to be carried over into the back-end systems, and sometimes the impedance mismatch there even took away the benefit of having a piece of it be electronic.

The idea that when you have these Web services, that you can capture the full richness of what's going on with complete visibility to the knowledge workers, to update those things and be notified appropriately of things, that is where you get real benefit of saying that the paper approach really is completely obsolete.

Web services are something the industry started on several years ago. We bet our company on it in the year 2000, calling it our .NET strategy, and there's been great progress on that, in fact, lots of interoperability between the different software stacks, the key specifications all put together, and now we have pioneering customers doing very well with that.

The second issue is about managing all these things. The IT department generally measures its difficulty in handling things as proportional to the number of servers or proportional to the number of desktops. That really shouldn't be the case. I mean, the software should automatically keep things up to date, and if you say apply this new version for all of these workers, the software and the network should make that happen. So things shouldn't be proportional to those large numbers.

Well, that takes a new generation of software. It didn't exist on those earlier systems because the numbers just weren't large, and so having manual involvement was not impossible. Here, it is impossible. The vision has been articulated by many companies in the industry. Sun talks about N1. IBM talks about autonomic computing. Microsoft calls it the Dynamic Data Center. And it's really the same thing: it's software tracking the system, and IT only having to see the exceptional cases that can't be solved through that software intervention, so that none of the effort has to be proportional to the large number of systems out there.

Another key thing is this idea of Trustworthy Computing. How can we make these systems so that virus attacks or software holes are extremely rare, and that recovering from those things, even when they do happen, is very straightforward, where you understand exactly what needs to be done to reset so those aren't a problem. This has required a lot of invention by the industry. It's required a new way of doing software development, new testing tools, and the progress here has been even faster than I would have expected.

This is not a completely solved problem, but the rate of improvement, the degree of improvement, the monitoring tools, the ability to audit and say, is somebody following the best practices on these things, has improved very dramatically in the last few years, partly because this became the top priority for the industry. We took products like the Windows Server I mentioned earlier and we actually stuck an extra six months in the schedule because we knew this was such an important thing.

E-mail: People love to complain about e-mail, but if you say to them, OK, we're going to take your e-mail away, then they'll complain even more, so it's that love-hate relationship, and the key is how do you take and do things that get the love part to be preserved and take away all those things about the distraction. And I'm going to show you in this next generation of e-mail products how we think that's been achieved.

Another big advance is the understanding what's going on with these systems. Microsoft four years ago had no visibility when people were using PCs and things wouldn't fit together. If a device driver didn't work, if it was hard to install, if software conflicted with each other, we didn't have a real picture of where those mismatches were.

Now the software has built into it reporting-back facilities, and so for anybody who's willing to let those reports flow back, which 70 percent do, we see those things, and that drives our priorities in terms of who's got to work with these driver makers, who's got to work to make these systems more resilient, having a total picture of what people are frustrated about, what's not working for them, which we call our Watson Initiative. It's been night and day in terms of being able to understand exactly how engineering resources should be applied, and being very numerical about what the PC experience or PC server experience is like.

And finally this issue with these silos of information, serendipitously it turned out that the architecture, this Web services approach that is necessary for e-commerce, also is necessary for moving information even within the company, and so that one approach, we're getting massive payoff for that. In fact, we're using it for systems management, we're using it as the basic connecting architecture for everything that goes on. And so the leverage there, because now we have tools there, we have broad industry understanding, it means information flow within a company, across company boundaries, and then the specific IT issues are dramatically advanced because of that one architecture.

So for any company, our view is you have to think about how software, the software strategy, information sharing, the rich applications that you license and that you create, how those affect the four things I'm showing here: How the people work together, how the processes work, and even simple processes like expense report management or the human resources review type process, is all the information really there in the simplest, most effective form? As you talk to somebody about their headcount budget, is it easy for them to immediately go and see how that's changed and where things are versus forecast? Very basic processes that, I'd say, the state of achievement in most companies is way, way less than even common sense would dictate that it should be. Information about the market and what's going on: there are also people who haven't gotten things that really would help them drive their processes.

Web sites today, although they have lots of information, are not gathering as much customer information as they should, and the way that information flows inside the company is not as simple as it should be.

Creating relationships where you think about key partners and how you do work with them, that's another thing where partly because the security infrastructure hasn't been in place, partly because the Web services things weren't there, people are only getting a little bit of benefit once you've crossed that corporate boundary.

So all of these things tie back into whatever the core business goals are, and really require a strategy focus.

One of the accounts I work with -- this is an auto company -- set goals for their IT budget, and unfortunately those goals were decoupled from the key initiatives in their engineering group to get the design cycles down to be 30 percent shorter than they were. And when you get that kind of decoupling, of course the IT people aren't going to do the things, some of which are very modest in expense: the basic expense of the PCs and the network is already there, and the idea of building these sharing Web sites and getting people able to use them is about a 5 percent increment on top of just keeping that normal infrastructure in place. And yet, because the investment decision and that strategic goal were not tied together, the engineering initiative was delayed, and may not happen in the timeframe that it should.

Jack Welch talked when he was at the CEO conference about how do you get competitive advantage. The people you're hiring come out of the same pool, the techniques that you're using are not that proprietary, and so he talked about this information advantage being the key thing: understanding customers, understanding competition, moving at a speed that you take that information and translate it into action.

And so every software system should be benchmarked by exactly that kind of criteria. And so when somebody says, to take the extreme quote from the Harvard Business Review article, they say IT doesn't matter, they must be saying that with all this information flow, we've either achieved a limit where it's just perfect, everybody sees exactly what they want, or we've gotten to a point where it simply can't be improved -- and that's where we'd object very strenuously.

We have seen in terms of new technology a lot of waves come along: the original PC, where you had the development and you had big adoption; then we had Windows, which was a wave like that. I talked about the Internet wave; really it was the biggest, because it moved out from just the individual to the way that organizations worked. And we have some of these waves that we're just at the early cusp on, things like wireless networks. Of course, these Tablets are connected up to a wireless network. Everywhere at Microsoft, that wireless network is in place. And so when people go to meetings they take their PC, they have the latest information, they can write their notes, share those with other people, check information, so we just kind of take it for granted. Wireless networking: one of the beauties of this Wi-Fi is that it doesn't have any ongoing costs. Once you get it into place, other than maintaining it, it's there, it's simply using spectrum that's available to everyone, and so as an increment to the base network cost that you have it's not that significant.

So that's just catching on, catching on in the home, catching on in the places that people travel, so-called hot spots, catching on inside corporations. I'd say about 20 percent of corporations have done what we've done and made it something that's pervasive in terms of what they do.

If you look at the investments that are made in IT, it breaks down into some very significant buckets. Licensed software tends to be, even if you take all the different applications, on the order of 5 percent or so. The biggest expenses -- and the ones against which new software advances will really be measured -- are these other pieces. Letting you take advantage of low-cost hardware: I'd give the industry a very high grade for doing that. Letting you rationalize your network cost so that you can benefit from the price declines that are taking place there: a good example of that is that we now allow people to take their mail servers and put them all in one place, so you only have the mail servers in one location for a global enterprise. That makes it a lot easier to pool those things and to have the IT expertise, as opposed to before, when, because of network delays and sometimes reliability issues the software didn't get around, you had to have the e-mail servers out on a very distributed basis.

Making the IT staff not have to visit desktops, so that as they diagnose issues those issues come to them in a very high-level form, and making sure that whatever IT services get used are not applied to writing essentially glue code that just connects applications together. I think that we'll see very substantial changes in how these investments are made, but the enabling factor is the advances that come in that platform and application software.

Because of the volume economics -- if you take the aggregate R&D of these things together, it's literally billions, about $40 billion, of which Microsoft would be about $6 billion -- the advances will come very rapidly here and allow for either cost reductions or more effective use of the other parts of those investments.

Well, let me now switch and talk about some of the particular things that drive our excitement and optimism about how people are going to work, and how that's going to change information flow inside a company.

Communications today is nowhere near the ideal. Your multiple mailboxes and phone numbers, and the different devices and people trying to get a hold of you, a lot of integration could be done here, and that means integration across the different devices, integration between the PC and the phone and allowing the PC to be more than just e-mail, letting you communicate in real time and share and do things together. We're investing very heavily in this because any improvement to communications has this very dramatic effect. Knowledge workers spend most of their time in communication activities.

The idea of collaboration, sharing information, this is another area where the choices have been pretty limited. Web sites are very hard to build. If somebody in the office says, OK, I want to make a new Web site, they have to go to IT and get it approved, they have to use complex tools, so they're not likely to share that way. Sharing files -- all you get is a list of files up there. And the final way of sharing, the most common right now, is just doing enclosures in e-mail, but that doesn't let people see the different versions, your e-mail gets flooded, and you have different people working in parallel with documents that may be out of date. Really what you want is that Web site, but a Web site that anybody can just sit down and create without having to go to IT, without writing a line of code: pick a template for the Web site, and then easily customize what they want to create on it. This kind of sharing and collaborating is a big step forward for us. We call it SharePoint.

So, just to give you an example of some of these communications tools, and how the PC will be viewed in a different way in this, I would like to ask one of the people who has really driven some of these products, JJ Cadiz, to come out and show us real time collaboration.

Good morning.

JJ CADIZ: Good morning. Thanks very much.

It's my pleasure today to be able to show you PlaceWare, which is a technology that you can get in your company today to drive a lot of improved productivity by enhancing telephone conference calls you may have in your company, but also to reduce costs by reducing the amount of travel that happens in your company to talk between distributed team members.

So, let's go and take a look at PlaceWare. What is PlaceWare? PlaceWare is a Web-conferencing application being used to basically take any Web browser and telephone to have meetings online. And when we talk about meetings online, we're talking about meetings with remote colleagues, with customers, and also with partners. And when I say meetings, I'm not talking about just the small one-on-one group meetings that are out there, but also huge group meetings. So we're talking about meetings of up to 2,000 people. So a lot of times those would be meetings of a whole division.

Now, I'm actually using PlaceWare to talk about PlaceWare here. So in the main screen, what you see here is PlaceWare, and we're actually logging into a room that I've created for this demo, and in the lower right hand corner you should see Bill is actually logged into the same room. That way we can give you the sense of the two different rooms, and how they look, and also how quickly things update.

Let's talk about the basics of PlaceWare. As you think about meetings, there are three major stages: the before-meeting, the during-meeting, and the after-meeting. Scheduling a PlaceWare meeting is very simple: you just go to a Web portal and schedule it, or you can use the Outlook client to schedule it when you set up the meeting in Outlook.

Now, when it's actually time to enter the meeting, you can do things with Messenger, you can click a link within Messenger to go in there, you can go into the same Web portal to do it, or you can just go to an e-mail invitation, which is probably the major way that most people get into PlaceWare meetings. Let me show you an example of one of those e-mail invitations. I have the ones here for the demo that I'm showing you here. And all any employee would have to do with a Web browser is just click this link, and they'd be able to get into that meeting.

Now, if I actually wanted to show someone an example of this e-mail (you'll notice that in the lower right-hand corner you aren't actually able to see it), all I would have to do is hit the snapshot tool here in PlaceWare, position it over the e-mail, and then just take a picture. Let's go ahead and do that here. So now we have that within the PlaceWare client, and then all we have to do is say, here's the link you need to click; you can just kind of ignore these other links down here. So you can do things like that.

Now if I actually wanted to show someone a live demo of something going on -- let's say an Excel spreadsheet that we're working with -- I can just bring it up here and do an application-sharing slide in PlaceWare. I'm going to do that. So, I just position the frame and press Play, and anything that's in the frame will show up in the lower left-hand corner. Now, the cool thing about that is that I can, of course, have my e-mail client over to the side here and be doing things, knowing that it's not actually being shared with anyone else.

I can also show live technology demos, saying here's the way you can modify Excel pie charts and such.

The other thing is, I can also give other people control of this spreadsheet. So, if I wanted to, and someone else had newer numbers up here, they could go there and I could give them control to modify the numbers within here.

Now, PlaceWare also has a bunch of other tools. And one of the most fun tools we can use is asking questions of people who are in the room. So, let's say that I have all of you in this PlaceWare room here, and I want to ask you a question. All I have to do is press the polling slide, and I could ask you a question like, how about, what should Microsoft's next dividend be? So, I could say zero, I could say the same thing it was last time, or, a little wild thing, what do you think, $1.50. You can vote here, and you get a live update of how people are voting. We can look at each person's vote, so we can see how people are voting from the presenter's view over here. So, polls are very easy to create. And the other cool thing is that all of these poll answers are kept for me later on. So, the presenter will actually get all the poll answers later on. So, you can imagine even being able to use this to grade students if you had them in a class in PlaceWare.

And I also get a list of all the attendees who showed up. So that way, if I knew 100 people were supposed to show up, and only 95 did, I could actually go back to those five people and say, hey, if you want to look at the recording, it would be great if you could do that, because with PlaceWare that's as easy as bringing up the recording control panel and pressing start here if you want to record any meetings. Now, the other nice thing is that the meeting rooms that you use within PlaceWare all persist. So, if you want to go back and visit them afterwards, you can do that.

So, PlaceWare is something that we're very excited about here at Microsoft. The last fiscal year we actually spent over $3 million on tools like PlaceWare to help our employees cooperate, and now that we have PlaceWare, we're using it in a variety of settings, including regular meetings between employees and field offices, and employees here at Redmond. We are using it to present to customers. And even at one of the last board meetings, where not all board members were able to attend, we actually had them attend remotely using PlaceWare, so there are a lot of great examples of methods that you can use PlaceWare for out there.

Now, what I would like to do is change gears and switch to a second demo that also has to do with real-time collaboration, and it has to do with a real-time collaboration tool that all of us are probably very familiar with, and that's this thing sitting right here, the common telephone that all of you probably have on your desk. Now, the telephone is a very powerful real-time collaboration tool, but it doesn't right now interact at all with the PCs that we also have sitting on our desks.

So, the enhanced telephony demo is all about how we can help bring together PCs and telephones to create a powerful user experience for people. So, I'm going to bring up the Enhanced Telephony client here. And the way we have this set up is that it integrates with existing systems: we're not trying to turn the PC into a telephone or replace existing telephone systems, but rather to integrate with the systems that are currently out there.

So, let's get the music going here, that will be part of the demo. And when we talk about helping the PC and the telephone to be better together, what do I mean? Well, I mean kind of two specific things I'm going to show you, and the first thing is that it should be really easy to dial someone. Any time I see someone's name or phone number on the PC screen, I should be able to dial it. And, of course, it sure would help me when I'm receiving calls as well.

Let's go ahead and do the scenario where I want to call Bill. So, here's my favorite list of people to call, and so I'm going to call Bill. Now, what just happened there? First of all, notice that the music isn't playing, right, so the PC knows that I'm using the telephone, so it can mute the music any time that I'm using the telephone. We have a phone set up here to be me, and we have that phone set up to be Bill. So, when I click the link, what happened is that the phone went off hook, it actually went to speaker-phone mode, and then dialed that phone sitting over there.

Now, once I'm in the phone call here with Bill, I can take notes here; you can imagine these being ink notes if it's on your Tablet PC over there. I can also do easy conference calls and transferring of calls, because the interface on the phone itself sometimes makes that difficult -- when I transfer calls I forget how to use the transfer button on the phone. I can share my screen with Bill using PlaceWare-type technology. And then, once we're ready to hang up, we can just press the hang-up button here, and we go back to where we started from.

The other cool thing about merging together phones and PCs is I can allow people to search across the entire corporate directory as well as their personal directories for phone numbers. So, if I went to look for Laura C., not only would it show the Laura Cadiz, who is in my personal address book and all her phone numbers, but also all the Laura Cs in the Microsoft Corporation, OK, and I can dial them just by clicking them.

That's how we make things easy to dial, and how to make people easy to dial. So, again, imagine anywhere in the operating system where you see a person or a phone number, you can just click it and dial it.

Now, let's talk about the incoming calls here, all right. What happens if Bill calls me. Let's go ahead and do that. Okay. So, there are two different things that are going on there. One is that you might have heard faintly in the background that the PC can actually act as the ringer. The other thing is, notice that I didn't actually answer that call. And because I didn't answer that call, ET automatically sent me an e-mail that says, hey, Bill Gates just tried to call you. That way, if I wasn't sitting in front of the exact PC where I was running ET, I could tell that people were trying to call me.

The other cool thing is that the only thing coming out of the phone here is a normal phone cord; there are no other wires. That means I can receive notifications of people trying to call me no matter where I am. So, let's say I have the common cell phone sitting here with me, and I'm sitting in a Florida hotel room like I was a few weeks ago, and I get a notification of an incoming call. I can actually hit the transfer button here and attach the call to my cell phone, and that way, even though I'm in a completely different state and not next to my phone, I can still take calls. Now, because ET can also programmatically transfer phone calls, you can imagine setting up a variety of rich rules -- for example, any time I'm not at my computer, forward all my calls to my cell phone during business hours, and during home hours forward them to my home phone, but only if one of my coworkers calls me, not if any other random person calls me.
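
Rules like the ones described here amount to a small decision table over presence, time of day, and caller identity. Here is a hypothetical sketch in Python of what such routing logic might look like; the field names, phone numbers, and the route_call function are assumptions for illustration, not ET's actual interface:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Call:
    caller: str
    is_coworker: bool

# Hypothetical routing rules in the spirit of the demo: presence,
# time of day, and caller identity decide where the call is forwarded.
def route_call(call: Call, at_computer: bool, now: datetime,
               desk: str, cell: str, home: str) -> str:
    """Return the number the incoming call should ring."""
    business_hours = 9 <= now.hour < 18
    if at_computer:
        return desk              # at the desk: let the desk phone ring
    if business_hours:
        return cell              # away during business hours: forward to the cell phone
    if call.is_coworker:
        return home              # after hours: only coworkers reach the home phone
    return desk                  # anyone else falls through to the desk phone / voice mail

if __name__ == "__main__":
    call = Call(caller="Bill Gates", is_coworker=True)
    print(route_call(call, at_computer=False, now=datetime(2003, 5, 21, 20, 0),
                     desk="+1-555-0102", cell="+1-555-0100", home="+1-555-0101"))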

Now, the cool thing about ET, if we could go back to the main slide, is that ET is not actually just a demo that we bring out on the stage every once in a while, it's actually something that's deployed throughout the entire Puget Sound Microsoft campus. We first made it available on January 9th, and only told about 150 people, and didn't do any advertising, and we relied on just word of mouth to spread the word about ET, and right now we have over 5,000 people who have installed ET on the Puget Sound Microsoft Campus. It's also an excellent example of a technology that can be available using .NET, over 90 percent of ET is built completely using .NET.

Now, the feedback on ET has also been very good from our users. We've had a few unsolicited feedback e-mails that people sent us, saying a variety of things, like, just because ET is able to make me more available, it often saves me on several big issues. And I especially like the second quote, where Alice talks about the fact that one time she was sitting in a meeting, not next to her phone, and saw that one of the managers had tried to call her. She wasn't able to excuse herself, but she was on e-mail because of her laptop, and was able to resolve the manager's issue there immediately.

These are two examples of real-time collaboration tools that we're working on for you, and the enhanced telephony demo specifically, that's working right now within Microsoft. We're going to need a lot of help from companies represented here in this audience to make it a reality for everyone else, and we very much look forward to partnering with those companies.

Thank you very much.

(Applause.)

BILL GATES: It's interesting, getting the phone and PC to work together: a lot of people thought that required you to change the whole telephony infrastructure to work across the Internet, the so-called IP-based approach. But it turns out that that example uses a traditional PBX; it's a fairly simple piece of software that talks between the computer network and the PBX network. And so even without changing out any of the existing infrastructure, you can start to get these benefits. Likewise with PlaceWare: because Internet telephony, also called voice over IP, isn't high enough quality yet, what most people do is place a traditional phone call in parallel with that screen connection. And so they get the best of both worlds: the screen interaction, and yet the voice quality and all that is the same. And it all gets set up, when you click to join Net Meeting, in one simple step.

I'm going to quickly talk about some applications that our IT group has built. This is along the lines of the things that were exploding in the late '90s; there have been some really concrete benefits that came out of them: procurement benefits, with all purchasing being done electronically, making that a paperless process; invoicing made a paperless process; expense reports made a paperless process, where you can see the history of what somebody has done on expense reports and simply approve them electronically. This is pretty straightforward stuff, and you can see the time between starting the project and rolling it out is, in these cases, on the order of six months. As for the actual volume of these systems, because the servers have so much capacity, handling 400,000 orders, or a million invoices, or 200,000 expense reports does not overload a single server running those applications, and that's a $20,000 piece of hardware. So really the software is the key element here. I'd expect that if we surveyed your companies, about half would have done what's described here, that is, created a very straightforward user interface and moved these processes into a paperless fashion. These other two examples I don't think we'd see at quite that same level of usage at this point.

We have another thing that's been very important for us, which is taking all our information about customers and bringing it into one place, the Siebel information, the SAP information, the Ad Hoc information, and letting you navigate that in one way. This is a classic problem, and it's one where the Web services approach is very apt, because every one of those major applications lets you pull the data out, and then create the user interfaces that make sense for the various people. And depending on their role, that user interface is going to be very different, you want to be able to refine that, you don't want that locked in by somebody outside the company, you simply want the data sourcing out of their rich applications into the single user interface front end. And so I think, although only about 20 percent of companies have done this, this will become pretty standard stuff.

We've also taken most of the processes and said, OK, we're going to create a SharePoint portal for every one of these processes, the task activities where they found savings opportunities by seeing what could be done in different venues that were pretty dramatic, paying contingent staff, employee self service, we had a lot of paper forms related to these things. One way to find the opportunities is to go where the paper is, and although that's been said over the years, I think that's still really the best way to see where things can be done better.

Now, these are administrative systems. The actual biggest benefit to us of the collaboration software is in our product development process. That's harder to benchmark, because we're doing unique products; it's not like there's some other company there. But with our use of Web services and sharing, I'm certain the impact of these typical back-end systems is outweighed by more than an order of magnitude by how it relates to our basic product design, product testing, and product collaboration on those different product issues.

We mentioned that the Tablet is something that we've embraced pretty extensively. The Tablet launched late last year, I think the first unit shipped in November. Those of you who were here last year got a chance to use the early Acer prototypes; actually that machine is now moving to a second version that's even faster, and a really very fine piece of work. There's an HP Tablet that has a convertibility thing that's really great. There's a Toshiba, sort of a no-compromise in terms of its rich portability. There's a Gateway machine. There's quite a few of these with different design points in mind.

We've got over 5,000 now, and if you go to a meeting, it's typical now that people are taking their notes and sharing their notes. We don't even bring PowerPoint printouts to a lot of meetings, because it's a lot better if you have the PowerPoint where you can take sales figures and dive in and say, what was that by country, or time period, or product. If you have it on paper and somebody wants more detail, maybe you anticipated that and put it on slide 200 in the appendix, so you say, aha, I knew you'd ask that, I have the Japanese expansion on slide 200, and you flip to the paper. That means you have to hand out something that anticipated every possible question the executive might ask. It's a lot better to simply navigate a live presentation that's connected up to that data. So there's no doubt we are being a pioneer on the Tablet. There is no other corporation that has 5,000 Tablets in use today. And yet we see this spreading over the years to come. But all portables work this way, and even more of these machines will be portable-type devices. The pace at which the device is getting thinner and becoming an attractive reading device has been pretty substantial.

I thought I'd do one last thing, which is just to give you a sense of how I work in a typical day, using these tools: what I'm doing sitting in my office. I've got here a pretty nice system, this is a 23-inch LCD, and you can see it's got an interesting aspect ratio to let you see lots of information across like this. This is still a pretty expensive display, they're about $2,500, they've just come out, it's a Sony display. These will come down in price over the next three years, we think, to about a third of that, to $700 or $800. So even though today maybe only the executive staff should have these things, they are going to be commonplace. You've probably seen a rapid shift in your company from CRTs and desktops to LCDs because the 15- and 17-inch LCDs are already at that $600-700 price point. So we've reached the crossover point where for any new system the LCD is superior, partly because of the text readability, partly because it requires less desk space. But as those LCDs get larger you'll see a couple of cases where that extra screen area really is very helpful in terms of the productivity you get out of it.

So the computer is getting smaller, while the display is actually getting bigger. I'll just go into e-mail... I'd say that of my time sitting in my office, that is, time outside of meetings, which is a couple of hours, two-thirds of that is sitting in e-mail. E-mail is really my primary application, because that's where I'm getting notifications of new things, that's where I'm stirring up trouble by sending mail out to lots of different groups. So it's a fundamental application. And I think that's probably true for most knowledge workers, that the e-mail is the one they sit in the most. Inside those e-mails they get spreadsheets, they get Word documents, they get PowerPoints, so they navigate out to those things, but the center is e-mail.

So here you can see I've got a lot of different e-mails. I'm in my Inbox here, and you can see I've got a bunch of different folders here, and these are actually the e-mails, and I'm using this three-pane view, so that when I navigate to different e-mails I'm automatically seeing the e-mail there. And that works, because I have this big screen, so I'm not having to go in and out of the e-mail a lot, I can navigate around with a lot of speed. Now, let's say I have an e-mail here; there's a classic question when you get e-mail, is it something you can just read and say, OK, I'm done with that, that's very satisfying, you just delete it, and never think about it again, but often it will be something where you'll send a reply, and expect a follow-up, or you think, oh boy, I'd better read that more in-depth. What you want to do is just flag it, you'd want to just right-click and pick some kind of flag, I picked a blue flag there, and then have that indicated, and then be able to see all the mail that you've flagged. So what we have here is a thing called For Follow Up, and it sorts it according to which things I put into the different categories, the blue, the yellow, the green, and that's just a view on the e-mail. I didn't have to move things into different folders. Moving things into folders is a lot of trouble, because then you have to navigate all those folders to see what's in there. It's easier if they're all still available in one place, and yet you just do this simple flagging. Although you can also move them down into those different folders.

One thing that you often want to do is look for your unread e-mail. Typically, if I see something and I think, gosh, I don't have time for that, I mark it as unread again, but then I want to navigate those things. Here we just have automatically a place where you can click and see all those unread items, and navigate through all the different things that are unread, and decide exactly what you want to do about those things, do you want to forward them off to someone, or what might make sense. And you can group it, you can do it by time, you can group it by the people involved, that's a pretty interesting thing. You can see the way it's displaying this e-mail, it's a lot more terse than it's been in the past, and the reason is that it understands that this is the mail from today. I hope the font is big enough to read that. So this group here is mail from today, so it doesn't take up a lot of space describing that, it's just the mail from today, this is the mail from yesterday, this is the mail from Monday. And so when you squeeze these things down to use less space, it knows that it doesn't need to display that date information, because it's today, you just need the time information, and that's why you can see it in such a succinct fashion. And having things organized that way is pretty natural. I think, OK, that was yesterday, and if you go down here you'll see Monday, and then it starts to group it into bigger things, last week, so this is everything that's in that group. And then you can select all of those different things.

One of the problems you get with e-mail a lot is, there will be an issue that's a controversial issue, and so one person will send that mail to 20 people, saying, we should do it this way. And somebody will respond and say, no, we shouldn't do it this way, we should do it this other way, and then somebody will decide we should add some more people to this discussion, and people start disagreeing. And say you've been gone in a meeting for three or four hours, you can come back and your e-mail box can have 20 messages that are all on this one thread, where people are pingponging back and forth. Maybe people try not to expose the CEOs to that too much, but that is life in e-mail, that you get those things going back and forth. So sometimes what you want to do is collapse things so that it's mail that all relates to each other, you just want to see it as one item, and see a conversation. And so you can just use this view button here, which is what you use to control whether you have this right pane, you can just say, do I want to group conversations together? And so you can have different folders that you've set up to show things that way. So here we have people talking about a hot controversial topic, user interface discussion. And instead of seeing the individual mails you just have them so you see the most recent, and then you can go down and just see the ones you haven't read, or see all the different ones that relate to that. You can see this topic, people going back and forth, an immense amount, and yet you can just see the most recent, you can just see what's there. So having conversations as something that e-mail understands is very helpful.

Another thing that's painful, has been painful, is dealing with calendars. Here I can see my calendar, I can see that as seven days, five days, today's calendar, but I may want to see other people's calendars. Now you can just click on the different people and it will bring in their calendars, and you can see the different things they have going on. It uses different colors, so it's kind of nice. Now the system can automatically try and find free dates, but, in a way, you often want to see the calendar; assuming they've marked it, you can see their calendar. They can choose either to share nothing, they can choose to share busy, or they can choose, in this case, these are workers who have said, at least for this colleague, I'm willing to share the whole calendar. So you see that all there, side by side. And you can actually take events, like I can take the CEO calendar and publish it, so it will show up here as one of these things I can show up side by side, or you can take your regular executive meeting calendar, and have that be one of these calendars, so you can always see that, and either copy it onto your calendar, or have it there by itself.

Junk mail is a big deal. We do have a lot, but we're getting smarter at automatically moving things. And you see here, there's this junk mail folder; things are automatically put in there. Right now the junk mail people are getting smarter as we get smarter, but we have a few tricks that we don't think they'll have any counter-tricks to get around. So that's something we think we can defeat, even though it's at a pretty high level right now.

JJ talked about being able to call people, or connect up a PlaceWare. Any place I see a name, I can click on that name and choose whether I want to call them, send mail to them, do a PlaceWare with them, any one of those things is an option that I can get into. Another complaint that we have about mail is the fact that when you send mail out to people, it's very typical that somebody gets something, and thinks, well, this is kind of interesting, I think I'll forward this on to somebody else. And then they think, well, yes, it's really interesting. Let me forward it to someone. And sometimes when we send mail, we have the sense of, well, I might as well publish it in the newspaper the way this stuff gets out there. What we've done with mail is, when you create a piece of mail now, what you'll be able to do is pick a mail template. So, you'll have a mail template, say, for attorney-client mail, or mail that's only for people who see the early financial results, and so you pick one of those templates. And what it's doing is controlling that mail; you set up the templates to decide who can receive it, can they print it, and you can also make it expire. So, if you send information out that, say, gets out of date, you can tell it that, OK, after that date, the information is no longer viewable. And so people know to go and get the most recent information instead of the thing that's out of date.

So, this information-rights management, where you have a sense that you can control the spread of that e-mail, and the software is helping you do that, we think that's a fairly important thing, because otherwise people think of e-mail as something that's just simply got no boundaries. E-mail is the biggest thing.

The other thing that I do is I go to these shared Web sites. Each of our product groups or project groups will have a Web site, and at that Web site you have information about the documents those people work on together, the calendars that those people have, and so it's a way of working together. And it's a shared Web site that's got all the different information that's available up there. These are the things I talked about people being able to set up on a very straightforward basis. Typically -- this one here, you know, is a CEO thing -- you can see the calendar is here. I used the standard template for this one, and if I ask for the contacts, it's just going to show me all the different people who are attendees to the meeting, so that comes up pretty well. I can put so-called Web parts in here, where you have information from business systems. This is too small, I should grow the text up. So, here I have stock prices from various people here at the meeting, and it's live. I hope that doesn't shock anybody. And I've got live news coming in from MSNBC. And you can have these Web parts that, say, connect to your project management system and show the current schedule for things, or show the sales results, are they at forecast, below forecast, and different Web parts belong on different pages that people have.

I've got a document library, and people can check things out, and check things back in here. Let me just finish up with two quick documents that are in here. One is a little spreadsheet that I have, and I just created this, it's almost kind of a humorous thing. But what I did is, I created a spreadsheet that connects out actually to Amazon, and looks at which books are important. Amazon actually has one of these Web services. And so you can go in, and for any book, you can ask how well it's selling. And so here -- and it didn't take long at all -- I created a spreadsheet that shows, for publishers who sell computer books, of which Microsoft is one through Microsoft Press, exactly what the best-selling books are, and exactly what rank they are. And so because of one of these pivot-table things, I can just take this and I can restrict it to a different publisher, or I can just go back and show everybody, and sort this in different ways. And so it's a live view. Typically this is used for sales-type data. Here is the pivot table that just shows who's got the most top-sellers. I can click in to drill down on that.

I've also created a live connection where I can type in any author and see how well their books are selling. So, I can type in...my books are really old, so I'm sure they're not selling that well. I just click Go, and what it's doing is, it's going out to the Amazon web site, and seeing everything that's got my name associated with it. I just have the ones that I directly authored, and I see here, OK, I've got one that's the 138,000th most popular, 88,000th, oh, 400th, that's not too bad. And honestly I didn't go out to buy any this morning to try and make this thing work better.

And I can even do some comparative things here. Let's see how Michael Dell is doing as an author. And so this is live information that's up on the Web site. At any time you can go do this, and typically you could hook this into internal systems. I don't know if the Internet connection is defeating me here or what. Did I not hit Return? Oh, that was my mistake, sorry, I didn't hit Return.

So, I'm a little bit ahead, his is the 401st best seller. It looks like Michael and I need to come out with a new book to move up on the charts there.

Let me just show you one other document here, and this is to talk about reading off the screen. As soon as you print a document out, then of course you've had to fix not only what information goes into it, but you've also got the problem that if you make comments on it, getting those comments back to the person is fairly inefficient. And yet reading off the screen has been very painful, and it's been a Holy Grail for us, and you've heard us talk about this over the years, making screen reading something that's very comfortable, even for fairly long documents. And so here I have a document, and you can see it's noticed that I have a wide display. So, if I scroll through this, it's showing, side-by-side, two pages. At any time I could say, well, this text isn't big enough, and I can say, OK, I want to make the text bigger here, and just click this and make it as big as I want. So, if your eyesight is not as good as it used to be, and you like to read it a certain way, you can do that. You're not affecting the document, you're not re-editing the document, it's just doing this dynamic layout, and it goes down in size.

Over here I turn on what are called thumbnails, so you can see the different pages and what they look like. And I can just click: when I click on a page thumbnail over here, it navigates to that part of the document.

Now, before we had this, when you were reading online, you had to sit and scroll with the scroll button on the mouse, you had to scroll line by line. And all usability testing showed that was a really terrible way to read documents. You really want to read it page by page. And so here, as you roll the scroller, it's just going page by page. So you don't get things spread across the pages in a very strange way. It's a very nice way to see a document. You can see the document outline here, if you want, instead of the thumbnails. Now, we're using the quality of this LCD, and the fact that we have ClearType capability, in order to make this work very well. Another thing, since it's online reading, at any time you can point to a word and just click on it, and we go out and we find you information. Here I clicked on the word diagram, and so I'll see in the dictionary the definition, I'll see the thesaurus. I can pick any language and translate that word into Japanese, or I can pick French, and see what it is in French. I can also do this so that when I pick a term it goes out and looks at a web site. So, for example, if you subscribe to Dow Jones Factiva, you can take something like a company name and ask to go and do a search on that, and make sure that one of the sources it looks at is the Factiva news search. And so I picked NEC; I could have clicked on that in the document to search, and it would go out and see all the latest news and information about NEC.

We often have code names for products, so I'll see documents that have these code names, and I don't even know what the code name refers to. So I just click on the document and I say for it to search, and I pick the Microsoft Web site, all of our different Web sites, and I can see exactly what's going on with that. And so the fact that you can take any of this text, and use it to navigate and get information, is another thing that you only get with online reading.

Hopefully that gives you a little bit of a sense of, during those hours I'm not in meetings, what I'm doing. Both with your direct Tablet experience -- and Jeff Raikes a little bit tomorrow is going to talk about when you're in the meeting and you've got the Tablet note taking -- how that changes that part of the job, which, I'm sure for a lot of us, sitting in meetings is actually even more hours than it is sitting in front of the terminal.

There are a lot of breakthroughs coming; I won't dive into these: the ease of developing applications, which has been very hard for corporations, because they're duplicating a lot of things and the code is being written at a very low level. The idea of being able to navigate business data in terms that you understand, by division, to really have schemas that let you navigate not just at the cells of the spreadsheet, but the terminology that makes sense to you. There are some big software breakthroughs coming that will turn this from sort of a low-volume specialized market into something where every worker has these rich views of profitability and sales, and things like that. Speech is coming along; it's not mature like handwriting is, but it's within a few years of that, so that making voice annotations, of course, that's easy, it doesn't require recognition, but even giving commands and navigating with speech will be possible. And the connection of the phone and the PC, we're going to see some dramatic things there, where the phone right now is a challenge for IT departments to manage, because people are downloading information onto it. By doing integration we can allow that scenario, but still have the connectivity that people expect.

So the conclusion for me, in terms of how you should think about IT investments and where they're probably most effective in making a difference, is that it's important to see that although those harsh realities are there, the industry is responding to them: software advances are the key breakthroughs that turn back those realities. There's a lot that can be done in empowerment. And as long as these breakthroughs are coming along, it's worth it to give those people the best tools. And one great thing is that there are many best practices; for any one of these things that I've talked about -- Web services, how they're using e-mail, how they're using SharePoint -- there are pioneering companies doing pretty neat things. And so it's not like people are off on their own. They can move out with some certainty, seeing what the pioneers have done.

So we're pretty excited, obviously we must be, we're still increasing our R&D budget, up from the $5 billion level, and I think that will be fully justified. So we look forward to working with you on putting some of these things into practice.

Thank you.

(Applause.)

EE Times - Intel's Barrett fires back in IT relevance debate

Referring to the current debate about IT's efficacy, and specifically to the HBR article which he said "best articulates the pseudo-populist theory," Barrett said Carr's suggestion that "IT is a commodity infrastructure like roads, the internal combustion engine and electricity," absolutely misses the point.

All of those common infrastructures are infrastructural elements that allow you to make or move material; they don't allow you to put intellectual content or value into what you are doing

IT, Barrett rejoined, "is the vehicle to put value in what you are doing. Therefore, if you want to have a high standard of living, if you want to have a progressive economy and if you want to be competitive around the world, you either have that infrastructure or you don't. If you don't have it the jobs will go somewhere else, which is why I have said you have the possibility of a jobless recovery [in the U.S., western Europe and Japan] if you don't have that IT infrastructure upgrade.

Barrett went on to say that "economies today are measured in terms of intellectual content that is embedded in the products they sell. IT is the vehicle by which you take information and data and turn it into intellectual content. Either the U.S., western Europe and Japan invest and upgrade or they will see a somewhat devastating jobless recovery and the job will go to where the educated people are and where the tools and infrastructure are.

"Intel can't design a next generation chip without IT infrastructure, Boeing can't design an airplane and GM can't design a new automobile without an advanced IT and communications infrastructure. And you can't index the human genome or tailor drugs on individual DNA makeup without IT infrastructure. To me it's a no-brainer. The only question is when do we start to invest?

johnhagel.com Where Business meets IT

IT Does Matter
(written in Collaboration with John Seely Brown)

For those of you who have not seen it, the latest (May 2003) issue of Harvard Business Review has an article that will have significant impact in the business world. The article, by Nicholas Carr, an Editor at Large for HBR, is provocatively, but somewhat inaccurately, titled "IT Doesn't Matter". Carr doesn't actually say that in the article -- instead, he argues that the opportunity for strategic differentiation through IT is rapidly diminishing. While he acknowledges that IT is essential for business operations, he makes the case that IT should be managed as a commodity input, squeezing cost out of IT budgets while at the same time ensuring that IT platforms deliver the necessary reliability and security to avoid business disruptions.

We believe this is an important article because it very effectively captures the backlash sweeping through executive suites against IT spending. Certainly much of what Carr writes is spot on: companies have spent too much on IT in the past with only minimal (if any) returns, and there is a need to focus on the increasing vulnerabilities we face as we become more dependent on automated operations. But Carr's article is also dangerous because it endorses the growing view that IT offers only limited potential for strategic differentiation.

We ended up writing an extensive rebuttal to Carr's article that will be published in the July 2003 issue of Harvard Business Review. In the meantime, we thought we would briefly recap the three key points we made in this rebuttal, so that we could at least make our voices heard earlier in the debate that is sure to develop around this article:

Carr refers to previous technology innovations like the railroad and electricity to make the claim that rapid early investment in the technology is soon followed by commoditization. We argue that IT differs fundamentally from these other technology innovations in two key respects. First, performance improvements in the underlying technology components have proceeded at a faster and more sustained pace than in any of these previous technologies. Second, the performance improvements in the technology components have enabled a series of architectural shifts from centralized mainframe architectures to client-server architectures and, more recently, to three-tier architectures. Each of these shifts has amplified the power of the underlying technology components, in part by creating more flexibility in the deployment of these resources. In contrast, previous technology innovations began to stabilize and commoditize as a dominant architecture emerged (e.g., think about the standard railway gauges that helped to connect tracks and establish a national railway system). We have yet to see a dominant architecture for IT emerge. In fact, we believe we are on the cusp of another major shift toward a true distributed service architecture that will represent a qualitative breakthrough in terms of delivering more flexibility and fluidity to businesses.

In other contexts, John Seely Brown has championed a perspective he describes as radical incrementalism. This perspective emphasizes the role of architecture in facilitating the ability to rapidly build and deploy radical new components. With an appropriate architecture, radical individual components can significantly amplify their impact. We believe that distributed service architectures will be exactly this kind of architecture in terms of amplifying the innovative potential of individual technology components. But it won't stop there.

Distributed service architectures have the potential to create a powerful virtuous cycle when coupled with the FAST strategy outlined below. By amplifying the potential of individual technology components, these technology architectures will expand the range of options available to business executives in terms of how they organize and run their companies. The FAST strategy approach helps business executives to innovate business practices in rapid increments, focused by a longer-term view of the opportunities and requirements for business success. Thus, the technology architecture will amplify options for business innovation and the FAST strategy approach will accelerate the innovation process -- giving businesses powerful tools to build and deepen strategic advantage.

Bottom line, far from believing that the potential for strategic differentiation through IT is diminishing, we would maintain that the potential is increasing, given the growing gap between IT potential and realized business value. For the more detailed development of this position, you will unfortunately need to wait until the July issue to read the full letter.

[May 5, 2003] Ringing the death knell on tech's high-growth by Steve Lohr

He makes two critical points that are sometimes being lost in the current debate: ". . . it is possible to agree that technology can deliver broad productivity gains without necessarily delivering higher profits or competitive gains for individual companies, a point made by Mr. Carr. It is also possible to agree that the technology industry continues to be innovative and important, without also accepting that it will be a growth industry as it has been in the past."
May 5, 2003 | NYT

Martin Pichinson is one of Silicon Valley's undertakers. His company, Sherwood Partners, has carved out a prosperous niche as an expert in shutting down failed technology start-ups - 150 in the past two years, and Pichinson figures that thousands more are destined to fold.

"We're doctors of reality," he said.

The winnowing of the corporate population is just one sign that the information technology industry is maturing in ways that will affect technology companies, their customers and investors for years to come. But what is painful for Silicon Valley is beneficial for those who use the stuff it produces.

The industry, according to Irving Wladawsky-Berger, a strategy executive at International Business Machines Corp., has entered "the post-technology era." It is not that technology itself no longer matters, he said; but steady advances in chips, disk storage and software mean that the focus is no longer on the technology itself - with its arcane language of processing speeds and gigabytes - but on what people and companies can do with it.

As a result, industry executives and analysts say, the balance of power is shifting away from technology suppliers and toward their corporate customers. At the same time, the use of lower-cost building blocks of computer hardware and software is spreading, making it easier for companies and individuals to share data and work together using industry standards rather than remain dependent on one or two key suppliers.

These trends, they say, point to increased pressure on prices and profits for most technology companies, a good deal for corporate customers and a very tricky time for investors.

This is more than a backlash against the bubble years, a mere pendulum swing in attitudes and practices. The technology itself will still deliver waves of innovation in the future, but the industry that has risen to account for 10 percent of the economy and nearly 60 percent of business capital spending can no longer play by its own rules.

"I don't see a loss of faith in technology, but gravity has been turned back on," said Dick Lampman, the director of research laboratories at Hewlett-Packard Co.

Yet an article published last week in The Harvard Business Review does question corporate America's faith in the value of technology.

Titled "IT Doesn't Matter," it argues that information technology is inevitably headed in the same direction as the railroads, the telegraph, electricity and the internal combustion engine. All of these industrial technologies aged from their boom-time youth to become, in economic terms, ordinary factors of production, or "commodity inputs," the article said.

"From a strategic standpoint, they became invisible; they no longer mattered," wrote Nicholas Carr, editor at large. "That is exactly what is happening to information technology today."

Most corporate executives say there is a lot they can do now with technology to give themselves an edge. Glen Salow, chief information officer of American Express Co., sees the recent trends in the industry as working to his advantage.

First, he said, the hard times in the technology business have increasingly meant that big corporate customers hold the upper hand in their dealings with suppliers. That shift, Salow said, has given him not only more bargaining power on price but also more influence in the development of products and services.

With their new power, customers are also pressing for greater flexibility in how they buy computing resources, including paying only for as much product as they use, as if they were buying electricity.

The widespread use of software standards, Salow said, enables the thousands of internal programmers at American Express to build applications almost as if snapping together Lego blocks, reducing the amount of code that has to be written by hand. A result, he said, is that the software for, say, a new credit card offering or a fraud-detection feature can be built and put in use in about two weeks; five years ago, this might have taken six months.

"It all frees you up to take more gambles because each risk is not so costly and you can move a lot faster," Salow said.

The push toward utility computing, according to Wladawsky-Berger of IBM, fits neatly into his concept of a post-technology era.

"In the last few years," he said, "the underlying components have become so powerful, reliable and inexpensive that you don't have to worry so much about the underlying engine, and you can move up to higher-level concerns."

IBM has moved more and more toward becoming a provider not only of technology but also of business expertise in 17 industries from banking to electronics and transportation.

Each successive wave of computing - from mainframes to minicomputers to personal computers to the Internet - has opened the door to new users and created new problems. Each of those, in turn, must be addressed if the industry is to move ahead. The Internet brought an explosion of computing complexity. And while many dot-coms are gone, Internet technology has spread, is used by most people and has become mainstream within corporations.

Marc Andreessen, a co-founder of Netscape Communications, whose software introduced Web browsing to millions and touched off the Internet boom, is now chairman of Opsware, whose data-center software is intended to tackle the complexity crisis. "At Netscape, we were building all the software components that made this possible and created the problem, and we didn't grasp the implications," he said.

Larry Ellison, the chairman of Oracle Corp., has been one of the most vocal proponents of the view that the technology industry is graying. "Thousands of companies are on life support that just have to die," he said. "Our industry is in the inevitable process of maturing."

But Ellison's concept of a maturing industry is not exactly a listless old age. There will be fewer companies and slower growth, he said, but still plenty of leeway for entrepreneurial creativity.

"There will continue to be very cool new computing technologies," Ellison said. Unlike many industrial technologies the stored-program computer is a general-purpose tool, animated by software, a medium without material constraints. The unrelenting pace of improvement in processing speeds, data storage and miniaturization means the tools get more powerful and smaller, and then people find things to do with them.

And innovation is continuing apace despite the downturn. Advances are evident in a range of technologies: wireless, data center automation, speech recognition, intelligent software, sensors, natural language processing and on and on.

Jim Gray, a computer scientist, has worked in the industry for more than 30 years. His research on databases and transaction processing at IBM and elsewhere won Gray the 1999 A.M. Turing Award, sometimes called the Nobel prize of computer science. "I've seen the 'end' at least twice in my career - only to be surprised by the next wave," Gray observed. "My guess is that this computer thing has just gotten started."

The New York Times

Get Over Yourself - Computerworld

IT Does Matter - Computerworld

USATODAY.com - How IBM, Dell managed to build crushing tech dominance

Very pro-IBM and very untrue...

How IBM, Dell managed to build crushing tech dominance

In sports circles, the argument du jour is whether female golfer Annika Sorenstam should play this week in a PGA tournament against men.

Among technology people, the argument du jour is whether the industry is stuck in its prolonged, depressing slump because information technology - IT for short - has permanently become a mundane, slow-growth business, like electricity, toothpaste or paint.

Some analysts and academics say it has. Tech people say that's ridiculous and get more offended than if you questioned their mothers' decency. This is why tech people don't get invited to parties.

Anyway, it seems that most are missing an intriguing part of the argument. It might explain one of the current mysteries in the technology industry. To wit: Why are Dell Computer and IBM out there kicking booty in the computer business while just about everybody else is sucking wind?

This Dell-IBM thing has become an accepted fact of life in 2003, like the rebirth of movie musicals or the effectiveness of the Atkins diet. Wall Street analysts, most prominently Steve Milunovich of Merrill Lynch, talk of a "bifurcation" of the market into Dell at the low-priced commodity end and IBM at the high end, with every other computer company - Hewlett-Packard, Sun Microsystems, Gateway - caught in a profit-draining no man's land.

"Who makes money? Dell makes money, and IBM makes money," brags Dell President Kevin Rollins. Yet no one has really explained why.

The answer might lie in a controversial article by Nicholas Carr in the May Harvard Business Review, titled "IT Doesn't Matter." Carr doesn't specifically tie his findings to the Dell-IBM split, but the logic involved fits.

To understand his argument, think of IT as cars and companies as teenagers. When I was in high school, hardly any boys had cars. So the ones who did own cars had a huge strategic girl-luring advantage over those who didn't. Those boys were mobile. They could get to every party. They could make out in their cars. My friend Ed had a car that had gaping holes in the floor and belched smoke like an Iraqi oil well fire, and even that was a strategic advantage.

Today, in my neighborhood of the spoiled, every high school boy has a car. So having a car is no longer a strategic advantage. Having a Lexus might give you a bit of an edge over a classmate with a Hyundai, but it's not even close to the gulf between a boy with a car and a boy with no car.

Bottom line: As a strategic advantage for teenage boys, cars no longer matter.

This is exactly what has happened with IT. Carr says that IT used to be a strategic advantage for companies because not every company had it. So Wal-Mart could jump ahead of Kmart, in part, by investing heavily in IT and making better and faster decisions.

But these days, great technology is cheap and plentiful, and every company has its share. So IT doesn't matter because it's no longer a strategic advantage. It's essentially a cost of doing business.

And if that's the case, who wants to spend a lot on IT? It's like phone service or office stationery - you want quality stuff for a low price, in bulk. Who does that better than anybody? Right now it's Dell. And Dell is hotter than just about any technology producer.

But - and this is a big ol' BUT - IT is different from most other products in one big way: The technology keeps changing and improving, often in great leaps.

If a tech company can keep coming out with really high-end, super-cool new technology, it can go to customers and offer something that will give them a strategic advantage over all the other mopes buying the commodity bulk stuff from Dell.

That's the road IBM has taken. It pumps billions of dollars a year into its massive scientific research labs and builds big honkin' machines like T-Rex, which it unveiled earlier this month. T-Rex is three times more powerful than previous commercial mainframes, and it starts at $1 million apiece.

In the market, IBM is increasingly winning the customers willing to take a risk on technology that might bring a strategic advantage, and Dell gets all the rest, who are just trying to keep from getting toasted by competitors.

As Milunovich and other analysts note, H-P, Sun and similar tech companies aren't making products cheaply enough to compete for the keeping-up buyers, and aren't high-end enough to offer a true strategic advantage. Those companies seem to be getting squeezed at both ends.

So IT does matter, and it doesn't. It matters at the high end. In every other part of the market, it doesn't. But if you're a tech company trying to sell into this market, the fact that IT does matter and doesn't matter matters a lot.

Right. Now, can we get back to arguing about golf?

InformationWeek Bob Evans Business Technology IT Doesn't Matter May 12, 2003

Whichever ancient sage who first divined that the gods do not always smile upon us surely knew of which he spoketh. I need to cut my shaggy grass, but it keeps raining; my beloved but bungling Pittsburgh Pirates have lost six in a row; the drain trap below my shower is leaking; and on top of all that comes word that the Harvard Business Review has decided that IT doesn't matter.

Yep, that's what they say in the table of contents, in the headline, and at the top of each page in the lengthy article: "IT Doesn't Matter" (HBR, May 2003, p. 41). And here I was, fussing about leaks and losing streaks when IT was just stopping mattering. Can it be so?

The article is thoughtful and sweeping and quite interesting to read. I'd heartily recommend it. But that doesn't make it either accurate in its conclusions or even properly focused, and that's the problem I have with it. Written by HBR editor-at-large Nicholas Carr, the article is intent on proving the thesis that because IT has become widespread, then it must perforce become a commodity, as happened to other one-time breakthrough and industry-jarring innovations, such as steam engines and railroads, telephones and telegraphs, electric generators and internal-combustion engines. And Carr's unshakable belief in that inevitability leads him to a conclusion that's no doubt provocative, which I think was his primary intent, but also profoundly shortsighted and dangerous.

Now, please allow me a brief digression to the Full Disclosure Department: I will certainly admit that given my position with a publication like InformationWeek, I have a vested interest in the ongoing relevance, growth, and vitality of what I believe Carr is referring to as the IT industry. And I can certainly say that my view of the future that lies ahead for you readers could not possibly be in more extreme opposition to what Carr forecasts in the final paragraph of his article: "IT management should, frankly, become boring. The key to success, for the vast majority of companies, is no longer to seek advantage aggressively but to manage costs and risks meticulously." So, yes, I am a bit subjective on this topic, but I also feel that while the jobs of Randy Mott and Ralph Szygenda and Rob Carter and many tens of thousands of others in top IT management positions will be many things, one thing they most assuredly will not be is boring.

Where the article should have gone, I think, is outside the realm of embedded infrastructure and applications and into some attempts to look at what the future might look like. Instead, it assumes that the futures that befell railroads and steam engines will, inexorably and inevitably, be the future of IT. And I think that's astonishingly shortsighted. Only 10 years ago, how many of you had heard of the World Wide Web? And today, we've all heard of Web services--heard too much and seen too little, some would say--but can any of us really imagine what business will be like when the potential of those new technologies begins to be expressed? Or when global and mobile get more stable, and true collaboration becomes less psychology and more process and software, and the recent focus on internal technology becomes redirected on customer-centered possibilities? If we've learned anything in the last several years, it's that the balance of power in the world of business has tipped to the buyer and they will continue to get more demanding, more fickle, more selective, and more willing to spend their bucks elsewhere unless businesses mold their efforts around those customers.

Will that be done with people? With paper? With singing telegrams? I don't think so; the key will continue to be technology.

[May 15, 2003] John Hagel Viewpoint- IT Does Matter strategy @ the intersection of business & technology

Weak defense; questionable propositions. Good idea about importance of architecture, though.

[May 16, 2003] TECHNOLOGY; Has Technology Lost Its 'Special' Status - New York Times

Most industrial technologies, Mr. Barrett explained, were used to make or move materials. By contrast, he added, information technology puts value into products and services -- from advertisements and legal briefs to Hollywood special effects and biological simulations -- which are intellectual goods in one form or another.

"I.T. is the vehicle by which you turn ideas and content into intellectual property products," Mr. Barrett said. "As a nation and as a company, you either upgrade your I.T. infrastructure or you won't be competitive."

IT Does Matter

IT has more to offer than mere supervision. It could become a combination of tech adviser and testing lab for many of these products. Or it could morph away from building pure technology systems that run a business and toward building technology into the services and products the business sells. You can't get more aligned with business goals than that!

General Motors is off to a good start here. Corporate CIO Ralph Szygenda years ago teamed each of his group CIOs with specific department heads. He wanted his CIOs to understand the business they were serving and to be on the front lines, ready and able to partner on creating solutions to business problems.
And that is just one model. Unlike Carr, I don't think IT has to dissolve into an entity that is strictly focused on maintenance, risk avoidance and cost-cutting. Astute CIOs are already practicing this in some form or another.

[May 28, 2003] The end of IT as we know it? by Dan Farber, Tech Update

Farber was and is extremely weak...

An article appearing in the May edition of the Harvard Business Review by Nicholas Carr entitled "IT Doesn't Matter" stirred up a great deal of dialog in recent weeks. His title, implying that IT isn't strategically important, is aimed more at sparking controversy--which it did--than rational discourse. In fact, the title is the only reference to the notion that "IT doesn't matter," but he did get Microsoft Chairman Bill Gates and Intel CEO Craig Barrett to launch counterarguments to his title thesis.

Carr's real argument is that IT doesn't matter any more than electricity. It's essential for survival in business, but it's not a strategic advantage. IT, like electricity, has become a commodity, and should be viewed and managed as such.

The author posits that IT is following the same trajectory as the steam engine, railroad, telegraph, and electricity. Each seminal invention went through a disruptive period of rapid build out and industry transformation, affording the early adopters a competitive edge and economic advantage. Over time, as the technology became more ubiquitous, innovation and proprietary schemes gave way to the commodity factors of production, Carr says. Today, according to Carr, IT is on track to become simply the cost of doing business, an invisible infrastructure layer, and about as exciting and complex as plugging an electrical adapter into the wall.

Carr writes that "while no one can say precisely when the build out of infrastructural technology has concluded, there are many signs that the IT build out is much closer to its end than its beginning." He bases his assessment on the affordability of IT, vendors positioning themselves as utility suppliers, an overabundance of fiber-optic capacity, and IT capabilities outstripping most business needs.

I agree that IT infrastructure as we know it today is heading toward a kind of commoditization, but the road is not well paved.

Industry consolidation will continue, with fewer players who are increasingly less differentiated in their product offerings. Most IT executives are more pragmatic today, looking at technology as a way to lower costs and increase efficiency, not to reinvent their businesses. "It's getting much harder to achieve a competitive advantage through an IT investment, but it is getting much easier to put your business at a cost disadvantage," Carr writes.

Embedded in the universal call to reduce the complexity of IT is a move toward more standards, such as Web services, and a more level playing field in terms of technology. Prepackaged server clusters, software suites, outsourcing, and the push toward on-demand computing by the industry heavyweights all signal a focus on lowering the cost to deploy and manage technology. New categories or niches of software and hardware will continue to spring up that bring proprietary advantages to the vendors and customers, but they will mostly be short-lived. Differentiation among companies delivering IT products and services will have more to do with support, security, availability, integration, and the trust factor. Technology as a competitive weapon is more about execution and competency than a secret sauce.

Does that make IT less visible, more of a commodity? Yes and no. It's a commodity if the technology itself is built out of fairly standard components that don't vary greatly among vendors or provide truly unique advantages. However, the problem is that most software vendors have not figured out yet how to build reliable, easy-to-configure-and-use software, and IT organizations are often dysfunctional. While IT executives wish that building an IT solution were as easy as plugging servers, software, and end-user devices into a network grid, that's not the case. Carr's notion of the commoditization and homogenization of infrastructure gives too much credit to IT as a mature industry with an established base of technology and best practices that will evolve linearly.

Although IT is becoming ubiquitous, especially via the Internet and the declining pricing for increasingly powerful technology, it is also messed up. The Internet may be the equivalent of the U.S. standard railroad gauge, but delivering value along that track is often elusive. IT is absolutely a strategic and competitive advantage to companies that can implement and manage it effectively, even if the constituent parts are more universal and provide no distinct advantage themselves.

"IT management should, frankly, become boring," Carr writes. "The key to success, for the vast majority of companies, is no longer to seek advantage aggressively but to manage costs and risks meticulously." While boring isn't the word I would use (how about pragmatic?), what Carr articulates is good common sense. Investments in Web services, grid computing, self-healing systems, pay-as-you-go services, embedded business processes and other innovations designed to improve reliability and reduce the cost to deliver IT should be done with caution.

Perhaps 10 years from now, when the complexities of today's IT have been overcome, a true era of commodity computing will dawn. But tomorrow's IT will present new, possibly unforeseen branches of technology, and start a new cycle of creativity and innovation.

We know a lot about automating processes with computers, for example, but we are just at the beginning of automating computing itself, which is an essential next step in IT evolution. In that scenario, several layers of IT can be viewed as a commodity -- a common foundation upon which new, strategically important technology innovations will arise.

Recommended Links

Softpanorama Recommended

Top articles

[Mar 29, 2020] Why Didn't We Test Our Trade's 'Antifragility' Before COVID-19 by Gene Callahan and Joe Norman Published on Mar 28, 2020 | www.theamericanconservative.com

Sites

Summary of the Amazon EC2 and Amazon RDS Service Disruption in the US East Region

Wikipedia

Data Center Practices

Eighty percent of outages are allegedly the result of people or process issues. An intuitive and informative naming scheme can define and highlight the composition and function of components within a service infrastructure. The article looks at the merits of such a naming scheme and includes an example system for servers, storage, networks and cables that may help reduce operational error.
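
As a rough sketch of the idea (this is a hypothetical convention for illustration only, not the scheme from the cited article), a hostname can encode site, role, environment and index, so that both operators and inventory scripts can tell at a glance what a component does and can reject names that do not fit the pattern:

    import re

    # Hypothetical naming convention (illustration only):
    #   <site>-<role>-<env>-<index>, e.g. "ny1-db-prd-03"
    #   = New York site 1, database server, production, unit 03.
    HOST_PATTERN = re.compile(
        r"^(?P<site>[a-z]{2}\d)-"
        r"(?P<role>web|db|app|stor|net)-"
        r"(?P<env>prd|qa|dev)-"
        r"(?P<index>\d{2})$"
    )

    def parse_hostname(name):
        """Return the fields encoded in a hostname, or raise ValueError."""
        m = HOST_PATTERN.match(name)
        if not m:
            raise ValueError("hostname %r does not follow the naming scheme" % name)
        return m.groupdict()

    if __name__ == "__main__":
        print(parse_hostname("ny1-db-prd-03"))
        # prints: {'site': 'ny1', 'role': 'db', 'env': 'prd', 'index': '03'}

The specific fields matter less than the fact that the scheme is machine-parseable, so provisioning and monitoring tools can flag mis-named servers, storage or network gear before the ambiguity causes an operational error.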

System Management

Resource Management
Data Management

Performance

Naming and Directory Services

PC Interoperability

High Availability

Rapid Recovery Techniques

Security

Operating Environment

Service Provider

Cluster

Nicholas Carr's "in the cloud" utilities dreams:

Interviews

The Ignorance of Crowds strategy+business Issue 47 | Summer 2007

Fallacies of Distributed Computing Explained

Podcast: Is Carr right? Does IT not matter? Gartner attendees respond (Between the Lines, ZDNet.com)

Bill Gates' Web Site - Speech Transcript, CEO Summit 2003

Nicholas G Carr webpage

Nicholas Carr videos

Tech Policy Seminar Carr Week 1

Links from Wikipedia articles

Unlimited bandwidth myth




The Last but not Least: Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand. ~ Archibald Putt, Ph.D.


Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright of the original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. This site is perfectly usable without JavaScript.

Last modified: March 29, 2020