It looks like the SaaS approach is just one instance of Service Oriented Architecture (SOA). The latter emphasizes the grouping of software functionality around business processes and the packaging of software as network services. SOA also presupposes the network as the channel for the exchange of data between applications, and it relies on standard protocols that provide loose coupling of services with a particular OS and implementation language. Here is how Wikipedia defines SOA:
SOA separates functions into distinct units, or services, which are made accessible over a network in order that they can be combined and reused in the production of business applications. These services communicate with each other by passing data from one service to another, or by coordinating an activity between two or more services. SOA concepts are often seen as built upon, and evolving from, older concepts of distributed computing and modular programming.
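The loose coupling this definition emphasizes can be illustrated with a toy network service. The sketch below is purely illustrative (the "tax" service, its URL layout, and the 20% rate are all invented for the example); Python's standard library stands in for whatever stack a real service would use. The point is that the client depends only on the URL and the JSON contract, not on the service's OS or implementation language:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A toy "tax calculation" service exposed over HTTP/JSON.
class TaxService(BaseHTTPRequestHandler):
    def do_GET(self):
        amount = float(self.path.rsplit("/", 1)[-1])
        body = json.dumps({"amount": amount,
                           "tax": round(amount * 0.2, 2)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), TaxService)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "business application" composes the service over the network.
port = server.server_address[1]
reply = json.loads(urlopen(f"http://127.0.0.1:{port}/tax/100").read())
print(reply["tax"])  # 20.0
server.shutdown()
```

Any client on any platform that can issue an HTTP GET and parse JSON could consume this service unchanged, which is the loose coupling SOA is after.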
I would also say that the SOA concept is connected with key concepts of Unix, especially pipes and coroutines. SaaS represents just one limited architecture for implementing SOA, and while "in the cloud" service providers are good for certain production tasks, they are bad and/or too costly for others. Webmail providers are the most prominent example of a successful SaaS application; mail filtering (for example Google Postini) and teleconferencing are two other examples of successful delivery of functionality over the Internet. But they are not so good for complex enterprise applications (for example SAP R/3), and they are questionable for file and document sharing.
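The kinship with Unix pipes can be made concrete. In the sketch below (the stages themselves are invented for illustration) each stage is a small generator that knows nothing about its neighbors except the stream flowing between them, much as processes in a shell pipeline are coupled only through stdin and stdout:

```python
# Each stage is a coroutine-like generator, loosely coupled to the
# others only through the stream it consumes -- the software
# analogue of a Unix pipe.
def numbers():
    yield from range(10)

def only_even(stream):
    for n in stream:
        if n % 2 == 0:
            yield n

def squared(stream):
    for n in stream:
        yield n * n

# Roughly the spirit of: seq 0 9 | filter-even | square
result = list(squared(only_even(numbers())))
print(result)  # [0, 4, 16, 36, 64]
```

Any stage can be replaced independently as long as it honors the stream contract, which is exactly the loose coupling SOA promises at network scale.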
There are almost a dozen competing technologies, which can be used in isolation or in certain combinations, that provide most of the positive features of SaaS while avoiding most of its problems. I would suggest the following as, in my opinion, the most promising alternatives to SaaS:
That means that "in the cloud" software services are just one of several emerging technical trends, and the jury is still out on how much market share each of them can grab. Application streaming looks like a direct and increasingly dangerous competitor to the "in the cloud" software services model.
In many cases existing "legacy" IT solutions are working very well and are cost efficient. It can be a system that was installed five years ago, or even a system that was installed 15 years ago (some publishing companies are still using PDP-based systems to run the business), but such systems are by and large running "by themselves", without need for intervention for months if not years. All the painful deployment problems were ironed out a long time ago, users know the software well, and everything is as smooth as it should be. Why rock the boat and pay greedy service providers per-user fees for a solution that you already have and have paid for? It would be interesting if Carr tried to explain how abandoning such systems can be cost efficient for the enterprise...
IBM promotes this approach under the name "autonomic computing". In general, the level of automation strongly influences how efficiently IT is running. This factor alone might have a greater influence on the future shape of IT than all the "in the cloud" buzz.
The IBM Tivoli suite of applications (first of all TEC, TCM, and IBM Tivoli Workload Scheduler) is one popular approach to the automation of datacenters. You can read more about this technology at the IBM website as well as the Softpanorama Tivoli pages. This is not a new approach, as Tivoli is more than 10 years old. Other vendors propose similar solutions, known as Enterprise System Management (ESM). The other prominent ESM suites are HP OpenView and BMC Software Patrol Service Impact Manager (formerly MasterCell 3.0).
The key here is to view a set of servers not as separate entities, but as a single distributed system with a common control center. This approach can be combined with several other alternatives to "in the cloud" computing, such as "datacenter in a box", virtual appliances, and application streaming.
Appliances are specialized computers with a stripped-down OS specifically tuned to perform only the functions required for a particular application (or set of applications). Support is remote and performed by the vendor via WAN. In a sense they represent an approach which can be called a "local cloud".
The most popular types of appliances are firewalls, proxy servers, and IDS sensors, but mail server appliances are also popular. They are real and battle tested. Some versions of Microsoft OS (Windows Server, Small Business Edition) can be considered appliances in disguise.
In the case of application streaming, applications are located on a remote computer and are accessed "as needed" using network protocols ("on the fly" installation). The key advantage of application streaming is that you use local computing power for running the application, not a remote server. That removes the problem of latency in transmitting the video stream generated by the GUI interface on the remote server (where the application is running) to the client.
Also, modern laptops have tremendous computing power that is difficult (and very expensive) to match in a remote server park. Once you launch the application on the client (from a shortcut), the remote server streams (like streaming video or audio) the necessary application files to your PC and the application launches. This is done just once; after that the application works as if it were local. Also, only the required files are sent (so if you are launching Excel you do NOT get the libraries that are shared with MS Word if it is already installed). But each time you launch an application, the current version is verified; in case of upgrades or patches, launching the new version is transparent.
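The launch-time logic just described can be sketched in a few lines. Everything here is hypothetical (the file names, version numbers, and the in-memory stand-ins for the vendor's store and the local cache); it only illustrates the pattern: stream a file the first time it is needed, run from the local copy afterwards, and re-stream only when the vendor publishes a newer version:

```python
# Hypothetical remote store: file name -> (version, content).
REMOTE = {
    "excel.exe": (2, b"spreadsheet engine"),
    "chart.dll": (2, b"charting library"),
}

local_cache = {}  # name -> (version, content), empty before first launch

def launch(files_needed):
    """Stream only missing or outdated files; run the rest locally."""
    actions = []
    for name in files_needed:
        remote_version, content = REMOTE[name]
        cached = local_cache.get(name)
        if cached is None or cached[0] < remote_version:
            local_cache[name] = (remote_version, content)  # "stream" it
            actions.append(f"streamed {name} v{remote_version}")
        else:
            actions.append(f"{name} cached, runs locally")
    return actions

first = launch(["excel.exe", "chart.dll"])    # first launch: everything streams
second = launch(["excel.exe", "chart.dll"])   # second launch: all local
REMOTE["excel.exe"] = (3, b"patched engine")  # vendor ships an upgrade
third = launch(["excel.exe"])                 # only the upgraded file re-streams
print(first, second, third)
```

Note that after the first launch the client does all the computing; the network is touched only for the version check and for upgraded files.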
In a way, the Microsoft patching system in automatic mode can be considered a rudimentary application streaming framework, so this approach is not as exotic as it sounds. It implements some neat tricks: downloading the new version in the background while you are working with the old version, and upgrading during the next reboot.
Virtualization promises more agile and more efficient local datacenters and undercuts the "in the cloud" software services model in several ways. First of all, it permits packaging a set of key enterprise applications as "virtual appliances". The latter, like streamed applications, run locally, store data locally, are cheaper, have better response time, and are more maintainable. This looks to me like a more promising technical approach for complex sets of applications with intensive I/O requirements. For example, you can deliver a LAMP stack virtual appliance (Linux-Apache-MySQL-PHP) and use it on a local server for running your LAMP applications (for example a helpdesk), enjoying the same level of quality and sophistication of packaging and tuning as in the case of remote software providers. But you do not depend on the WAN, as users connect to it over the LAN, which guarantees fast response time. And your data are stored locally (but, if you wish, they can be backed up remotely to Amazon or another remote storage provider).

With the virtual appliance model the underlying infrastructure is still owned by the end-user business, but they pay for complete delivery of the actual application in a highly compartmentalized fashion, which is a much simpler and more efficient way of working. It is not hard, then, to understand where the demand for virtual appliances comes from.
You might ask why virtual appliances have been deployed widely only during the last three years or so. The answer is that virtualization became mature and commoditized only with the development of Intel 5160 CPUs. At that point even mid-range servers became able to consolidate several old servers without breaking much of a sweat. Virtual appliances can be very quickly provisioned to meet customer demand, and automation tools can be used to reduce the management headache and simplify the process. Multiple vendors give businesses the possibility to select the offering which provides real value.
The other trend is the emergence of a higher level of standardization of datacenters (the "Cloud in a Box" trend). It permits cheap prepackaged local datacenters to be installed everywhere. Among the examples of this trend are standard shipping-container-based datacenters, which are now sold by Sun and soon will be sold by Microsoft. They already contain typical services like DNS, mail, file sharing, etc., preconfigured. For a fixed cost an organization gets a set of servers capable of serving a mid-size branch or plant. In this case the organization can avoid paying monthly "per user" fees -- a typical cost recovery model of software service providers. It also can be combined with the previous two models: it is easy to stream both applications and virtual appliances to the local datacenter from a central location. For a small organization such a datacenter can now be preconfigured in a couple of servers using Xen or VMware, plus the necessary routers and switches, and shipped in a small rack. This spring IBM started offering BladeCenter servers with Power and x86 processors, and service management software.
Liquid cooling, once used in mainframes and supercomputers, may be returning to datacenters as an alternative to air conditioning. Solutions include modular liquid-cooling units placed between racks of servers; a new door at the back of a server rack with tubes in which chilled water circulates; and server racks with integrated power supply (three-phase converters to DC, DC distribution to servers, and liquid cooling). This permits a significant increase in the density of servers, be they blades or regular multicore servers.
I would like to stress that the power and versatility of modern laptops is a factor that should not be underestimated. It completely invalidates Carr's cloudy dream of users voluntarily switching to the network terminal model inherent in centralized software service provision (BTW, mainframe terminals and, especially, "glass wall datacenters" were passionately hated by users). Such a solution can have mass appeal only in very limited cases (webmail). I think that users will fight tooth and nail for the preservation of the level of autonomy provided by modern laptops. Moreover, in no way will users agree to the sub-standard response time and limited feature set of "in the cloud" applications, as problems with Google Apps adoption demonstrated.
While Google Apps is an interesting project, it can serve as a litmus test for the difficulties of replacing "installed" applications with "in the cloud" applications. First of all, the functionality is really, really limited. At the same time Google has spent a lot of money and effort creating it, but never got any significant traction and/or a sizable return on investment. After several years of existence this product did not even match the functionality of OpenOffice. To increase penetration, Google recently started licensing it to Salesforce and other firms. That means that the whole idea might be flawed: if even such an extremely powerful organization as Google, with its highly qualified staff and the huge server power of its datacenters, cannot create an application suite that can compete with applications preinstalled on a laptop, then the model cannot compete with the convenience and speed of running applications locally on a modern laptop.
In the case of corporate editions, price is also an issue, and Google Apps in comparison with Office Professional ($50 per user per year vs. $220 for Microsoft Office Professional) does not look like a bargain if we assume a five-year life span for MS Office.

The same situation exists for home users: price-wise, Microsoft Office can now be classified as shareware (in Microsoft Office Home and Student 2007, which includes Excel, PowerPoint, Word, and OneNote, the cost is $25 per application). So for home users Google needs to provide Google Apps for free, which, taking into account the amount of design effort and the complexity of achieving compatibility, is not a very good way of investing available cash. Please note that Microsoft can at any time add the ability to stream Office applications to laptops and put "in the cloud" Office-alternative software service providers in a really difficult position: remote servers need to provide the same quality of interface and amount of computing power per user as the user enjoys on a modern laptop. That also suggests the existence of some fundamental limitations of the "in the cloud" approach for this particular application domain. And this is not a unique case: SAP has problems with moving SAP R/3 to the cloud too, and recently decided to scale back its efforts in this direction.
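The corporate price comparison above is simple arithmetic; the sketch below just makes it explicit, using the figures quoted in the text (the five-year life span is an assumption, and per-user support and upgrade costs are ignored on both sides):

```python
# Figures quoted in the text; the five-year life span is an assumption.
google_apps_per_user_per_year = 50   # $ per user per year, subscription
office_professional_one_time = 220   # $ one-time license
life_span_years = 5

# Total subscription cost over the assumed life span of the license.
google_total = google_apps_per_user_per_year * life_span_years
# Amortized per-year cost of the one-time license.
office_per_year = office_professional_one_time / life_span_years

print(f"Google Apps over {life_span_years} years: ${google_total}")
print(f"Office Professional amortized: ${office_per_year:.0f} per year")
```

Under these assumptions the subscription costs $250 against a $220 one-time license, which is the sense in which it "does not look like a bargain".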
All in all, the computing power of modern dual-core 2-3GHz laptops with 2-4GB of memory and 100-200GB hard drives represents a serious challenge for "in the cloud" software service providers. It makes it difficult for them to attract individual users' money outside advertising-based or other indirect models. It is even more difficult for them "to shake corporate money loose": corporate users value the independence of applications installed locally on the laptop and the ability to store data locally. Not everybody wants to share their latest business plans with Google.
A generic, non-specialized supplier of virtual images and networking capabilities like Amazon has advantages over its more specialized "in the cloud" brothers, which try to supply ERP, HR, CRM, or supply chain management services. Using different applications within a single hardware cloud is not unlike using a remote datacenter, a technology most large companies have mastered to perfection, and such a limitation to the hardware cloud might help solve performance problems. That means that generic hardware virtualization server grids might be more viable. An ecosystem cloud also has the added advantage of better integration across the different providers that are part of the same ecosystem.
Just as today's enterprise software falls into distinct ecosystems (Microsoft, Oracle, SAP), cloud computing may well organize itself in a similar fashion, with a number of hardware clouds and a set of complementary (non-competing) SaaS providers and a database-as-a-service provider.
For now, hardware cloud computing offers most of the potential benefits for both small and large companies: lower IT costs, faster deployment of new IT capabilities, and an elastic IT infrastructure that can expand or contract as needed. Testing and development labs are natural candidates for the move into a hardware cloud, as servers in those areas are usually used only sporadically.
Hardware clouds also permit organizations to take a tactical approach: moving non-strategic applications to a hardware cloud if there is a cost advantage in doing so, and experimenting with the technology to gain valuable experience and insight into how to take advantage of this emerging phenomenon.
However, it is possible to create private "gated" clouds that, like gated communities, would serve only selected members. These private clouds use a dedicated private network that connects their members and solves the problem of "bandwidth communism". At the same time, unlike a single company's data center, a private cloud can consolidate the hardware and software needs of multiple companies, providing load balancing and better economies of scale.
The growing trend toward telecommuting creates real demand for enterprise private "gated" clouds (a large enterprise usually runs multiple datacenters), and those services can be expanded to selected clients (for example, for a paint company it is natural to provide some additional IT services for body shops, which already belong to its private network).
One of the most promising approaches, used mainly by Microsoft, is to provide symmetry between enterprise software in the cloud and on the client. This is actually already the case with most corporate mail systems (a Web portal provides access from home and while traveling, while a client on the laptop is used in the office).
Microsoft is developing a similar approach for Exchange server, which permits flexibly mixing different environments (branch offices with and without IT staff).
Several companies have already lined up to use Microsoft's "software and services" email model. Coca-Cola plans to subscribe to 30K seats of Exchange and SharePoint services. Other early adopters include Autodesk, Blockbuster, Energizer, and Ingersoll-Rand. See DEMYSTIFYING THE CLOUD for details.
We already mentioned Microsoft's Live Mesh, which complements this approach.
Copyright © 1996-2020 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.
Last modified: March 12, 2019