Major Types of Virtualization
Major vendors support
Not too much zeal.
— Charles Maurice de Talleyrand, advice to young diplomats
In the traditional sense that we will use here, virtualization is the simulation of the hardware upon which other software runs. This simulated hardware environment is called a virtual machine (VM). The classic form of virtualization, known as operating system virtualization, provides the ability to run multiple instances of an OS on the same physical computer under the direction of a special layer of software called the hypervisor. There are several forms of virtualization, distinguished primarily by the hypervisor architecture.
Each such virtual instance (or guest) OS thinks that it is running on real hardware with full access to the address space, but in reality it operates in a separate VM container which maps this address space into a segment of the address space of the physical computer. This operation is called address translation. A guest OS can be unmodified (so-called heavy-weight virtualization) or specifically recompiled for the hypervisor API (para-virtualization). In light-weight virtualization a single OS instance presents itself as multiple personalities (called jails or zones), allowing a high level of isolation of applications from each other at very low overhead.
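The address-translation idea above can be sketched in a few lines. The class and addresses below are purely illustrative, not any real hypervisor's code:

```python
# Illustrative sketch: a hypervisor maps a guest "physical" address into
# the host address space by adding a per-VM base offset and checking the
# access stays inside the guest's allotted segment.

class VMContainer:
    def __init__(self, host_base, size):
        self.host_base = host_base   # start of this VM's segment in host memory
        self.size = size             # size of the segment in bytes

    def translate(self, guest_addr):
        """Translate a zero-based guest address to a host address."""
        if not 0 <= guest_addr < self.size:
            raise MemoryError("guest address outside its VM container")
        return self.host_base + guest_addr

vm1 = VMContainer(host_base=0x4000_0000, size=0x1000_0000)  # 256MB segment
vm2 = VMContainer(host_base=0x5000_0000, size=0x1000_0000)

# Both guests "see" address 0, but they land in different host segments.
print(hex(vm1.translate(0x0)))  # 0x40000000
print(hex(vm2.translate(0x0)))  # 0x50000000
```

The "zero address relocation" register discussed later in this article does essentially the same addition in hardware, which is why it makes heavy-weight virtualization cheaper.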
The latest Holy Grail of enterprise IT is server consolidation. In an attempt to lower the costs of IT infrastructure, many companies are looking at server virtualization. The idea is to consolidate small and lightly loaded servers onto fewer, larger, and more heavily loaded physical servers. This can bring a whole new set of complications and new risks. There is no free lunch, and you need to pay for the additional complexity, usually with stability. As with junk bonds, if overdone, such an investment can lead to losses instead of gains. But in moderation this new trend, which can be called the conversion of the IT environment into a set of virtual machines, can be quite beneficial and opens some additional, unforeseen avenues of savings. Other things being equal, virtualization belongs to OS vendors, and using virtualization provided by the vendor is safer than a third-party virtualization solution. That means that from a 1000-foot distance we would suggest that Windows Server 2008 might eventually be preferable to VMware for virtualization of Windows servers.
Also, if done intelligently and without too much zeal, virtualization can probably shrink the number of servers in a typical datacenter by 30-50%, which also leads to some modest maintenance cost savings as well as electricity and air-conditioning savings. Low-end servers are extremely inefficient from the point of view of electricity consumption and add considerably to air-conditioning costs, as their power supplies are less efficient. So replacing two low-end servers with one larger server running two virtual partitions is a very promising avenue of datacenter server consolidation. IBM is a leader in this area, and its Power servers come preconfigured to run several paravirtualized instances of AIX or Linux.
Blades can also be considered a more cost- and energy-efficient alternative to low-end servers, and as we will discuss later, blades are a pretty attractive alternative to virtualization of low-end servers.
Saving on hardware, which motivates many virtualization efforts, is a questionable idea, as low-end servers represent the most competitive segment of the server market, with profit margins squeezed to a minimum; margins are generally much larger on mid-range and high-end servers. In other words, margins on midrange and high-end servers work against virtualization. Still, with recent Intel six-core CPUs and fast memory (1600 MHz), some savings might eventually be squeezed out. A fully configured server with two four-core CPUs and, say, 32GB of RAM costs slightly less than two servers with one four-core CPU and 16GB of RAM each.
At the same time, heavy reliance on virtualized servers for production applications, as well as the task of managing and provisioning them, are fairly new areas in the "brave new" virtualized IT world, and both require a higher level of skills than "business as usual" plus special software solutions. Both add to costs. Also, when virtualization is expensive, as is the case with VMware, cost benefits can be realized only with oversubscription.
Virtualization increases the importance of Tivoli and other ESM applications. Virtualization also dramatically influences configuration management, capacity management, provisioning, patch management, backups, and software licensing. It inherently stimulates adoption of open source software, especially scripting-based solutions. It also opens a lot of new possibilities for saving time on system administration and saving electricity, and makes possible some extremely impressive (albeit not yet fully practical) feats like dynamic migration of a virtual instance from one (more loaded) physical server to another (less loaded). This is often called the "factory" approach to datacenter restructuring.
We will call "Dual host virtualization" the scenario when one physical server hosts just two guest OSes. Exactly two, no more, no less.
If applied to all servers in the datacenter, this approach guarantees a 50% reduction in the number of physical servers. While margins on low-end servers generally work against saving on hardware, in dual host virtualization some savings might be squeezed out. For example, a fully configured Intel server with two four-core CPUs and, say, 32GB of RAM costs less than two servers with one four-core CPU and 16GB of RAM each.
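A back-of-the-envelope check of this hardware argument; the prices below are illustrative assumptions, not vendor quotes:

```python
# Dual host virtualization: one 2-socket 32GB box vs two 1-socket 16GB boxes.
# All prices are made-up round numbers for illustration only.

consolidated = 6500          # one 2U server: two quad-core CPUs, 32GB RAM
separate     = 2 * 3600      # two 1U servers: one quad-core CPU, 16GB RAM each

savings = separate - consolidated
print(f"hardware savings per consolidated pair: ${savings}")  # $700
```

The point is only that the bigger box usually costs somewhat less than two small ones; the exact delta depends entirely on the vendor and the moment in the price cycle.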
A larger number of applications on a single server is possible, but more tricky: such a virtual server needs careful planning, as it faces memory and CPU bottlenecks that are especially painful if the "rush hours" are the same for both applications. If the applications have some synergy and peak at different times of the day, then one 2U server with, say, two quad-core CPUs and 32GB of memory split equally between two partitions can be even more efficient than two separate servers with one quad-core CPU and 16GB of memory each, assuming equal memory and CPU speeds.
Dual host virtualization works well on all types of enterprise servers: Intel servers, IBM Power servers, HP servers and Sun/Oracle servers (both Intel and UltraSparc based).
If we are talking about the Intel platform, Xen or Microsoft VM are probably the only realistic options for dual host virtualization; VMware is way too expensive. Using Xen and Linux you can squeeze two virtual servers previously running on individual 1U servers into a single 2U server and get a 30-50% reduction in the cost of both hardware and software maintenance. The latter is approximately $1K per year per server (virtual instances are free under SUSE and Red Hat). There are also some marginal savings in electricity and air-conditioning. Low-end servers have small and usually less efficient power supplies, and using one 2U server instead of two 1U servers leads to almost 30-40% savings in consumed energy (higher savings are possible if the 2U server uses a single CPU with, say, four or eight cores).
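For concreteness, a paravirtualized Xen guest on such a 2U Linux host is described by a small configuration file. The sketch below is hypothetical (file name, volume paths, and sizes are made up) and shows the general shape of a classic Xen domU config:

```
# /etc/xen/web1.cfg -- hypothetical paravirtualized guest, one of two on the host
name    = "web1"
memory  = 4096                          # MB; half the host's RAM for each guest
vcpus   = 2
kernel  = "/boot/vmlinuz-xen"           # Xen-aware guest kernel
ramdisk = "/boot/initrd-xen"
disk    = ["phy:/dev/vg0/web1,xvda,w"]  # local LVM volume; no SAN needed for 2 guests
vif     = ["bridge=br0"]
root    = "/dev/xvda ro"
```

With two such files the host carries exactly two guests, matching the dual host scenario above.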
If you go beyond the dual host virtualization outlined above, savings on hardware are more difficult to achieve, as low-end Intel servers represent the most competitive segment of the Intel server market, with profit margins squeezed to a minimum; margins are generally much larger on mid-range and high-end Intel servers. The same is true for other architectures as well. In other words, vendor margins on midrange and high-end servers work against virtualization. This is especially true of HP, which overcharges customers for midrange servers by a tremendous margin while providing mediocre servers that are less suitable for running Linux than servers from Dell.
Virtualization is the simulation of the software and/or hardware upon which guest operating systems run. This simulated environment is called a virtual machine (VM). Each instance of an OS and its applications runs in a separate VM called a guest operating system. Those VMs are managed by the hypervisor. There are several forms of virtualization, distinguished by the architecture of the hypervisor.
We can distinguish the following five different types of virtualization:
This is hardware domain-based virtualization that is used only on high-end servers. Domains can, essentially, be called "blades with common memory and I/O devices". Those "blades on steroids" are probably the closest thing to getting more power from a single server without the related sacrifices in CPU, memory access and I/O speed that are typical for all other virtualization solutions. Of course there is no free lunch, and you need to pay for such luxury. Sun is the most prominent vendor of such servers (mainframe-class servers like the Sun Fire 15K).
A dynamic system domain (DSD) on Sun Fire 15K is an independent environment, a subset of a server, that is capable of running a unique version of firmware and a unique version of the Solaris operating environment. Each domain is insulated from the other domains. Continued operation of a domain is not affected by any software failures in other domains nor by most hardware failures in any other domain. The Sun Fire 15K system allows up to 18 domains to be configured.
A domain configuration unit (DCU) is a unit of hardware that can be assigned to a single domain; DCUs are the hardware components from which domains are constructed. DCUs that are not assigned to any domain are said to be in no-domain. There are several types of DCU: CPU/Memory board, I/O assembly, etc. Sun Fire 15K hardware requires the presence of at least one board containing CPUs and memory, plus at least one of the I/O board types, in each configured domain. Typically those servers are NUMA-based: access to the memory of other domains is slower than to local memory.
By heavy-weight virtualization we will understand full hardware virtualization as exemplified by VMware. CPU vendors now pay close attention to this type of virtualization, as they can no longer increase CPU frequency and are forced onto the path of increasing the number of cores. Intel's latest CPUs, now dominant in the server space, are a classic example of this trend. With eight- and ten-core CPUs available, it is clear that Intel is putting money on the virtualization trend. IBM POWER5/POWER6 and Sun UltraSPARC T1/T2/T3 are examples among RISC CPUs.
All new Intel CPUs are "virtualization-friendly" and, with the exception of the cheapest models, contain instructions and hardware capabilities that make heavy-weight virtualization more efficient. First of all this is related to the capability of "zero address relocation": the availability of a special register which is added to each address calculation by regular instructions and thus provides the illusion of multiple "zero addresses" to the programs.
VMware is the most popular representative of this approach to hypervisor design, and recently it was greatly helped by Intel and AMD, who incorporated virtualization extensions in their CPUs. VMware started to gain popularity before the latest Intel CPUs with virtualization instruction set extensions and demonstrated that it is possible to implement this approach reasonably efficiently even without hardware support. VMware officially supports a dozen different types of guests: it can run Linux (Red Hat and SUSE), Solaris and Windows as virtual instances (guests) on one physical server. 32-bit SUSE can be run in paravirtualized mode on VMware. Generally, running Linux under VMware is a big mistake, as better solutions exist.
Only people who completely fail to understand virtualization technology run Linux (or any other open source OS) on VMware. Unfortunately this includes some major corporations.
The industry consensus is that VMware's solution is overpriced. Please ignore hogwash like the following VMware PR:
Horschman countered the 'high pricing' claim saying "Virtualization customers should focus on cost per VM more than upfront license costs when choosing a hypervisor. VMware Infrastructure's exclusive ability to overcommit memory gives it an advantage in cost per VM the others can't match." And he adds, "Our rivals are simply trying to compensate for limitations in their products with realistic pricing."
This overcommitting of memory is a standard feature related to the presence of a virtual memory subsystem in the hypervisor and was first implemented by IBM VM/CMS in the early 1970s. So much for new technology. All those attempts to run dozens of guests on a server with multiple cores (and in mid-2011 you could get an 80-core server -- the HP DL980 -- for less than $60K) are more a result of the incompetence of typical IT brass than a technological breakthrough. Of course, the number of servers that simply circulate air in a typical datacenter is substantial, and for them it is an OK solution (if those are Windows servers), but this sad situation has nothing to do with progress in virtualization technology.
No matter how much you can share memory (and overcommitment is just a new term for what IBM VM has done since 1972), you can't bypass the limitation of a single channel from CPU to memory, unless this is a NUMA server. The more guests are running, the more this channel is stressed, and running dozens of instances is possible mainly in situations when they are doing nothing or close to nothing ("circulating air" in corporate IT jargon). That happens (unpopular, unused corporate web servers are one typical example), but even for web servers paravirtualization and zones are much better solutions.
Even setting aside the questionable idea of running an open source OS under VMware, and assuming the same efficiency as multiple standalone 1U servers, VMware is not cost efficient unless you can squeeze more than four guests per server. And more than four guests is possible only with servers that are doing nothing or close to nothing, because if each guest is equally loaded then each of them can use only 33% or less of the memory bandwidth of the server (which means the memory channel for a guest operates at 333MHz or less, assuming the server uses 1.024GHz memory). Due to this I would not recommend running four heavily used database servers on a single physical server for any organization. But running several servers for the compliance training that was implemented because the company was caught fixing prices, along with a server or two which implement a questionnaire about how good the company IT brass is at communicating IT policy to rank and file, is OK ;-)
The following table demonstrates that the cost savings with fewer than four guests per physical server are non-existent, even if we assume equal efficiency of VMware and separate physical servers. Moreover, VMware's price premium means that you need at least eight guests on a single physical server to achieve the same cost efficiency as four Xen servers running two guests each (Red Hat and Novell do not charge for additional guests on the same physical server, up to a limit).
| Configuration | Cost of the server ($K) | Physical servers | Guests | SAN cards, QLogic ($K) | SAN storage ($K) | Server maintenance ($K/yr) | VM license ($K) | VM maintenance ($K/yr) | OS maintenance ($K/yr) | Five-year total cost of ownership ($K) | Annualized cost per guest or physical server ($K) | Cost efficiency of one guest vs. one 1U server, annualized ($K) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| VMware, 2 guests | 7 | 1 | 2 | 0.00 | 0.00 | 0.42 | 5 | 1.4 | 0.35 | 25.02 | 12.51 | -3.24 |
| VMware, 4 guests | 10 | 1 | 4 | 4.00 | 3.00 | 0.42 | 5 | 1.4 | 0.35 | 38.52 | 9.63 | -0.36 |
| VMware, 8 guests | 20 | 1 | 8 | 4.00 | 6.00 | 0.42 | 5 | 1.4 | 0.35 | 58.52 | 7.32 | 3.13 |
| Xen, 2 guests | 7 | 1 | 2 | 0.00 | 0.00 | 0.42 | 0 | 0 | 0.35 | 13.02 | 6.51 | 2.76 |
| Xen, 4 guests | 10 | 1 | 4 | 4.00 | 3.00 | 0.42 | 0 | 1.3 | 0.35 | 33.02 | 8.26 | 1.02 |
| Two 1U servers | 5 | 2 | 0 | 0.00 | 0.00 | 0.42 | 0 | 0 | 0.35 | 18.54 | 9.27 | 0.00 |
| Four 1U servers | 5 | 4 | 0 | 0.00 | 0.00 | 0.42 | 0 | 0 | 0.35 | 37.08 | 9.27 | 0.00 |
1. Even assuming the same efficiency, there are no cost savings running 4 or fewer guests per VMware server in comparison with an equal number of standard 1U servers.
2. The cost of blades is slightly higher than an equal number of 1U servers due to the cost of the enclosure, but can be assumed equal for simplicity. At the same time blades have proved to be less reliable. They are not suitable for datacenters without 24x7 personnel present. In a typical crash, for example, HP iLO on blades proves to be useless and malfunctioning as well.
3. We assume that in the case of two instances no SAN is needed/used (internal drives are used for each guest).
4. We assume that in the case of 4 guests or more, two SAN cards and SAN storage are used (one port for 4 guests, 2 ports for 8 guests).
5. For Xen we assume that in the case of 4 or more guests Oracle VM is used (which has maintenance fees).
6. For simplicity the cost of SAN storage is assumed to be a fixed cost of $3K per 1TB per 5 years (includes SAN unit amortization, maintenance and switches; excludes SAN cards in the server itself).
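The table's logic can be captured in a small model. The function and the sample inputs below are illustrative (they follow the spirit of the table, not its exact figures), but they show why the hypervisor license dominates at low guest counts:

```python
# Simplified five-year TCO-per-guest model. All inputs in $K and illustrative.

def tco_per_guest(server_cost, guests, vm_license=0.0, vm_maint=0.0,
                  san=0.0, server_maint=0.42, os_maint=0.35, years=5):
    """Annualized total cost of ownership per guest over `years`."""
    total = (server_cost + san + vm_license
             + years * (server_maint + vm_maint + os_maint))
    return total / years / guests

# A VMware-style stack needs many guests per box to amortize the license:
few  = tco_per_guest(7,  guests=2, vm_license=5, vm_maint=1.4)
many = tco_per_guest(20, guests=8, vm_license=5, vm_maint=1.4, san=10)
xen  = tco_per_guest(7,  guests=2)   # no hypervisor license fee

print(few > xen, few > many)  # True True
```

Under these assumptions two licensed guests per box are the most expensive option, which matches the table's conclusion that the license premium only pays off at high consolidation ratios.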
Performance of VMware guests under high load is not impressive, as expected for any non-paravirtualized hypervisor. Here is a more realistic assessment from the rival Xen camp:
Simon Crosby, CTO of the Virtualization and Management Division at Citrix Systems, writes on his blog: "The bottom line: VMware's 'ROI analysis' offers neither an ROI comparison nor any analysis. But it does offer valuable insight into the mindset of a company that will fight tooth and nail to maintain VI3 sales at the expense of a properly thought through solution that meets end user requirements.
The very fact that the VMware EULA still forbids Citrix or Microsoft or anyone in the Xen community from publishing performance comparisons against ESX is further testimony to VMware's deepest fear, that customers will become smarter about their choices, and begin to really question ROI."
The main advantage of heavy-weight virtualization is almost complete isolation of instances. Paravirtualization and blades achieve a similar level of isolation, so this advantage is not exclusive.
The fact that CPUs, memory and I/O channels (PCI bus) are shared among guests means that you will never get the same speed under high simultaneous workloads for several guests as in the case of an equal number of standalone servers, each with the corresponding fraction of CPUs and memory and the same set of applications. Especially problematic is sharing of the memory bridge, which works at a lower speed than the CPUs and can starve them, becoming the bottleneck well before the CPUs do. Each virtual instance of the OS loads pages independently of the others and competes for limited memory bandwidth. Even in the best case that means each guest gets a fraction of the memory bandwidth that is lower than the memory bandwidth of a standalone server. If, for example, two virtual instances are simultaneously active and performing operations that do not fit in the L2 cache, each sees only about 2/3 of the memory bandwidth of a standalone system (accesses to memory are randomly spread in time, so the sum can be greater than 100%). With memory operating at 1.024GHz that means only about 666MHz of bandwidth is available to each guest, while on a standalone server it would be at least 800MHz and could be as high as 1.33GHz. In other words, you lose approximately 1/3 of memory bandwidth by jumping on the virtualization bandwagon. That's why heavy-weight virtualization behaves badly on memory-intensive applications.
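This contention argument can be sketched as a toy model. The overlap factor below is an assumption chosen to roughly reproduce the 666MHz figure above, not a measured constant:

```python
# Toy model of per-guest effective memory bandwidth under contention.
# `overlap` is an illustrative assumption: each additional equally active
# guest contends for about half of its memory accesses.

def per_guest_mhz(mem_clock_mhz, guests, overlap=0.5):
    """Effective memory bandwidth (as an equivalent clock) seen by each guest."""
    return mem_clock_mhz / (1 + overlap * (guests - 1))

standalone = per_guest_mhz(1024, guests=1)   # 1024.0 -- no contention
shared     = per_guest_mhz(1024, guests=2)   # ~683, near the 666MHz figure above

print(int(standalone), int(shared))  # 1024 682
```

The exact numbers depend on access patterns and cache hit rates; the model only illustrates why per-guest bandwidth falls well below a simple even split as guests become simultaneously active.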
There can be a lot of synergy if you run two or more instances of identical OSes. Many pages representing identical parts of the kernel and applications can be loaded only once while being used in all virtual instances. But I think you lose some stack overflow protection this way, as your pages are shared by different instances.
As memory speed and the memory channel are bottlenecks, adding CPUs (or cores) at some point becomes just a waste of money. The amount of resources used for intercommunication increases dramatically with the number of CPUs. VMware server farms based on the largest Intel servers like the HP DL980 (up to eight 10-core CPUs) tend to suffer from this effect. The presence of a full, unmodified version of the OS in each partition introduces a significant drag on resources (both memory- and CPU-wise). I/O load can be diminished by using a SAN for each virtual instance's OS and multiple cards on the server. Still, in some deep sense heavy-weight partitioning is inefficient and will always waste a significant part of server resources.
Still, this approach is important for running legacy applications, which is the area where this type of virtualization shines.
Sun calls heavy-weight virtual partitions "logical domains" (LDoms). They are supported on Sun's T1-T3 CPU-based servers and all the latest Oracle servers. Sun supports up to 32 guests with this virtualization technology. About differences from LPARs, see Rolf M Dietze's blog:
Sun’s LDoms supply a virtual terminal server, so you have consoles for the partitions, but I guess this comes out of the UNIX history: You don’t like flying without any sight or instruments at high speed through caves, do you? So you need a console for a partition! T2000 with LDoms seems to support this, at IBM you need to buy an HMC (Linux-PC with HMC-software).
With crossbow virtual network comes to Solaris. LDoms seem to give all advantages of logical partitioning as IBMs have, but hopefully a bit faster and clearly less power consumption.
Sun offers a far more open licensing of course and: You do not need a Windows-PC to administer the machine (iSeries OS/400 is administered from such a thing).
A T2000 is fast and has up to 8 cores (32 thread-CPUs) and 16GB RAM, and has a good price for those that do not really need the pure power and are more interested in partitioning.
The Solaris zones have some restrictions aka no NFS server in zones etc. That is where LDoms come in. That’s why I want to actually compare LDoms and LPARs.
It looks like it becomes cold out there for IBM boxes….
Para-virtualization is a variant of native virtualization where the hypervisor emulates only part of the hardware and provides a special API requiring OS modifications. The most popular representative of this approach is Xen, with AIX a distant second:
With Xen virtualization, a thin software layer known as the Xen hypervisor is inserted between the server’s hardware and the operating system. This provides an abstraction layer that allows each physical server to run one or more “virtual servers,” effectively decoupling the operating system and its applications from the underlying physical server.
IBM LPARs for AIX are currently the king of the hill in this area because of higher stability in comparison with alternatives. IBM actually pioneered this class of VM machines in the late 1960s with the release of the famous CP/CMS (later VM/370). Until recently, POWER5-based servers with AIX 5.3 and LPARs were the most battle-tested and reliable virtualized environments based on paravirtualization.
Xen is the king of the paravirtualization hill in Intel space. Work on Xen has been supported by UK EPSRC grant GR/S01894, Intel Research, HP Labs and Microsoft Research (yes, despite naive Linux zealots' whining, Microsoft did contribute code to Linux ;-). Other things being equal, it provides higher speed and less overhead than full virtualization. NetBSD was the first to implement Xen. Currently the key platform for Xen is Linux, with Novell supporting it in the production version of SUSE.
Xen is now resold commercially by IBM, Oracle and several other companies. XenSource, the company created for commercialization of Xen technology, was bought by Citrix.
The main advantage of Xen is that it supports live relocation of guests. It is also a more cost-effective solution than VMware, which is definitely overpriced.
The main problem is that para-virtualization requires modification of the OS kernel so that it is aware of the environment it is running in and passes control to the hypervisor when executing privileged instructions. Therefore it is not suitable for running legacy OSes or Microsoft Windows (although Xen can run Windows on newer 51xx-series CPUs with hardware virtualization support).
Para-virtualization improves speed in comparison with heavy-weight virtualization (much less context switching), but does little beyond that. It is unclear how much faster a para-virtualized instance of an OS is in comparison with heavy-weight virtualization on "virtualization-friendly" CPUs. The Xen page claims that:
Xen offers near-native performance for virtual servers with up to 10 times less overhead than proprietary offerings, and benchmarked overhead of well under 5% in most cases compared to 35% or higher overhead rates for other virtualization technologies.
It's unclear whether this difference was measured on old Intel CPUs or the newer 5xxx series that supports virtualization extensions. I suspect the difference on newer CPUs should be smaller.
I would like to stress again that the level of OS modification is very basic, and the important idea of factoring out common functions, like the virtual memory management implemented in classic VM/CMS, is not utilized. Therefore all the redundant processing typical of heavy-weight virtualization is present in the para-virtualization environment.
Note: Xen 3.0 and above support both para-virtualization and full (heavy-weight) virtualization to leverage the hardware support built into Intel VT-x and AMD Pacifica processors. According to the XenSource Products - Xen 3.0 page:
With the 3.0 release, Xen extends its feature leadership with functionality required to virtualize the servers found in today’s enterprise data centers. New features include:
- Support for up to 32-way SMP guests
- Intel® VT-x and AMD Pacifica hardware virtualization support
- PAE support for 32 bit servers with over 4 GB memory
- x86/64 support for both AMD64 and EM64T
One very interesting application of paravirtualization is so-called virtual appliances. This is a whole new area that we discuss on a separate page.
Another very interesting application of paravirtualization is "cloud" environments like the Amazon Elastic Compute Cloud.
All in all, paravirtualization along with light-weight virtualization (BSD jails and Solaris zones) looks like the most promising type of virtualization.
This type of virtualization was pioneered in FreeBSD (jails) and was further developed by Sun and introduced in Solaris 10 as the concept of zones. There are various experimental add-ons of this type for Linux, but none has gained any prominence.
Solaris 10 11/06 and later are capable of cloning a zone as well as relocating it to another box through a feature called Attach/Detach. The key advantage is that you have a single instance of the OS, so the price that you pay in the case of heavy-weight virtualization is waived. That means that light-weight virtualization is the most efficient resource-wise. It also has great security value. Memory can become a bottleneck, as all memory accesses are channeled via a single controller, but you have a single virtual memory system for all zones -- a great advantage that permits reusing memory for similar processes. It is also now possible to run Linux applications in zones on x86 servers (branded zones).
Zones are a really revolutionary and underappreciated development, which was hurt greatly by inept Sun management and the subsequent acquisition by Oracle.
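For readers unfamiliar with the mechanics, a zone's lifecycle is driven by two commands. The session below is a hypothetical sketch (zone name, paths and addresses are made up), showing the general shape of zone creation and the Attach/Detach relocation mentioned above:

```
# Hypothetical sketch: creating and booting a zone on Solaris 10
zonecfg -z web1 'create; set zonepath=/zones/web1; \
  add net; set physical=e1000g0; set address=192.168.1.10/24; end'
zoneadm -z web1 install
zoneadm -z web1 boot

# Relocation (Solaris 10 11/06 and later): detach, copy, attach
zoneadm -z web1 detach
# ... copy /zones/web1 to the target host ...
zoneadm -z web1 attach
```

Because all zones share one kernel, this whole cycle involves no guest OS installation, which is exactly the efficiency advantage described above.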
IBM's "lightweight" product would be Workload Manager for AIX, which is an older (circa 2001) and less elegant technology than BSD jails and Solaris zones:
Current UNIX offerings for partitioning and workload management have clear architectural differences. Partitioning creates isolation between multiple applications running on a single server, hosting multiple instances of the operating system. Workload management supplies effective management of multiple, diverse workloads to efficiently share a single copy of the operating system and a common pool of resources
IBM lightweight virtualization in versions of AIX before 6 operated under a different paradigm, with the closest thing to a zone being a "class". The system administrator (root) can delegate the administration of the subclasses of each superclass to a superclass administrator (a non-root user). Unlike zones, classes can be nested:
The central concept of WLM is the class. A class is a collection of processes (jobs) that has a single set of resource limits applied to it. WLM assigns processes to the various classes and controls the allocation of system resources among the different classes. For this purpose, WLM uses class assignment rules and per-class resource shares and limits set by the system administrator. The resource entitlements and limits are enforced at the class level. This is a way of defining classes of service and regulating the resource utilization of each class of applications to prevent applications with very different resource utilization patterns from interfering with each other when they are sharing a single server.
In AIX 6 IBM adopted Solaris-style light-weight virtualization.
Blade servers are an increasingly important part of the enterprise datacenters, with consistent double-digit growth easily outpacing the overall server market. IDC estimated that 500,000 blade servers were sold in 2005, or 7% of the total market, with customers spending $2.1 billion.
While blades are not virtualization in the pure technical sense, a rack of blades (a bladesystem) possesses some additional management capabilities that are not present in standalone 1U servers, and modern versions usually have a shared I/O channel to NAS. They can be viewed as a "hardware factorization" approach to server construction, which is not that different from virtualization. They share one shortcoming with virtualization: they are less stable than standalone servers. Unexplainable crashes of several blades (running different OSes) have been observed on HP blades. iLO 3, used in HP blades, is buggy and is a source of significant additional problems that diminish the value proposition.
The first shot in this direction was the new generation of bladesystems: the IBM BladeCenter H system has offered I/O virtualization since February 2006. Next was the HP BladeSystem c-Class. The latter offers marginally better server management (via the enclosure manager) and can save 10-20% of power in comparison with an equal number of rack-mounted 1U servers with identical CPU and memory configurations. Please note that those savings are pre-paid by customers in the form of the blade chassis cost, so they go to HP, not to customer pockets.
Blades are also a cost-effective solution, as you do not pay for virtualization manager support as in the case of VMware. A typical blade is $2-4K, and an enclosure for 16 blades is approximately $20K, so it adds another $1.25K per blade. The "real" cost of a blade is therefore around $3-5K, which is equal to or higher than comparable 1U servers. So blades do not win over 1U servers in cost or performance, but they do win in energy consumption and management. Also, enclosures provide InfiniBand connections between blades, so it is natural to create clusters.
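The per-blade arithmetic above is simple enough to verify, using the article's own rough figures:

```python
# Amortizing the enclosure cost across a full complement of blades.
# Figures are the article's rough estimates, not current list prices.

blade_cost     = 3000    # midpoint of the $2-4K per-blade range
enclosure_cost = 20000   # ~$20K enclosure holding 16 blades
blades_per_box = 16

per_blade_overhead = enclosure_cost / blades_per_box   # $1250 per blade
real_blade_cost = blade_cost + per_blade_overhead      # ~$4250 all-in

print(per_blade_overhead, real_blade_cost)
```

Note the amortization assumes a fully populated enclosure; a half-empty chassis doubles the per-blade overhead and erodes the comparison with 1U servers further.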
Sun also offers blades, but it is a minor player in this area. Its most interesting and innovative offering is the Sun Blade 8000 Modular System, which targets a higher end than usual blade servers. Here is how CNET described the key idea behind the server in the article "Sun defends big blade server: Size matters":
Sun co-founder Andy Bechtolsheim, the company's top x86 server designer and a respected computer engineer, shed light on his technical reasoning for the move.
"It's not that our blade is too large. It's that the others are too small," he said.
Today's dual-core processors will be followed by models with four, eight and 16 cores, Bechtolsheim said. "There are two megatrends in servers: miniaturization and multicore--quad-core, octo-core, hexadeci-core. You definitely want bigger blades with more memory and more input-output."
When blade server leaders IBM and HP introduced their second-generation blade chassis earlier this year, both chose larger products. IBM's grew 3.5 inches taller, while HP's grew 7 inches taller. But opinions vary on whether Bechtolsheim's prediction of even larger systems will come true.
"You're going to have bigger chassis," said IDC analyst John Humphries, because blade server applications are expanding from lower-end tasks such as e-mail to higher-end tasks such as databases. On the more cautious side is Illuminata analyst Gordon Haff, who said that with IBM and HP just at the beginning of a new blade chassis generation, "I don't see them rushing to add additional chassis any time soon."
Business reasons as well as technology reasons led Sun to re-enter the blade server arena with big blades rather than more conventional smaller models that sell in higher volumes, said the Santa Clara, Calif.-based company's top server executive, John Fowler. "We believe there is a market for high-end capabilities. And sometimes you go to where the competition isn't," Fowler said.
As a result of such factorization, more and more functions move into the blade enclosure. Power consumption improves dramatically: blades typically use low-power CPUs, and all blades share the same power supplies, which in a full or nearly full enclosure lets the power supplies work at much greater efficiency (twice or more that of a typical standalone server). That cuts air conditioning costs too. Newer blades also monitor air flow and adjust fans accordingly. As a result, the energy bill can be 66% of that for the same number of 1U servers, or even less.
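To make the ~66% figure concrete, here is a rough annual energy comparison; the wattage and electricity price are illustrative assumptions, only the 66% ratio comes from the text:

```python
# Rough annual energy comparison: 16 standalone 1U servers vs. a
# fully populated 16-blade enclosure. The 0.66 ratio is the figure
# from the text; wattage and $/kWh are illustrative assumptions.
SERVERS = 16
WATTS_PER_1U = 400            # assumed average draw of a 1U server
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10          # assumed electricity price, $/kWh

rack_kwh = SERVERS * WATTS_PER_1U * HOURS_PER_YEAR / 1000
blade_kwh = rack_kwh * 0.66   # blades draw about two thirds of rack power

print(f"1U rack: {rack_kwh:,.0f} kWh/yr, ${rack_kwh * PRICE_PER_KWH:,.0f}")
print(f"Blades:  {blade_kwh:,.0f} kWh/yr, ${blade_kwh * PRICE_PER_KWH:,.0f}")
```

Even before counting the reduced air-conditioning load, the difference of roughly a third of the power bill compounds over the life of the hardware.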
Blades generally solve the memory bandwidth problem typical for most types of virtualization except domain-based. Think of them as predefined partitions with a fixed number of CPUs and a fixed amount of memory. Dynamic swap of images between blades is possible. Some I/O can be local, as a blade typically carries two (half-size blades) or four (full-size blades) 2.5" disks; with solid state drives being a reliable and fast, albeit expensive, alternative to traditional rotating hard drives, and with memory cards like ioDrive, local disk speed can be as good as or better than on a large server with, say, sixteen 15K RPM hard drives. That permits offloading OS-related I/O from application-related I/O. Blades also present some technical problems (the amount of heat dissipated is substantial, and the small volume of a blade presupposes energy-efficient CPUs and low-voltage memory), but these are at least theoretically solvable. Still, you pay a price even here, as the reliability of a blade-based solution is typically lower than that of standalone servers. In other words, there is no free lunch.
Last modified: February 19, 2014