
Solaris vs. Linux: Framework for the Comparison

by Dr Nikolai Bezroukov


 


Midrange Servers

Computers should obey a square law -- when the price doubles,
 you should get at least four times as much speed

 -- Seymour Cray

Simplifying somewhat, we can define a midrange server as a two-socket server.

For a large enterprise, the initial cost of a midrange server is probably the least expensive part of its total cost of ownership (TCO) over, say, a five-year lifespan. During those five years the cost of electricity, software licensing (say, RHEL) and maintenance of the hardware and installed software (for example Oracle) often comes close to, and sometimes exceeds, the initial cost of the hardware. Manpower costs are even greater. Just the cost of administration, the OS license and hardware maintenance contracts can approach $4K per year, or $20K over five years. For example, if a sysadmin administers 40 servers and costs (with benefits) $80K per year, human labor alone costs $2K per server per year, or $10K over five years. Add $3.5K for Red Hat support and, say, $5K for hardware maintenance ($1K per year for a 4-hour on-site repair contract) and we are talking about roughly $18.5K.
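As a back-of-the-envelope illustration, here is a small Python sketch of that five-year, per-server estimate. The dollar figures simply restate the rough assumptions above (40 servers per admin, $80K loaded salary, about $700/year for RHEL support, $1K/year for hardware maintenance); they are not vendor quotes.

    # Rough five-year TCO sketch for one midrange server (illustrative figures
    # taken from the estimates above; adjust to your own contracts and salaries).

    servers_per_admin = 40          # servers one sysadmin handles (assumption)
    admin_salary      = 80_000      # fully loaded cost per year (assumption)
    rhel_support      = 700         # OS subscription per server per year (assumption)
    hw_maintenance    = 1_000       # 4-hour on-site contract per year (assumption)
    years             = 5

    admin_per_server  = admin_salary / servers_per_admin          # $2,000/year
    yearly_running    = admin_per_server + rhel_support + hw_maintenance
    five_year_running = yearly_running * years

    print(f"Running cost per server: ${yearly_running:,.0f}/year, "
          f"${five_year_running:,.0f} over {years} years")
    # -> roughly $3,700/year, about $18,500 over five years -- comparable to or
    #    above the purchase price of a typical two-socket server.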

The maximum number of cores on a single Intel CPU now stands at 22 -- a really amazing, simply unreal number (44 cores in a two-socket server). I think soon it will be 32 cores per CPU, which means you can reach 64 cores in a single two-socket server. Add to this the ability of such a server to carry 1TB of memory and you can see that most enterprise databases can now be processed entirely in memory, with I/O needed only to write updates of the database from time to time. The maximum amount of memory for a single two-socket server is 1.5 TB or more.

Midrange servers are not very competitive with a set of low-end one-socket servers that costs the same, but for large applications they are a must. So in a way these servers do not obey Seymour Cray's law that when the price doubles, you should get at least four times as much speed.

Also, the question of how many low-end servers one midrange server can replace (the most common question during the current consolidation mania) is not that easy to answer. Intel continues to push the envelope, but there are natural physical limits to shrinking the die. Current chips are produced using a 14-nanometer process. At some point the "die shrinking saga" will end. Whether Intel will be able to shrink the die to 8 nanometers remains to be seen. In any case, such CPUs are more sensitive to cosmic radiation.

It is important to understand that the total number of cores does not correlate that well with performance. Assuming that each core operates at the same speed, a server with, say, 16 cores can't even get close to the sum of the productivities of four servers with 4 cores each. The overhead per additional core is substantial, and that should cool the heads of too-adamant virtualization enthusiasts, but they just don't care ;-). At the same time, wasting company money on the current techno fad is a noble tradition of large enterprise IT (see the discussion of conspicuous consumption below).
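To illustrate the point, here is a toy Python model of per-core scaling overhead; the 10% contention figure is an arbitrary assumption chosen for illustration, not a measured value for any particular workload.

    # Toy model: throughput of an N-core box when each extra core adds a fixed
    # amount of contention overhead (the 10% figure is purely illustrative).

    def relative_throughput(cores, overhead_per_core=0.10):
        # Each core contributes less as contention for memory, locks and I/O grows.
        per_core_efficiency = 1.0 / (1.0 + overhead_per_core * (cores - 1))
        return cores * per_core_efficiency

    one_16_core = relative_throughput(16)
    four_4_core = 4 * relative_throughput(4)

    print(f"one 16-core server : {one_16_core:.1f}x a single core")
    print(f"four 4-core servers: {four_4_core:.1f}x a single core")
    # With these assumptions the single 16-core box delivers noticeably less
    # aggregate throughput than four separate 4-core boxes.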


I would like to stress that for midrange servers, beyond the lure of low hardware and OS cost, an organization needs to consider the TCO, which includes operating system and hardware support costs ($1-$2K a year), database or ERP licensing costs and maintenance fees (often per core for software from such sharks as IBM), as well as the cost of licensing the monitoring system client (say, OpenView), plus the costs of system administrators and network infrastructure. Cutting the number of cores and raising CPU speed to the maximum is a good strategy for fighting software sharks that license their software per core (unless you can get rid of them completely, which is probably the optimal solution ;-)

Stability is one of the primary concerns in this range, as substantial downtime often represents what is called a "career limiting move" for the person responsible for the selection of the particular hardware. A level of stability that is perfectly adequate for a development desktop or laptop is completely inadequate for a midrange server.

Saving on administration costs is also a very important consideration. If you can save one Unix administrator position (for example, by not adding a new one despite the increased workload after a large acquisition), that means that in the USA you saved approximately $60K-$100K per year. It follows that all midrange servers should have built-in subsystems for remote administration, like DRAC in Dell servers or iLO in HP servers.

There are two major areas of deployment of midrange servers in large enterprises: database servers and ERP systems.

Midrange server deployments in large enterprises are dominated by database servers. Databases are big business -- just look at what some companies spend on licensing Oracle. Software licensing costs also have great influence on the selection of hardware. Oracle charges per physical socket. So if you can get a two-socket server with 16 cores per CPU, you will be paying half as much for the software license (if you need to buy one) as you would for a server with four sockets and 8 cores per CPU. And I know several large corporations that walked into this trap and paid the cost just due to a misunderstanding of this simple "Oracle license cost optimization" trick. It makes sense to discard an old 4-socket server and switch to a two-socket, 16-cores-per-CPU server from this standpoint alone.
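A trivial calculation shows the effect. The per-socket price below is a made-up placeholder, since the point is the socket count, not the exact price list of any vendor.

    # Socket-based licensing: same 32 cores, very different license bills.
    # The price per socket is a hypothetical placeholder; only the ratio matters.

    price_per_socket = 17_500   # hypothetical per-socket license cost (assumption)

    old_server = {"sockets": 4, "cores_per_cpu": 8}    # 4 x 8  = 32 cores
    new_server = {"sockets": 2, "cores_per_cpu": 16}   # 2 x 16 = 32 cores

    for name, box in (("4-socket, 8-core CPUs ", old_server),
                      ("2-socket, 16-core CPUs", new_server)):
        total_cores  = box["sockets"] * box["cores_per_cpu"]
        license_cost = box["sockets"] * price_per_socket
        print(f"{name}: {total_cores} cores, license ${license_cost:,}")
    # The two-socket box carries half the per-socket license cost for the
    # same total core count.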

In many large companies database servers are used very inefficiently: the level of the database group in a typical large company is usually rather basic, and they rely more on "brute force" than on sophistication in designing and managing the database. So many database servers are considerably over-specified. Due to the feudal structure of IT, database administrators and the Unix group are often separate teams. And while the sysadmin group has no substantial knowledge about the particular application that is running on, say, an Oracle database, the database group is usually too far removed from understanding hardware and OS issues. Also, the structure of many databases is simply beyond their control. For example, often the database is part of an ERP system or an enterprise application developed outside the company and as such must be taken "as is".

Enterprise Resource Planning (ERP) systems, which support critical business functions such as supply chain, inventory, and customer relationship management, have become popular in large corporations. They are very complex, often badly written systems, and they support enterprise resource planning equally badly ;-). The choice of the right hardware platform for ERP systems is extremely important for large enterprises, as it greatly affects the total cost, efficiency and future extensibility. Any unplanned downtime can lead to reduced productivity and lost revenue. The ERP platform must also be able to easily handle growing amounts of data and large numbers of concurrent users with reasonable response times.

In case the database server is used in an ERP system like SAP R/3, the structure of the database is dictated by the ERP vendor, and a typical ERP benchmark for the server often means more than cost. Again, the number of cores per CPU is important as it permits cutting the number of physical CPUs, and software is often licensed "per socket" (aka physical CPU). That's why Intel 16-core CPUs are so valuable in this domain. There are also greedy perverts like IBM, who charge per core; even with their excellent ability to lure top brass into their spider net, this is a desperate, almost suicidal move on their part, as there is nothing in IBM software that does not exist (often in a better incarnation) in the competition. That's how they killed Tivoli.

Again, minimizing the number of physical sockets (physical CPUs) to cut software licensing costs is the first rule of thumb. Generally, the fewer CPUs the server has, the better it is for the company wallet. If you can do the job on a two-socket server, then unless Oracle or SAP employs your close relatives, you should not order a four-socket server ;-) That's why IBM had reasonable success with its POWER5 line in the ERP systems area: in comparison with Sun SPARC, an IBM POWER server could use far fewer CPUs to get the same productivity. But against modern Intel CPUs IBM came under tremendous pressure and has no real answer to the Intel assault. Here are old Second Quarter 2011 SPEC CPU2006 results (after that the situation only got worse for IBM). The results shown are for Dell servers; other vendors' results are similar, as they all use the same CPUs.

Test Sponsor  System (Processor)                              Enabled  Enabled  Cores/  Threads/  Base   Peak
                                                              Cores    Chips    Chip    Core
Dell Inc.     PowerEdge R910 (Intel Xeon E7-4870, 2.40 GHz)   40       4        10      1         33.5   36.6
Dell Inc.     PowerEdge R910 (Intel Xeon E7-8837, 2.67 GHz)   32       4        8       1         33.9   36.3
Dell Inc.     PowerEdge R910 (Intel Xeon E7-8867L, 2.13 GHz)  40       4        10      1         30.3   33.0
Dell Inc.     PowerEdge T710 (Intel Xeon X5687, 3.60 GHz)     8        2        4       1         45.1   47.7

There are actually no non-Intel results in the table for this quarter, so the decimation of RISC vendors was complete by 2011. Also note that the clock speed of the CPU matters: the Intel Xeon X5687 in the entry-level T710 server, running at 3.6GHz, beats the R910 with the Intel Xeon E7-8837 by a wide margin. As car enthusiasts used to say: there is no replacement for displacement.

Solaris on Intel

The most cost-effective option for running modern Solaris on midrange servers is to run it on Intel hardware. The key consideration here is security, not price or performance: you get much better security than with Linux. Solaris on Intel can compete with Linux head-to-head, as the hardware is the same. The only problem is that you need to buy the hardware from Oracle (or Fujitsu), which will cost more, but you will enjoy almost the same security-via-obscurity advantage as with Solaris on UltraSPARC. That means that for the director of a large corporate datacenter the chances of waking up to news in the NYT that his datacenter was hacked and millions of customer records are gone are lower. So from the point of view of the sound sleep of datacenter brass, Solaris still makes a lot of sense ;-).

 

Sample Oracle x86 server configurations and list prices (values given as Small / Medium / Large):

System List Price: US$6,783.00 / US$11,354.00 / US$23,932.00
First Year Oracle Premier Support for Systems: US$813.96 / US$1,362.48 / US$2,871.84
Processor: Intel Xeon E5-2609, 2.4 GHz / Intel Xeon E5-2660, 2.2 GHz / Intel Xeon E5-2690, 2.9 GHz
Cores per Processor: Four / Eight / Eight
Threads per Core: One / Two / Two
Installed Processors (max two): One / Two / Two
L3 Cache: 10 MB / 20 MB / 20 MB
Installed Memory: 8 GB DDR3-1066 DIMMs / 16 GB DDR3-1600 DIMMs / 16 GB DDR3-1600 DIMMs
Installed Memory Slots (of 16): Two / Four / 16
Mass Storage: One 600 GB 15,000 rpm 3.5-inch SAS-2 hard disk drive / Three 300 GB 10,000 rpm 2.5-inch SAS-2 hard disk drives / Three 100 GB 2.5-inch eMLC SSDs with bracket plus three 600 GB 10,000 rpm 2.5-inch SAS-2 hard disk drives
Internal Disk Bays: Four / Four / Eight
Max Internal Capacity: 12 TB / 2.4 TB / 4.8 TB
Optical Drive: None / DVD RW / None

Common to all three configurations:
  Max number of processors: Two
  Max memory: 512 GB (16 DIMM slots)
  Built-In Security: Intel Advanced Encryption Standard Technology, on-board Trusted Platform Module version 1.2; Oracle Solaris Trusted Extensions, Digital Signatures, Users and Process Rights Management, IP Filter firewall software, and other security features built into the Oracle Solaris operating system
  Reliability, Availability & Serviceability (RAS): ECC DIMMs, up to eight front-accessible hot-swap drives, redundant hot-swappable power supplies and fans
  Network & I/O options: Four onboard 1/10 GbE UTP ports and four PCIe 3.0
  Ethernet ports: Four 10 GbE
  USB ports: Six USB 2.0
  Serial port: One; Video port: One; Service processor: One
  I/O modules: Four PCIe 3.0 (one internal)
  I/O slots: Three PCIe 2.0 (one x16, two x8); four I/O slots maximum
  Virtualization: Oracle VM, Oracle Solaris Containers, VMware, Microsoft Hyper-V, Intel Virtualization Technology
  Power/cooling: Two 100-240 VAC power supplies, Platinum rated, hot-swappable and redundant; redundant fans; power consumption 600W
  Physical: one rack unit (1 RU), rack mount; height 42.6 mm (1.7 in), width 436.5 mm (17.2 in), depth 737.0 mm (29.0 in); weight 16.4 kg (36.1 lbs) fully populated; cable management arm included
  Regulations: RoHS compliance
  Warranty: One year

You can also run Solaris on Intel servers from other manufacturers using Oracle VM, and that looks like another interesting proposition. We will discuss it in more detail in the VM section.

Historically there were periods when other companies tried to exploit Solaris security advantages and offered some of their servers preinstalled with Solaris 10. Those attempts are all in the past, but we will provide some information about them for historical reasons.

The importance of memory size and speed

Both the size and the speed of memory are of paramount importance, as database applications are very unfriendly toward caching. Memory in a modern computer is a very complex subsystem. See Ulrich Drepper's excellent article What Every Programmer Should Know About Memory for important details that need to be understood. Figure 2.13 from the article is reproduced below and shows the names of the DDR3 modules.

Array Freq.  Bus Freq.  Data Rate    Name (Rate)  Name (FSB)
100MHz       400MHz     6,400MB/s    PC3-6400     DDR3-800
133MHz       533MHz     8,512MB/s    PC3-8500     DDR3-1066
166MHz       667MHz     10,667MB/s   PC3-10667    DDR3-1333
200MHz       800MHz     12,800MB/s   PC3-12800    DDR3-1600
233MHz       933MHz     14,933MB/s   PC3-14900    DDR3-1866

DRAM latency is around 0.1 microseconds (100 ns). An L1 cache reference is about 0.5 ns, and an L2 cache reference is around 7 ns.

Most computer textbooks teach that a memory access is roughly equivalent to the execution time of one CPU instruction (especially for RISC architectures). With current technology the reality is that a memory operation, for example in the case of a cache miss, may cost you 100 CPU instructions or more. Also, the top speed of a CPU is currently approximately two times higher than the top speed of memory (5 GHz and 2.4 GHz respectively). So if you pair memory capable of running at 2.4 GHz with a CPU running at less than 2.4 GHz, you are losing productivity of the server. With current CPU cache sizes the optimal CPU speed is around two times the speed of memory, but this is probably unachievable for CPUs with more than 4 cores.

The gap between memory speed and disk latency is even worse. So, other things being equal, if you can install enough memory to cache 80% of the database in RAM you can achieve spectacular improvements in performance. And keep in mind that a 32GB database is a pretty large database, yet it can easily be held in memory on any modern two-socket server. Even a 256GB database can now be held in memory quite easily. SSDs can also improve read speed dramatically, and in a RAID 1 configuration write speed can come close as well.
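A minimal Python sketch of why the RAM hit ratio matters so much. The 100 ns DRAM latency is the figure quoted above; the 5 ms disk and 0.1 ms SSD random-read latencies are assumed round numbers used only for illustration.

    # Average access time as a function of how much of the working set is in RAM.
    # 100 ns DRAM latency comes from the text above; the disk and SSD figures
    # are assumed round numbers, not measurements.

    DRAM_NS = 100
    DISK_NS = 5_000_000      # ~5 ms random read on a 15K RPM drive (assumption)
    SSD_NS  = 100_000        # ~0.1 ms random read on an SSD (assumption)

    def avg_access_ns(ram_hit_ratio, miss_latency_ns):
        return ram_hit_ratio * DRAM_NS + (1 - ram_hit_ratio) * miss_latency_ns

    for hit in (0.5, 0.8, 0.99):
        print(f"{hit:.0%} in RAM: {avg_access_ns(hit, DISK_NS)/1000:8.1f} us with disk, "
              f"{avg_access_ns(hit, SSD_NS)/1000:6.1f} us with SSD")
    # Going from 80% to 99% of the working set in RAM cuts the average access
    # time by more than an order of magnitude -- memory is where the money
    # should go first.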

The cost of RAM has now decreased dramatically, so that 16GB memory modules have become common and 32GB modules affordable. With sixteen 32GB DIMMs per server you can easily have 512 GB, which should be done in most cases where larger memory improves speed, even at the cost of using less fancy CPUs.

Even a mission-critical database server with a terabyte database can keep that database completely in memory. For a database server, doubling the amount of RAM always beats doubling the number of CPUs. Most organizations usually overpay for CPUs and underpay for memory when ordering such servers. There are just too many enterprise servers that are completely unbalanced CPU/memory-wise and/or over-specified to the extent that they run at a CPU utilization of less than 10%. (Actually, corporate idiotism in buying SAN storage when it is not needed is an even greater waste of money, but that's another story.)

Filesystems comparison

I need to stress again that for database applications the speed of memory (or, more accurately, memory bandwidth) and the speed of I/O are often the two major bottlenecks. That means that high CPU speed is great, but at the end of the day memory speed and I/O speed are the two key factors that really matter.

On identical Intel servers, Solaris 10 can claim some advantage in the filesystem area due to the presence of ZFS and tmpfs, but this advantage is not that great against the XFS filesystem, now standard in RHEL. Linux also supports tmpfs; it is just not used that frequently (Solaris by default uses tmpfs for /tmp, which was a great idea of the Solaris designers).

The key problem is that Linux badly needs a new filesystem, and currently this void is filled by XFS. The most widely used Linux filesystems, ext2/ext3/ext4, are showing their age. While ZFS is probably slightly overhyped, it is a very good, scalable filesystem with a lot to offer, and it definitely makes Solaris 10 x86 boxes more attractive on the midrange, even with draconian Oracle prices for Intel servers (they ask $48K for a two-socket X3-2L with two E5-2690 2.9 GHz CPUs, just 32 GB of DDR3-1066 memory (not 1600 but 1066), and 26 600GB 10K RPM SAS-2 hard drives).

SLES 11 SP2 now supports Btrfs, but relying on it is slightly suicidal (Red Hat recently dropped support for it), as it is unclear how stable it is. Theoretically Btrfs is intended to address the lack of pooling, snapshots, checksums and integral multi-device spanning in Linux filesystems, but in reality it is too little, too late. So XFS is the only enterprise-class filesystem that Linux now has.

But ZFS is a stable, almost ten-year-old filesystem and does represent a state-of-the-art filesystem that satisfies the most stringent requirements of large businesses. At the same time you face problems with all major Linux filesystems: ext3/ext4 are slow, and ReiserFS v3 (still the default in SUSE 10, including SUSE 10 SP2) has scalability problems, and the manager of its development is in jail for murder. As Jeff Mahoney from SUSE Labs noted, due to those problems ReiserFS v3 was replaced with ext3 in openSUSE 10.2 (the original post is available from many SUSE-related blogs, see for example here):

We’ve been using ReiserFS as our default installation file system for the last 6-7 years now, and it’s served us well in that time. Unfortunately, there are a number of problems with it. ... ReiserFS has serious scalability problems. ... While I realize that XFS-style scalability isn’t a real goal ... for ReiserFS, the scalability problems are real. ... ReiserFS has serious performance problems with extended attributes and ACLs.
... ReiserFS v3 is a dead end. Hans has been pushing reiser4 for years now and declared Reiser3 in maintenance mode. ... Reiser3 lacks a number of features ... such as extents and growth beyond current limits. Since it’s in maintenance mode, that’s unlikely to change. I view reiser4 as an interesting research file system, but that’s about as far as it goes. I’ve been unimpressed with its stability so far.
... The solution for replacing an aging file system isn’t to switch to a brand new unproven file system, but rather a proven one with a clear upgrade path. That file system is ext3. Ext3’s performance in some situations may not be on par with Reiser3, but it scales better....

It should be noted that XFS, a high-quality filesystem donated by SGI, has been available on Linux for a long time, but for some reason it was rarely used. Only RHEL 7 installs it as the default (you can install it on RHEL 6.x too, but it is not the default there). It probably should be used more.

I would like to repeat it again: the quality of the filesystem and the speed of the disk subsystem are of tremendous importance on midrange servers, especially database servers, and there is a significant difference between running the same database on a server with the most expensive SCSI controller with a 1GB cache and a dozen 15K RPM drives in RAID 10, and, say, on a controller integrated with the motherboard and 10K RPM, or worse, 7,200 RPM drives.

Also, the internal bus permits data transfer at speeds of up to 6GB/sec, which is achievable only with the fastest Ethernet or fiber connections (16GFC -- 3.2GB/sec, 20GFC -- 5.1GB/sec). So local storage still rules and should be used whenever possible. 12 Gbit/s per lane is achievable with the SAS-3.0 specification. Other things being equal, for typical database applications money should be spent first on the fastest (and most expensive) I/O subsystem money can buy, then on as much memory as money can buy, and only after that on CPUs...
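To make the bandwidth comparison concrete, here is a small Python sketch using the throughput figures quoted above (6 GB/s local, 3.2 GB/s for 16GFC, 5.1 GB/s for 20GFC); the 2 TB data set size is an arbitrary example, not a figure from the text.

    # Time to pump a 2 TB data set through different pipes, using the rough
    # throughput figures quoted above (the 2 TB size is an arbitrary example).

    DATASET_TB = 2
    paths = {
        "local bus / SAS RAID controller": 6.0,   # GB/s, figure from the text
        "16GFC fibre channel"            : 3.2,   # GB/s, figure from the text
        "20GFC fibre channel"            : 5.1,   # GB/s, figure from the text
    }

    for name, gb_per_s in paths.items():
        seconds = DATASET_TB * 1024 / gb_per_s
        print(f"{name:32s}: {seconds / 60:5.1f} minutes")
    # Even against the fastest fibre links, local storage reaches these rates
    # with no extra HBAs, switches or SAN controllers in the data path.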

Fighting SANs Fashion on Midrange

One important, but often overlooked, advantage of using Intel/AMD hardware with Linux or Solaris is the ability to cut SAN expenses. Both now have filesystems that scale to large amounts of space. While SANs have legitimate uses in a large enterprise environment for shared data and large databases, and are often indispensable for clusters and multiple virtual machines running on the same box, they are a very expensive solution and one should not try to shoot birds with a cannon.

Small, especially catalog-type databases (mostly read operations), say up to 10 TB, can perform very well with local storage, especially solid-state drives and PCIe cards like the Fusion-io ioDrive. SAS RAID controllers can scale up to 6GB/sec and can have a 2GB internal cache with battery backup. And the cost of six 1TB SSDs is less than the cost of two high-end SAN fibre channel controllers with dual ports, to say nothing of the cost of the SAN storage itself and the Brocade switch it is connected to. That makes solid-state drives an interesting alternative for high-speed local storage. Such a drive has almost zero read latency, a sustained sequential read speed of up to 770MB/s (ioDrive) and a sustained sequential write speed of up to 770 MB/sec.

At the same time, when dealing with really large databases, accessing data is not the only issue that should be addressed. Other operations, such as importing/exporting data into the database, and especially backup and restore operations, are equally important, especially if we are talking about, say, half a petabyte (500 TB) of data, a typical amount backed up by a large corporation.

Actually, in the area of storage, decision-making in hardware acquisition by the IT brass of large corporations produces some mixed feelings (as in the classic quote "the bigger you are, the stupider you behave" ;-).

Another typical episode of SAN abuse is when blades are used with VMware. As blades command some price premium and VMware is expensive, it is difficult to achieve cost savings in comparison with alternatives such as four-node tray servers with an adequate number of local disks (for example, the HP Apollo 2600 or the Dell C6320 rack server). The latter allow 24 2.5-inch or 12 3.5-inch drives. In the case of "classic" blades SAN storage is necessary, as they do not have enough local storage, but the question arises whether they provide a better cost/performance ratio than can be achieved with Apollo 2600-type servers and local drives.

Even the use of a SAN unit for backup, as well as the use of tape libraries, needs to be investigated with a spreadsheet in hand. A tape unit costs approximately $10-20K for each 100TB of backup, plus the cost of tapes. This is the same cost that twenty 2TB SSD drives command now, and SSDs are much less hassle. Also, one high-end QLogic fiber card (QLE2562) costs approximately $1200, the same as a 2TB SSD drive or six 600 GB 15K RPM SAS-2 Seagate drives.
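Here is that spreadsheet as a short Python sketch, using the rough price points from the paragraph above; all figures are order-of-magnitude estimates, not vendor quotes.

    # "Spreadsheet in hand": ballpark backup hardware costs using the rough
    # price points from the text above (order-of-magnitude estimates only).

    tape_unit_per_100tb = (10_000, 20_000)   # tape library cost per 100 TB, plus tapes
    ssd_2tb_price       = 1_200              # ~ one 2TB SSD (rough estimate above)
    fc_hba_price        = 1_200              # ~ one QLogic QLE2562 dual-port HBA

    twenty_ssds_tb   = 20 * 2
    twenty_ssds_cost = 20 * ssd_2tb_price

    print(f"Tape library, 100 TB   : ${tape_unit_per_100tb[0]:,} - "
          f"${tape_unit_per_100tb[1]:,} plus tapes")
    print(f"20 x 2TB SSD ({twenty_ssds_tb} TB)    : ${twenty_ssds_cost:,}, "
          f"no tapes, no robot, less hassle")
    print(f"Two FC HBAs alone      : ${2 * fc_hba_price:,} before any SAN storage or switch")
    # The point is not the exact numbers but that the comparison is close enough
    # that it has to be done case by case, not decided by fashion.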

One needs to understand that transmitting data via fiber or copper at the speeds achievable on the local bus (6 GB per sec) is expensive. That means that twelve local drives per server can smoke any SAN. Network latency and, especially, the possibility of oversubscription of the available network channels are issues that should not be overlooked. Additional complexity is another factor: you add more expensive components such as cards and switches, each of which can fail. You can duplicate them at additional cost, but sometimes a simple question arises: "Can local storage be adequate for the task at hand?" And the answer in many cases is a definite yes, as few databases exceed a couple of terabytes, a range that is now easily achievable with local storage (even with solid-state drives). Also, the cost of the SAN connection is substantial, and for a fair comparison the money spent on the controller, switch and SAN storage box should instead be used for improving I/O (by increasing the number of controllers or switching entirely to solid-state drives), increasing memory to the maximum and using the latest Intel CPUs.

The rule "use SAN only as the heavy artillery" is often violated. The flexibility achievable with SANs is often overrated. In many cases a SAN increases rather than decreases downtime, because if something goes south, it goes south for all servers connected to the particular SAN unit, which can be a dozen or two. This is when the IT honchos who promoted SAN storage realize that centralization has its drawbacks :-).

Using drive images and appropriate software can provide 80% of the flexibility at lower cost. In many cases a SAN is used just because of fashion considerations, aggressive marketing, or inertia linked to long-gone limitations on the size of individual hard drives (a thing of the past at least since 2005). In any case, all modern midrange servers have no such limitations and can use a dozen large drives (for SATA that means up to 8TB per drive) without problems. That means that factors other than scalability should be considered before making a decision.

Using such trivial improvements as bigger RAM and a larger number of physical drives (one pair per critical filesystem), one can save money that otherwise lands in EMC's coffers ;-). It is not uncommon for a large enterprise to pay EMC almost the same sum annually as it pays two major Unix vendors for server maintenance. This amount can (and probably should) be cut by using other vendors, but the same question, "Can local storage be adequate for the task at hand?", should still be asked before making the final decision.

In this area ZFS leads the pack due to its unique advantages, but Linux XFS is not far behind and with proper tuning is very competitive.

Unless you need to access data from multiple points (and even in this case, if your needs are less than, say, 100TB, you can use GPFS with regular four-node tray servers instead of a SAN), losses due to overinvestment in SANs can be substantial. Benchmarks of equal-cost solutions are a must (usually you can increase the number of cores and/or the amount of memory on the server in a solution that uses local storage instead of a SAN: just the two required HBA cards cost over $2K, or, for a $20K server, 10% of the total cost).

The proof of the pudding is in the eating

But all these theoretical considerations are not worth the paper they are printed on. Hardware configuration issues tend to be complex and strongly depend on the application that will be running on the server. Without real benchmarks on actual hardware it is difficult to tell what improvement each platform or each component can bring. The money involved is considerable, and without a pilot and laboratory testing of actual applications and actual OSes on real hardware, enterprises are bound to make expensive mistakes in the selection of midrange servers, either because of vendor hype or their own incompetence, or both.

I just want to suggest that Solaris 10 on x86 might be a part of such a pilot.

For example, in late 2006 Tweakers.net performed an interesting test using an X4200 (4 cores). While in no way can these results be taken for granted, and they need to be reproduced in your particular environment (tuning is an important issue here, and the more you know about a particular OS the better it will perform in your environment), they found that "concurrencies of 15 and above made Solaris between 6% and 14% quicker than Linux, running PostgreSQL and MySQL, respectively. Quite remarkably, the latter does show a higher performance peak running Linux." As the margin of error in such experiments is probably around 10%, this suggests that any performance advantage of Linux over Solaris on the Intel platform is application-dependent and only real testing can reveal which platform is better. For example, for WebSphere Solaris is typically extremely competitive and outperforms Linux under large loads by a significant margin, while for the LAMP stack Linux usually holds its own.

Sun T2000 review - Solaris vs. Linux

Summing up

Intel servers wiped the floor with RISC, and on the midrange they are currently the only game in town (as on the low end). Other things being equal, for midrange servers better reliability and faster problem resolution can be achieved if the vendor of the hardware is also the vendor of the OS. That makes Oracle Linux the premier platform for the Oracle database.

But there are important security issues connected with Linux, and that's why there are advantages to Solaris on Intel for Oracle databases.

An important phenomenon in midrange server acquisition typical for large corporations can be called "conspicuous consumption". It includes various games (pretty stupid from a purely technical perspective) played by enterprise IT during the hardware selection and acquisition process, based on artificial notions of techno fashion and prestige.

Servers for large, complex applications are commonly acquired without proper testing and a pilot phase. There is an unhealthy preoccupation with the number of cores and with the use of SANs as universal openers of performance and scalability.

The typical level of understanding of modern Intel hardware architecture in large organizations varies from poor to non-existent. Tales of extremely stupid purchasing decisions abound.

But large corporations are rich, so "conspicuous consumption" is one of the privileges of such a position.

Still sometimes I think that Švejk would find himself at home in modern IT as the person responsible for ordering midrange servers in a large corporation.

