May the source be with you,
but remember the KISS principle ;-)
Zones are a lightweight VM concept, a further development and refinement of the idea of BSD jails, which were added to FreeBSD in 1999. This was a great idea which instantly raised the question: why is Linux called the flagship of open source, if FreeBSD and OpenBSD come up with more innovations using a tiny fraction of the resources? Still, despite being the originator of this breakthrough, FreeBSD did not have enough resources to fully develop the idea, and that is where Sun, as a commercial company, picked up the baton.
Zones were designed at Sun by Andrew Tucker and are "jails on steroids". They were released with Solaris 10 on Jan 31, 2005, and this was a very stable, polished implementation from the very beginning. They have better security and are better integrated into the OS than FreeBSD jails. To say that zones are great would be an understatement. They completely changed the Unix landscape (including the Unix security landscape), and this is why Solaris 10 is the first true XXI century Unix available on the marketplace. From a purely technical view it was a knockout of competitors. But Sun marketing proved to be weak (with the only exception of the "Solaris 10 - ten moves ahead" part), and Sun brass was sitting between two chairs, trying to decide whether they could save the company using open source or not (with Jonathan Schwartz's questionable, and very expensive, acquisitions in between, like his acquisition of MySQL).
The net result was that Solaris 10, and especially the concept of zones, failed to get the recognition it deserved. The small additional level of complexity that zones represent, without a marketing and education push, proved to be a formidable barrier to zone usage in big corporations, which had been the main deployment base of Solaris since 2000. If Sun brass had put the same amount of money they put into the MySQL acquisition into zone refinement and marketing, the result might have been different. They could easily have created an infrastructure similar to the Amazon elastic cloud based on Solaris 10 on Intel; that would have propelled zones to the next level and might have been much more profitable than sinking one billion into MySQL. Sun never recouped this investment, and at the end of the day it was Oracle that proved to be the major beneficiary of this disastrous move.
In 2011, 12 years after the invention of the concept in FreeBSD, the Solaris implementation of zones is still unsurpassed. It is not an accident that AIX 6 copied parts of the concept from Solaris 10: imitation is the sincerest form of flattery...
The idea of a zone is to create an isolated process tree while preserving the common OS kernel foundation. This is often called lightweight virtualization, and that's an apt name: the overhead of a zone is far less than that of any other virtualization method, and in many cases the capabilities provided by zones are adequate for what virtualization is used for. In other words, zones are almost free virtualization with 90% of the benefits. As in any other virtualization solution, processes inside the zone cannot affect processes outside. Thus, we get a lightweight virtual machine with minimal overhead that can't be matched by any existing or foreseeable type of virtual machine. In certain cases paravirtualized guests might come close, as privileged operations are replaced by calls to the hypervisor, but they still do not share a common kernel and a common virtual memory allocation scheme, so they still can't compete in efficiency.
A zone is usually called a lightweight virtual machine. Unlike complete virtual machine environments like VMware or AIX 5.3 LPARs, zones are focused mainly on security. It is important to stress that they have the smallest overhead among all mainstream virtualization technologies, and they have a clean and simple design. Unlike LPARs in AIX (a "VM/360 style virtual machine implementation with paravirtualized guests"), zones can be used both on Intel and SPARC versions of Solaris 10. Unlike VMware, you have one instance of the OS (I always wondered what's so great about running ten instances of OS virtual page management on the same hardware and paying EMC an additional $5K for the privilege -- IBM used to avoid this problem in VM/CMS by factoring virtual memory management into the VM level). The same is partially true about schedulers. In a very deep way, full virtualization solutions cannot compete with lightweight virtualization unless they use minimized, stripped-down OSes in which all "extras" like memory management and scheduling are factored out to the VM level.
It seems that zones are becoming a new, powerful security model. Instead of one computer per server, one computer can have multiple jails for applications, provided by zones, with each zone providing one service. This is especially attractive for large enterprises where the "fight for privileges" between users and administrators is especially acute. Now it can be resolved by granting root access to the zone with a particular application. That's a huge advance over the mess that used to exist.
The most important feature of zones is that they provide a method of isolating applications from each other and from the "mothership". This can be used as a new, natural and powerful security paradigm for all but the most convoluted applications (I would not recommend running an Oracle database in a zone if you still have some hair on your head; at least not right now ;-).
If a service in the zone is compromised, the activities of the attacker will be constrained to the zone, but will also be fully visible to the administrator, at minimal risk to the administrator. This model offers substantially enhanced monitoring in comparison with separate hardware devices like a network IDS, or paravirtualized guests (like AIX LPARs, or "classic" Xen). The latter offer little reliable insight into their operation once compromised. In a zoned environment the global zone can be a perfect point to watch over the zones. Also, constraints on system calls greatly hamper the attacker's ability to employ rootkits.
Zones benefited from approximately five years of experience with FreeBSD jail technology (as I mentioned above, jails were added to FreeBSD in 1999) and managed to move further along the path pioneered by FreeBSD. Solaris 10 allows separate resource allocation for each zone (see Solaris Containers-Resource Management and Solaris Zones).
Recently Sun extended the concept of a zone into a more sophisticated mechanism by implementing a "Linux zone" (a branded zone) which can run Linux executables.
Sun terminology is confusing and often unclear. In one place they use the term "zone" and in another the term "container". I tend to think that:

zones + resource management = Solaris containers
There is also an analogy between a zone and the Java sandbox concept. Each zone requires its own dedicated IP address and, to use a science fiction analogy, represents an isolated satellite revolving around an unknown planet that can communicate with other zones and the "mothership" only via network services.
The number of zones that can be effectively hosted on a single system depends on the total resource requirements of the application software running in all of the zones. Each zone duplicates certain daemons (cron, syslogd, etc.), so there is some overhead.
A minimalist zone needs approximately 50 MB of disk and 15 MB of memory. Sun recommends 100 MB of disk space for a zone as a minimum. If each zone does not do a lot of processing, or does very similar processing (with synergy, as in the case of multiple web servers), it is probably possible to host a couple of dozen web servers on a typical V210 configuration with 2 CPUs and 4 GB of RAM.
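As a rough sketch, a minimal zone of this kind can be defined in a short zonecfg session like the one below (the zone name web01, the zonepath, the IP address and the bge0 interface are all hypothetical placeholders):

# zonecfg -z web01
zonecfg:web01> create
zonecfg:web01> set zonepath=/export/zones/web01
zonecfg:web01> add net
zonecfg:web01:net> set address=192.168.1.101
zonecfg:web01:net> set physical=bge0
zonecfg:web01:net> end
zonecfg:web01> commit
zonecfg:web01> exit
# zoneadm -z web01 install
# zoneadm -z web01 boot

The default create template produces a sparse root zone, which is what keeps the disk footprint close to the figures quoted above.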
The problem with zones is not only that they add complexity, but that people often expect from a lightweight VM the capabilities of a full VM (hardware hypervisor). So there is a "false expectations" problem with this technology, which probably prevented it from acquiring the popularity it deserved. And withstanding this barrage of customer requirements was pretty difficult, as during its last years Sun did not have good technical leadership (as a technology visionary Jonathan Schwartz was a joke; his acquisition of MySQL and the game he played with Solaris were highly questionable). With branded zones the technology became a complex, expensive kludge, and the line delineating zones and paravirtualized guests became somewhat fuzzy.
I think that until its demise Sun was experiencing a period of "irrational exuberance" with zones: instead of just polishing the offering, clearly identifying its limitations, and demonstrating it in projects like the Amazon elastic cloud, the developers were trying to extend it in all directions. Some directions were problematic, like Linux zones in recent Solaris 10 x86 (zones that are able to run unmodified Linux binaries); some were dictated by customer needs, like the ability to access raw devices in zones to run Oracle databases (it's not a good idea to run Oracle Database on NFS unless you have a 10Gbit connection); but all of them were adding complexity. It was not really clear what the limits of the technology are, or in other words where you need to stop. And it is not clear what the real return is on the investment into this additional complexity.
For example, if a person wants to run unmodified Linux binaries (and this is mainly a workstation problem), in most cases (unless you are running chip tracing software or another binary with huge CPU requirements) he or she should be able to use a SunPCi card to solve the problem. I do not understand why not make the SunPCi card work on Intel boxes and use this solution for those few cases when you have no choice but to run Linux binaries, until a native Solaris solution emerges. What exactly prevents this? In the extremely rare case when you want raw power, it should be a SunPCi card with a high-end Opteron. In this case your main application can be isolated from the rest of the system, and it can also be a Windows or Apple application, not just Linux, which is probably the more practically important case. And this solution would be profitable, as customers would need to buy hardware from Sun.
I hope that this "everything is possible" activity will stop, or at least slow down, in late 2006 when Sun gets feedback about the rate of zone adoption in the industry (I bet it is slow, and additionally slowed by the problems with the initial implementation and all the new features that Sun keeps adding to the plate). When everything is possible, nothing is easy...
As a zone is a lightweight VM created within a single instance of the Solaris Operating System, you can boot a zone, log in to a zone, etc., as if it were a separate computer. The original instance of Solaris (the "mothership") is called the global zone. It always has the name global. The global zone runs system-wide processes and is used for zone administrative control. A regular user of the global zone can be the root user of a zone and thus can boot the zone, add/delete users, etc. That's a nice separation of duties in a large enterprise environment. Here is a summary of local/global zone features:
Processes in zones are isolated from other processes: even a process running with superuser credentials in a particular zone cannot view or affect activity in other zones. Processes that are assigned to different zones are only able to communicate through network APIs. For example, to share files between zones, NFS or Samba can be used.
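This isolation is easy to observe from the global zone; a sketch (the zone name web01 is hypothetical):

# ps -e -o zone,pid,comm | grep web01
# zlogin web01 ps -ef

The first command, run in the global zone, lists the processes belonging to zone web01; the second shows the same processes as seen from inside the zone. Run inside web01 itself, ps -ef shows only that zone's processes; the rest of the system is simply invisible.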
Each zone is given a portion of the file system hierarchy. Because each zone is confined to its subtree of the file system hierarchy, a workload running in a particular zone cannot access the on-disk data of another workload running in a different zone. Files used by naming services reside within a zone's own root file system view. Thus, naming services in different zones are isolated from one another, and the services can be configured differently.
Zones are ideal for hosting applications which can adversely influence each other, and they provide a way to consolidate several such applications on a single server. They are perfect for hosting providers, as they permit an adequate level of isolation of clients without the excessive, punishing penalty that is difficult to justify in the world of cut-throat competition typical of web hosting. The fact that Solaris 10 can run on regular x86 computers (for example PowerEdge 1950 and 2950 from Dell) makes this an even more attractive value proposition.
The cost and complexity of managing numerous small servers that host just one application makes it more feasible to consolidate several applications on larger, more scalable servers. A zone also provides an additional abstraction layer.
Each zone has one or several dedicated IP addresses. A zone cannot share an IP address with the "mothership" (the global zone) or with other zones.
The global zone ("good old Unix") has a dual function. It can run processes like any normal Unix system, but it can also manage the satellite zones. Each zone is also given a unique numeric identifier, similar to a UID, which is assigned when the zone is booted. The global zone always has ID 0. Zone names and numeric IDs are discussed in Using the zonecfg Command.
When logged in as root in the global zone, the administrator can monitor and control the system as a whole. All processes and all files are visible from the global zone. That's a very convenient feature which permits advanced debugging of complex applications.
A non-global (satellite) zone is administered by a zone root user, who is just a regular user of the global zone. The "global administrator" ("mothership" root) can assign the Zone Management profile to any user, converting him into a zone admin. It is important to understand that zone admin privileges are limited to the zone(s) he administers. In the global zone he is just a regular user. This is a very nice, very slick way to resolve the "root hell" problem typical of large corporations, where each application maintainer needs root privileges to perform his duties, and as such encroaches on the turf of the primary server administrators and can negatively affect them and/or other users, since he has the privileges to alter any parameter of the system. See Non-Global Zone Characteristics for more information.
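A sketch of this delegation (the user name jsmith and zone name web01 are hypothetical placeholders):

# usermod -P "Zone Management" jsmith

After that, jsmith can manage the zone from his ordinary global zone account via pfexec:

$ pfexec zoneadm -z web01 boot
$ pfexec zlogin web01

The point is that jsmith gains no extra privileges in the global zone itself; the profile only covers the zone administration commands.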
The following figure from Sun documentation shows a system with four zones. Each of the zones apps, users, and work is running a set of applications unrelated to the workloads of the other zones. Each zone can provide a customized set of services.
Each zone also has a node name that is completely independent of the zone name. The node name is assigned by the zone admin. For more information, see Non-Global Zone Node Name.
For more information about the steps involved in creating a zone, see Solaris Zone Creation Examples and the man page for the zonecfg Command.
A zone is a lightweight VM, and we should keep this fact in mind when navigating our way through the obscure terminology. Sun introduced too many states into this concept, with somewhat confusing names and semantics (for example, it looks like the "installed" and "ready" states are more like the "offline" and "online" device states ;-). See the zoneadm(1M) man page, which unfortunately does not explain this issue despite being the command designed for changing VM states. It looks like a zone can be in one of the following states.
The transitions between states (configured, installed, ready, running) are driven by zoneadm and zlogin:

- configured -> installed: zoneadm -z zonename install
- installed -> configured: zoneadm -z zonename uninstall
- installed -> ready (optional intermediate state): zoneadm -z zonename ready
- installed or ready -> running: zoneadm -z zonename boot
- ready -> installed: zoneadm -z zonename halt; a system reboot has the same effect
- running -> installed: zoneadm -z zonename halt; likewise, a system reboot returns a running zone to the installed state
- A running zone can be restarted with zoneadm -z zonename reboot, and entered with zlogin options zonename.
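The current state of every zone, including zones that are merely configured, can be checked from the global zone (the zone name and path in this sample output are illustrative):

# zoneadm list -cv
  ID NAME     STATUS      PATH
   0 global   running     /
   - web01    installed   /export/zones/web01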
If resource management features are used, it is best to align the boundaries of resource management controls with those of the zones. This alignment creates a more complete model of a virtual machine, where namespace access, security isolation, and resource usage are all controlled.
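As a hedged sketch of such alignment (the zone name and cap values are illustrative; the capped-cpu and capped-memory resources require a sufficiently recent Solaris 10 update):

# zonecfg -z web01
zonecfg:web01> add capped-cpu
zonecfg:web01:capped-cpu> set ncpus=1.5
zonecfg:web01:capped-cpu> end
zonecfg:web01> add capped-memory
zonecfg:web01:capped-memory> set physical=1g
zonecfg:web01:capped-memory> end
zonecfg:web01> commit

This way the CPU and memory limits travel with the zone definition instead of being configured separately in the resource management framework.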
Part III of Software Management Best Practices for Oracle Solaris 11 Express
By Ginny Henningsen, August 2011
Part I - Best Way to Update Software with IPS
Part II - Best Way to Automate ZFS Snapshots and Track Software Updates
Part III - Best Way to Update Software in Zones
- For the Novice: Some Background on Zones
- How Do Zones Differ in Oracle Solaris 11 Express?
- Creating Zones in Oracle Solaris 11 Express
- How Do I Configure a Non-Global Zone?
- How Do I Install a Non-Global Zone?
- How Do I Finalize Zone Installation?
- How Do I Clone a Zone?
- How Do I Install Packages on a Zone?
- How Do I Upgrade the Global Zone?
- How Do I Access the Support Repository?
- Upgrading the Global Zone
- Upgrading a Non-Global Zone
- What If the Upgrade Causes a Problem?
- Final Thoughts
This is the third article in a series highlighting best practices for software updates in Oracle Solaris 11 Express. The first article introduced the IPS software packaging model and highlighted best practices for creating a new Boot Environment (BE) before performing an update. The second article discussed the Time Slider and auto-snapshot services, describing how to initialize and use these services to periodically snapshot BEs and other ZFS volumes.
This third article dives more deeply into the topic of software updates, exploring the process of updating an Oracle Solaris 11 Express system configured with zones. This topic is especially pertinent since zones in this release differ somewhat from those in Oracle Solaris 10, as does the software upgrade process for zoned systems.
Please note that when Oracle Solaris 11 is released, it will change and simplify the process for creating and upgrading zones. This article focuses strictly on how to perform zone upgrades currently under Oracle Solaris 11 Express, and will be updated when the process changes. For reference, refer to the full documentation set for Oracle Solaris 11 Express.
Virtualization technologies are a popular means to consolidate multiple applications onto a single system for better system utilization. Solaris™ Cluster provides Solaris Zone Clusters (also called Solaris Containers Clusters), which provide virtual clusters and support the consolidation of multiple cluster applications onto a single cluster. Specifically, this article describes how Oracle Real Application Clusters (RAC) can be deployed on a zone cluster.
This paper addresses the following topics:
•"Zone cluster overview" provides a general overview of zone clusters.
•"Oracle RAC in zone clusters" describes how zone clusters work with Oracle RAC.
•"Example: Zone clusters hosting Oracle RAC" steps through an example configuring Oracle RAC on a zone cluster.
•"Oracle RAC configurations" provides details on the various Oracle RAC configurations supported on zone clusters.
- Zone cluster overview
- Oracle RAC in zone clusters
- Solaris Cluster/Oracle RAC integration
- Oracle RAC resource isolation
- Common resource components
- QFS shared file systems
- UDLM and native SKGXN support
- Configuration wizard for Oracle RAC
- Example: Zone clusters hosting Oracle RAC
- Example hardware configuration
- Individual zone cluster configurations
- Installing Solaris OS and Sun Cluster software
- Configuring the zone cluster
- Configuring the Oracle databases
- Oracle RAC configurations
- Oracle RAC 10g/11g using QFS on Solaris Volume Manager
- Oracle RAC 10g/11g using QFS on RAID
- Oracle RAC 10g/11g on Solaris Volume Manager
- Oracle RAC 10g/11g on ASM
- Oracle RAC 10g/11g on NAS
- Oracle RAC 9i using QFS on Solaris Volume Manager
- Oracle RAC 9i using QFS on RAID
- Oracle RAC 9i on Solaris Volume Manager
- Oracle RAC 9i on NAS
- Oracle RAC 9i/10g/11g on hardware RAID
- Multiple configurations on a single system
- About the authors
- Ordering Sun documents
- Accessing Sun documentation online
by Ritu Kamboj and Giri Mandalika
Today business is increasingly done on the Web, and thousands of new people, applications, businesses, and services are coming online daily. In fact, Wiki pages, mashups, social networking sites, and online stores are at the forefront of Web 2.0 technologies. As more businesses, services, and sites go online and gain in popularity, enterprises must deal with the massive increases in data, as well as collected community knowledge and shared information.
When information is readily available and secure, it can help make the organization smarter and more effective at solving business challenges. As a result, efficient and flexible environments that can scale and adapt, deploy new services quickly, and keep valuable information safe are paramount. To support this effort, Web 2.0 companies need easy access to an open, integrated platform that can help developers build and deploy high-performance, reliable Web services and applications fast. By using a complete SAMP (Solaris™ Operating System, Apache HTTP Server, MySQL™ database, PHP) application stack, open source database, and high-performance servers and storage systems, organizations are better positioned to create environments that are capable of supporting rapidly evolving, high traffic, high scale Web sites.
Part of a series, this Sun BluePrints™ article describes the process of deploying the MySQL database in virtualized environments using Solaris Zones partitioning technology.
- Technology Overview
- MySQL™ database server
- MySQL server in virtual environments
- Solaris™ Operating System
- Solaris Containers
- Solaris Zones Software
- Solaris Resource Manager software
- Installing MySQL Software in a Solaris Container
- Solaris Containers requirements
- Creating a non-global zone
- Configure, install, and boot the non-global zone
- Installing and configuring the MySQL software
- Prepare for installation
- Install the MySQL software
- Special Considerations and Best Practices
- Double buffering
- MySQL server and the ZFS™ file system
- ZFS and Tablespaces
- ZFS I/O scheduler
- ZFS recommendations for MySQL server
- Prioritize access to CPU resources with the Fair Share Scheduler
- Devices in Solaris Containers
- InnoDB thread concurrency
- Libumem for MySQL Server
- Mitigate mutex contention in MyISAM with mmap(2)
- Increase the file system cache for MyISAM
- Fixing errors about too many open files
- Enable large pages
- Solaris Dynamic Tracing probes in MySQL server
- For more information
- About the authors
- Ordering Sun documents
- Accessing Sun documentation online
The PDF is available from 820-7017
by Glenn Brunette and Jeff Victor
Part of the Solaris 10 Operating System (OS), Solaris Zones are widely discussed across all corners of the Web. Over time, Solaris Zones have grown in popularity, third-party support has increased, and the technology has been enhanced continually to support new and different kinds of features and configurations.
So why does the world need yet another article about Solaris Zones? Simple. Most publications and sites focus on the consolidation benefits of Solaris Zones. While server and service consolidation is a key use case for Solaris Zones, there is so much more to the technology. Other materials focus on system administration practices related to configuration, installation, management, and troubleshooting. This is incredibly useful information, but there is still an important gap. Namely, many people do not have a full appreciation of the security benefits enabled by Solaris Zones, and sparse root zone configurations more specifically.
- Zone Root File System
- Process Containment
- Operating System Privileges
- Default Privileges
- Required Privileges
- Prohibited Privileges
- Optional Privileges
- Operating System Kernel Modules
- Operating System Devices
- Shared IP
- Exclusive IP
- Operating System Files
- Operating System Security Configuration
- Resource Management
- Memory Controls
- Physical and Virtual Memory Capping
- Shared Memory
- Locked Memory
- CPU Controls
- Fair Share Scheduler
- CPU Capping
- Private Pool
- Shared Pool
- Miscellaneous Controls
- File Integrity Checks
- Security Auditing
- Solaris Trusted Extensions
- About the Authors
- Ordering Sun Documents
- Accessing Sun Documentation Online
Zones and Containers FAQ at OpenSolaris.org
Zones Parallel Patching: The zones parallel patching enhancement to the standard Solaris 10 patch utilities increases patching performance on systems with multiple zones by allowing parallel patching of the non-global zones.
This feature, described in the System Administration Guide: Solaris Containers--Resource Management and Solaris Zones , is in the Solaris 10 10/09 release. It is implemented on all previous Solaris 10 releases through the patch utilities patch 119254-66 (SPARC) and 119255-66 (x86) or later revision.
The maximum number of non-global zones to be patched in parallel is set in a new configuration file, /etc/patch/pdo.conf. Revision 66 or later of this patch works for all Solaris 10 systems and higher-level patch automation tools such as Sun Ops Center.
For more information, see:
- Comments in /etc/patch/pdo.conf for details on settings
- Zones Parallel Patching versus Update On Attach: When to use which one?
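The configuration file itself is a one-line setting; for example, to allow up to four non-global zones to be patched in parallel (the value 4 is illustrative and should be tuned to the number of CPUs on the system):

# cat /etc/patch/pdo.conf
num_proc=4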
Oracle supports the single instance database in Oracle Solaris 8 Containers and Oracle Solaris 9 Containers (also known as Solaris 8 Branded Zones and Solaris 9 Branded Zones).
Supported versions are Solaris 8 Containers 1.0.1 and Solaris 9 Containers 1.0.1, running on the Solaris 10 8/07 Operating System or later released update. Please check the documentation for the appropriate Oracle database version to ensure the corresponding Solaris versions are supported.
zoneadm clone uses ZFS to clone the zone.
You can still specify that a ZFS zonepath be copied instead. If neither the source nor the target zonepath is on ZFS, or if one is on ZFS and the other is not on ZFS, the clone process uses the existing copy technique. In all cases, the system copies the data from a source zonepath to a target zonepath if using a ZFS clone is not possible.
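If you want to force the copy technique even when both zonepaths are on ZFS, the clone method can be specified explicitly with the -m option (zone names are hypothetical):

# zoneadm -z zone2 clone -m copy zone1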
A -b option to zoneadm attach has also been added. Use this option to specify official or Interim Diagnostics Relief (IDR) patches to be backed out of a zone during the attach. This option applies only to zone brands that use SVr4 packaging. For more information, see the zoneadm(1M) man page and the System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
A new screencast demonstrates the configuration, installation, booting, and operation of a Solaris 10 Container on OpenSolaris.
Solaris 10 Containers are based on the lightweight Solaris Zones virtualization technology, and allow administrators to virtualize Solaris 10 OS environments on OpenSolaris without the use of a hypervisor. Solaris 10 Containers technology is currently under development: You can learn more about it and zones technology at the Zones community site on opensolaris.org.
There are two general zone types to pick from during zone creation. They are,
- Small zone - (also known as a "Sparse Root zone")
- The default. This consumes the least disk space, has the best performance and the best security.
- Big zone - (also known as a "Whole Root zone")
- The zone has its own /usr files, which can be modified independently.
If you aren't sure which to choose, pick the small zone. Below are examples of installing each zone type as a starting point for Zone Resource Controls.
This demonstrates creating a simple zone that uses the default settings, which shares most of the operating system with the global zone. The final layout will be like the following:
This demonstrates creating a zone that resides on its own slice and has its own copy of the operating system. The final layout will be like the following:
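For reference, a whole root zone can be requested at creation time by starting from the blank template, so that no inherit-pkg-dir entries are created (the zone name and path here are hypothetical):

# zonecfg -z bigzone
zonecfg:bigzone> create -b
zonecfg:bigzone> set zonepath=/export/zones/bigzone
zonecfg:bigzone> commit

Because nothing is inherited from the global zone, such a zone needs the full recommended disk space rather than the sparse root minimum.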
I needed to rename a zone on a Solaris 10 system earlier this week and here are some notes on how I did it.
The process of renaming a zone is essentially a task of renaming, editing and replacing strings in a series of (mostly XML) configuration files. All of the tasks below were carried out from the global zone on the system in question.
1. Shut down the zone to be renamed
# zoneadm -z <oldname> halt
2. Modify the configuration files that store the relevant zone configuration
# vi /etc/zones/index
Change all references of <oldname> to <newname> as appropriate
# cd /etc/zones
# mv <oldname>.xml <newname>.xml
# vi <newname>.xml
Change all references of <oldname> to <newname> as appropriate
3. Rename the main zone path for the zone
# cd /export/zones
# mv <oldname> <newname>
Your zone path may be different than the one shown above
4. Modify (network) configuration files of new zone
Depending on the applications installed in your zone, there may be several files you need to update. The essential networking files are:
# cd /export/zones/<newname>/root
# vi etc/hosts
# vi etc/nodename
But others containing your old host/zone name can also be found using this command:
# cd /export/zones/<newname>/root/etc
# find . -type f | xargs grep <oldname>
5. Boot the new zone again
# zoneadm -z <newname> boot
Jul 11, 2007 | Martello
I tried out cloning on a Solaris Zone today and it was a breeze, so much easier (and far, far quicker) than creating another zone from scratch and re-installing all the same users, packages, port lock-downs etc. Here are my notes from the exercise:
Existing System Setup
SunFire T1000 with a single sparse root zone (zone1) installed in /export/zones/zone1. The objective is to create a clone of zone1 called zone2 but using a different IP address and physical network port. I am not using any ZFS datasets (yet).
1. Export the configuration of the zone you want to clone/copy
# zonecfg -z zone1 export > zone2.cfg
2. Change the details of the new zone that differ from the existing one (e.g. IP address, data set names, network interface etc.)
# vi zone2.cfg
3. Create a new (empty, unconfigured) zone in the usual manner based on this configuration file
# zonecfg -z zone2 -f zone2.cfg
4. Ensure that the zone you intend to clone/copy is not running
# zoneadm -z zone1 halt
5. Clone the existing zone
# zoneadm -z zone2 clone zone1
Cloning zonepath /export/zones/zone1...
This took around 5 minutes to clone a 1GB zone (see notes below)
6. Verify both zones are correctly installed
# zoneadm list -vi
ID NAME STATUS PATH
0 global running /
- zone1 installed /export/zones/zone1
- zone2 installed /export/zones/zone2
7. Boot the zones again (and reverify correct status)
# zoneadm -z zone1 boot
# zoneadm -z zone2 boot
# zoneadm list -vi
ID NAME STATUS PATH
0 global running /
5 zone1 running /export/zones/zone1
6 zone2 running /export/zones/zone2
8. Configure the new zone via its console (very important)
# zlogin -C zone2
The above step is required to configure the locale, language, and IP settings of the new zone. It also creates the system-wide RSA key pairs for the new zone, without which you cannot SSH into the zone. If this step is not done, many of the services on the new zone will not start, and you may observe /etc/.UNCONFIGURED errors in certain log files.
You should now be able to log into the new zone, either from the global zone using zlogin or directly via ssh (if configured). All of the software that was installed in the existing zone was present and accounted for in the new zone, including SMF services, user configuration, security settings, etc.
If you are using ZFS datasets in your zones, then you may see the following error when trying to execute the clone command for the newly created zone:
Could not verify zfs dataset tank/xxxxx: mountpoint cannot be inherited
zoneadm: zone xxxxx failed to verify
To resolve this, ensure that the mountpoint for the dataset (i.e., ZFS partition) being used has been explicitly set to none. This has happened to me a number of times, even when the output of zfs list in the global zone suggested the dataset had no mount point; in each case, the following command did the trick for me:
# zfs set mountpoint=none tank/xxxxx
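Before re-running the clone it can help to confirm that the property change actually took effect; a minimal check (tank/xxxxx is the placeholder dataset name from the error message above):

```shell
# Verify the dataset's mountpoint is now "none" (prints the raw value)
zfs get -H -o value mountpoint tank/xxxxx
```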
-by Joost Pronk van Hoogeveen
Solaris Containers and Predictive Self-Healing technologies work together by creating separate execution environments, each with its own namespace and assigned resources. Each environment can have its own self-healing personalities that can be changed, copied, and reloaded as needed. These technologies enable administrators to determine the current state of the environment, making it easier to use the Solaris OS for consolidation efforts.
This article provides an inside look at what the Solaris 10 OS has to offer, as well as ideas on how to get started and put these new features to work, with technologies such as Solaris Containers, Solaris Predictive Self-Healing, and the Solaris Service Management Facility. Emphasis is placed on illustrating how these facilities can be used to create isolated environments customized for specific applications.
-by Jeff Victor
This Sun BluePrints article is a must-read for those looking to find new ways to reduce IT infrastructure costs and better manage end user service levels. While costs from managing vast networks of servers and software components continue to escalate, existing server consolidation and virtualization techniques do not adequately provision applications and ensure shared resources are not compromised. The Solaris Containers technology addresses this void by making it possible to create a number of private execution environments within a single instance of the Solaris OS.
This paper provides suggestions for designing system configurations using powerful tools associated with Solaris Containers, guidelines for selecting features most appropriate for the user's needs, advice on troubleshooting, and a comprehensive consolidation planning example.
August 2008 | BigAdmin
This document describes how to set up MySQL Cluster software in a Solaris Zones environment, as if it were running on independent physical servers. This setup is useful for replicating an environment in-house without using multiple physical systems. The author shows that it is also possible to extend the setup to use Solaris Zones on different physical systems.
For more details, see the list of contents below.
Download the document as PDF.
- Steps for Setting Up MySQL Cluster Software With Solaris Zones
- 1. Create Solaris Zones
- 1.1 Create the Zones Using the Command Line
- 1.2 Create the Zones Using a Script
- 2. Install MySQL Cluster Software
- 2.1 Download and Install MySQL Cluster Software for the Solaris OS for x64 Platforms
- 2.2 Set Up the Configuration for the MySQL Server (my.cnf)
- 2.3 Verify Access to the MySQL Server
- 2.4 Modify root User Environment (.profile)
- 3. Configure and Test MySQL Cluster Software
- 3.1 Configure the Management Node
- 3.2 Configure the Data and SQL Nodes
- 3.3 Start and Stop the Cluster
- 3.4 Test the Cluster Operation
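Section 1.2 above covers creating the zones with a script. As a hedged sketch of what such a script looks like (this is not the document's actual script; the zone names, zonepath, interface, and addresses are invented for illustration):

```shell
#!/bin/sh
# Illustrative only: creates three zones for cluster nodes.
# Zone names, zonepath, interface and addresses are made-up examples.
n=10
for z in mgmt node1 node2; do
    n=`expr $n + 1`
    zonecfg -z $z <<EOF
create
set zonepath=/export/zones/$z
add net
set physical=e1000g0
set address=192.168.10.$n
end
commit
EOF
    zoneadm -z $z install
    zoneadm -z $z boot
done
```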
Sun will release working Xen support code in July. This code will give OpenSolaris the ability to run on Xen as a "Domain 0" (Dom0), or host, system, with support for 32-bit and 64-bit guest (DomU) Solaris systems.
OpenSolaris will get full Xen support by October, which will be extended to Solaris 10 in the first half of 2007, Sun said.
Under Xen, a virtualised machine is called a "domain," and operating systems must be modified at the kernel level to be fully virtualised - an approach called paravirtualisation that is designed to allow for maximum performance. The Dom0 system is fully virtualised, but has direct access to hardware, unlike DomU systems.
So far, Linux operating systems such as SUSE Linux Professional 9.3, the upcoming SUSE Linux Enterprise 10, and Red Hat's Fedora Core 3 and 4 have been modified for Xen support. Operating systems such as Windows can run as unmodified guests using virtualisation technology found in newer Intel chips and upcoming AMD chips.
Virtualisation is expected to revolutionise the use of operating systems, applications and even malware once it goes mainstream. Xen, developed at the University of Cambridge, is an open-source competitor to virtualisation providers such as VMware. Sun also provides its own container technology, but said it plans to provide users with the ability to mix and match.
Sun initially got Solaris working with Xen in a rudimentary form in July 2005. In February 2006 Sun released the first, early OpenSolaris-on-Xen code.
"Running on Xen, OpenSolaris is reasonably stable, but it's still very much 'pre-alpha' compared with our usual finished code quality," wrote Sun engineer Tim Marsland in his blog at the time. "Installing and configuring a client is do-able, but not for the faint of heart."
provides suggestions for designing system configurations using powerful tools associated with Solaris Containers. This Sun BluePrints article also offers advice on troubleshooting and a comprehensive consolidation planning example.
...discusses technologies inside the Solaris 10 OS that enable administrators to determine the current state of the computing environment. This Sun BluePrints article explains how users can put these new features to work, simplifying consolidation efforts.
shows how to qualify applications so that they will support non-global zones. The discussion is focused on the Solaris Zones feature of Solaris Containers.
The OpenSolaris Project's new community and application framework, BrandZ, extends the Solaris Zones infrastructure to create Branded Zones, which are zones that contain non-native operating environments. For example, the lx brand enables Linux binary applications to run unmodified on the Solaris OS, within zones running a complete Linux userspace.
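Configuring an lx branded zone looks much like configuring a native one, with the brand selected at creation time. A sketch only, assuming an lx-capable OpenSolaris build; the zone name and the install-media path are invented examples:

```shell
# Sketch: create an lx branded zone (zone name and media path are
# invented examples; requires an lx-capable OpenSolaris build).
zonecfg -z lxzone <<EOF
create -t SUNWlx
set zonepath=/export/zones/lxzone
commit
EOF
# Install from a Linux distribution image, then boot and log in
zoneadm -z lxzone install -d /path/to/linux-media
zoneadm -z lxzone boot
zlogin lxzone
```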
Instructs users, system administrators, and developers on how to consolidate applications onto a single server. Users are guided through the consolidation process, with code examples and illustrations.
OpenSolaris Zones Presentation (pdf) - Resources
Presentation on Zones and OpenSolaris given at ApacheCon US 2006.
Authors: Narayana Janga and Shivani Khosa.
Over the years businesses have been building large-scale information systems to solve business problems, with a focus on building scalable and highly available IT infrastructures that can adapt to change. Providing sufficient availability and performance for business applications was the primary driver for these efforts. Today, the need to protect technology investments and provide the same service levels at a lower price point is shifting the focus to reducing IT infrastructure cost and improving end user service level management. To help this effort, the Solaris Operating System includes Solaris Containers, a mechanism that provides isolation between software applications or services using flexible, software-defined boundaries.
This Sun BluePrint article discusses the challenges organizations face in dealing with resource and workload management. Solaris Containers, and their constituent technologies (projects, resource pools, Zones) are introduced and explained. Worked examples that show these technologies solving resource and workload management problems provide practical examples of how to use these technologies.
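As a small illustration of the constituent technologies mentioned above (not taken from the article itself), a project carrying a CPU-shares resource control can be created from the command line; the project name, user, and share count here are invented:

```shell
# Sketch: project name, user and share count are invented examples.
# Create a project carrying a CPU-shares resource control.
projadd -U webservd -K 'project.cpu-shares=(priv,20,none)' web-proj
# Make the fair-share scheduler the default class so shares take effect.
dispadmin -d FSS
# Inspect the control on the running project.
prctl -n project.cpu-shares -i project web-proj
```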
Note: This article is available in PDF Format only.
[Jul 2, 2005] Learning Solaris 10 » Zones Unofficial FAQ
Articles (many disappeared after Oracle acquisition)
|The Solaris 8 Branded Zone - A Brief Introduction||27 Apr 2008|
|A Brief Look at Solaris 10 build 51 Zones||26 Feb 2004|
The global administrator configures a zone by specifying various parameters for the zone's virtual platform and application environment; the zonecfg command is used to create this configuration. The zone is then installed by the global administrator, who uses the zone administration command zoneadm to install software at the package level into the file system hierarchy established for the zone. The zoneadm command is then used to boot the zone, after which the global administrator can log into it with the zlogin command. At first login, the internal configuration for the zone is completed.
For information on zone configuration, installation, and login, see
|ppriv(1) - inspect or modify process privilege sets and attributes|
|zlogin(1) - Enter a zone|
|zonename(1) - print name of current zone|
|zoneadm(1M) - administer zones|
|zoneadmd(1M) - zones administration daemons|
|zonecfg(1M) - Set up zone configuration|
|getzoneid(3C) - map between zone id and name|
|getzoneidbyname(3C) - map between zone id and name|
|getzonenamebyid(3C) - map between zone id and name|
|priv_str_to_set(3C) - privilege name functions|
|File Formats and Miscellany|
|privileges(5) - the Process Rights Management privilege model|
|zones(5) - Solaris application containers|
|zcons(7D) - Zone console device driver|
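The configure/install/boot/login sequence described above maps directly onto the commands in the table. A minimal sketch for a new zone; the zone name, path, and network values are invented examples:

```shell
# Minimal zone lifecycle sketch (run as root in the global zone).
# Zone name, path and network values are invented examples.
zonecfg -z myzone <<EOF
create
set zonepath=/export/zones/myzone
add net
set physical=e1000g0
set address=192.168.1.50
end
commit
EOF
zoneadm -z myzone install      # populate the zone's file systems
zoneadm -z myzone boot
zlogin -C myzone               # first login completes sysid configuration
```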
Solaris Forums - Solaris Zones
The Clingan Zone
BigAdmin - Submitted Tech Tip Zone Replication on the Solaris 10 OS in Five Easy Steps
Zone Replication on the Solaris 10 OS in Five Easy Steps
David Steed, June 2006
The following is coffeeware -- instructions rather than software. If you use this, you are obligated to buy me a coffee... at your convenience.
These instructions describe a very simple method of moving a local zone from one machine to another (using the Solaris 10 OS).
- Two physical machines, with no shared storage
- The same Solaris 10 version installed
- Machine Y with one fully populated local zone installed (and nothing inherited)
- Machine Z with no zones installed (machine Z can also be the same machine, in which case this creates an additional zone on it)
Here are the five easy steps:
1. Log in to the console of the zone running on machine Y and create a full flash archive (this does not work properly with an image created from the global zone!).
Example: zonename# flarcreate -n "machineY" -S /machineY.flar (write it anywhere but /tmp)
2. Copy the following files from machine Y to machine Z:
- The newly created flash image
- /etc/zones/index (merge it with the existing index file)
- /etc/zones/machineY.xml (rename to machineZ.xml and edit appropriately)
3. Create the following:
- /export/zones/machineZ/root/ (the machineZ directory with 700 perms)
4. Split the flash image (flar split machineY.flar), then move the file "archive" to /export/zones/machineZ/root/, and unpack it with cpio -i.
- Uncompress if necessary (mv archive archive.Z; uncompress archive.Z)
- cd to the machineZ/root directory and run: cpio -i < archive
5. Boot the zone with zoneadm -z machineZ boot and log in -- the devices will be built at that time. Sysid information is normally required at this point ...
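Steps 2 through 5 above can be collected into one sketch, run on machine Z. The names follow the example; one assumption on my part is that flar split writes its section files into the current directory:

```shell
#!/bin/sh
# Sketch of steps 2-5 above, run as root on machine Z after copying
# machineY.flar, the merged /etc/zones/index and machineZ.xml across.
mkdir -p /export/zones/machineZ/root
chmod 700 /export/zones/machineZ
cd /export/zones/machineZ/root
flar split /machineY.flar        # produces a section file named "archive"
# If the archive section is compressed, rename and uncompress it first:
#   mv archive archive.Z; uncompress archive.Z
cpio -i < archive && rm archive
zoneadm -z machineZ boot         # devices are built at first boot
zlogin -C machineZ               # supply sysid information
```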
Last modified: July 07, 2013