AIX 5.3 supports filesystem snapshots. The problem is that they are extremely badly documented: it took me hours just to understand the simplest things about this feature. The IBM documentation here is made into an incomprehensible mess that probably ensures the feature is rarely used; you simply cannot do worse. In this sense, the Solaris documentation is a masterpiece ;-). AIX 5.3 also supports splitting mirror drives.
In JFS (and JFS only, not JFS2), in addition to the ability to create snapshots, the chfs command provides a freeze function, which writes all blocks to disk and makes the filesystem read-only for a specified number of seconds. If you are using FlashCopy to back up mounted AIX filesystems, please see the restrictions listed at the following URL: http://www-1.ibm.com/support/docview.wss?
One way to cut downtime is:
- Shut down the particular application.
- Create the snapshot, then restart the application.
- Mount the snapshot using a new mount point.
- Back up the snapshot using the new mount point.
- Delete the snapshot with rmfs.
That helps to cut downtime for critical applications.
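The steps above can be sketched as a small wrapper. This is a minimal sketch: the chfs and rmfs invocations follow the options shown on this page, while the backup step and all helper names are hypothetical. Commands are built as argument lists rather than executed, so they can be reviewed or handed to subprocess.run on a real AIX host.

```python
# Sketch of the split-mirror backup sequence from the steps above.
# Assumes AIX's chfs/rmfs commands; nothing is executed here, the
# functions only construct the argument lists.

def split_cmd(fs, snap_mount, copy=2):
    """chfs invocation that splits off mirror copy `copy` of `fs`
    and mounts it read-only at `snap_mount`."""
    return ["chfs", "-a", f"splitcopy={snap_mount}", "-a", f"copy={copy}", fs]

def backup_cmd(snap_mount, device="/dev/rmt0"):
    """Hypothetical backup step: archive the snapshot mount point."""
    return ["backup", "-f", device, snap_mount]

def cleanup_cmd(snap_mount):
    """rmfs removes the split-off copy so it can be reintegrated
    as a mirrored copy."""
    return ["rmfs", snap_mount]

if __name__ == "__main__":
    for cmd in (split_cmd("/testfs", "/backup"),
                backup_cmd("/backup"),
                cleanup_cmd("/backup")):
        print(" ".join(cmd))
```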
This "external snapshot" feature is controlled via the chfs command. To create a snapshot, type the following:
chfs -a splitcopy=/snapshot_mount_point /filesystem/mount/point
At this point, a read-only copy of the file system is available at /snapshot_mount_point. Any changes made to the original file system after the copy is split off are not reflected in the backup copy.
You can control which mirrored copy is used as the backup by using the copy attribute. The second mirrored copy is the default if a copy is not specified by the user. For example:
chfs -a splitcopy=/snapshot_mount_point -a copy=1 /filesystem/mount/point
To reintegrate the JFS split image as a mirrored copy at the /backup_point mount point, use the rmfs command. It removes the file system copy from its split-off state and allows it to be reintegrated as a mirrored copy.
The act of freezing a file system produces a nearly consistent on-disk image of the file system, writing all dirty file system metadata and user data to disk. In its frozen state, the file system is read-only, and anything that attempts to modify the file system or its contents must wait for the freeze to end. The value of timeout must be either 0, off, or a positive number. If a positive number is specified, the file system is frozen for a maximum of timeout seconds. If timeout is 0 or off, the file system is thawed, and modifications can proceed again.
Note: Freezing base file systems (/, /usr, /var, /tmp) can result in unexpected behavior.
Splitting mirror drives is available via the chfs utility option -a:
- -a splitcopy=NewMountPointName
- Splits off a mirrored copy of the file system and mounts it read-only at the new mount point. This provides a copy of the file system with consistent JFS meta-data that can be used for backup purposes. User data integrity is not guaranteed, so it is recommended that file system activity be minimal while this action is taking place. Only one copy may be designated as an online split mirror copy.
For example, to split off a copy of a mirrored file system and mount it read-only for use as an online backup, enter:
chfs -a splitcopy=/backup -a copy=2 /testfs
By Jake Edge
June 25, 2008
Freezing seems to be on the minds of some kernel hackers these days; whether it is the northern summer or the southern winter that is causing it is unclear. Two recent patches posted to linux-kernel look at freezing (essentially suspending) two different pieces of the kernel: filesystems and containers. For containers, it is a step along the path to being able to migrate running processes elsewhere, whereas for filesystems it will allow backup systems to snapshot a consistent filesystem state. Other than conceptually, the patches have little to do with each other, but each is fairly small and self-contained, so a combined look seemed in order.
Takashi Sato proposes taking an XFS-specific feature and moving it into the generic filesystem code. The patch would provide an ioctl() for suspending write access to a filesystem (freezing it), along with a thaw operation to resume writes. For backups that snapshot the state of a filesystem or otherwise operate directly on the block device, this can ensure that the filesystem is in a consistent state.
Essentially the patch just exports the freeze_bdev() kernel function in a user-accessible way. freeze_bdev() locks a file system into a consistent state by flushing the superblock and syncing the device. The patch also adds tracking of the frozen state to the struct block_device state field. In its simplest form, freezing or thawing a filesystem would be done as follows:
ioctl(fd, FIFREEZE, 0);
ioctl(fd, FITHAW, 0);
Where fd is a file descriptor of the mount point and the argument is ignored.
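The FIFREEZE and FITHAW request numbers are defined in the kernel headers as _IOWR('X', 119, int) and _IOWR('X', 120, int). A hedged Python sketch follows: it reproduces the _IOWR bit layout and shows what the call would look like. The freeze() helper is illustrative only, since actually issuing the ioctl requires root and a mounted filesystem on a kernel that supports it.

```python
import fcntl
import os

def _IOWR(type_chr, nr, size):
    """Python rendering of the kernel's _IOWR macro: direction bits
    (read|write = 3) at bit 30, argument size at bit 16, type
    character at bit 8, and the command number in the low byte."""
    return (3 << 30) | (size << 16) | (ord(type_chr) << 8) | nr

FIFREEZE = _IOWR("X", 119, 4)  # 4 == sizeof(int)
FITHAW   = _IOWR("X", 120, 4)

def freeze_and_thaw(mount_point):
    """Freeze, then thaw, the filesystem at mount_point.
    Requires root; shown for illustration only and not called here."""
    fd = os.open(mount_point, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, FIFREEZE, 0)
        # ... take the block-level snapshot here ...
        fcntl.ioctl(fd, FITHAW, 0)
    finally:
        os.close(fd)
```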
In another part of the patchset, Sato adds a timeout value as the argument to the ioctl(). For XFS compatibility (though, courtesy of a patch by David Chinner, the XFS-specific ioctl() is removed), a value of 1 for the pointer argument means that the timeout is not set. A value of 0 for the argument also means there is no timeout, but any other value is treated as a pointer to a timeout value in seconds. It would seem that removing the XFS-specific ioctl() would break any applications that currently use it anyway, so keeping the compatibility of the argument value 1 is somewhat dubious.
If the timeout occurs, the filesystem will be automatically thawed. This is to protect against some kind of problem with the backup system. Another ioctl() flag, FIFREEZE_RESET_TIMEOUT, has been added so that an application can periodically reset its timeout while it is working. If it deadlocks, or otherwise fails to reset the timeout, the filesystem will be thawed. Another FIFREEZE_RESET_TIMEOUT after that occurs will return EINVAL so that the application can recognize that it has happened.
Moving on to containers, Matt Helsley posted a patch which reuses the software suspend (swsusp) infrastructure to implement freezing of all the processes in a control group (i.e. cgroup). This could be used now to checkpoint and restart tasks, but eventually could be used to migrate tasks elsewhere entirely for load balancing or other reasons. Helsley's patch set is a forward port of work originally done by Cedric Le Goater.
The first step is to make the freeze option, in the form of the TIF_FREEZE flag, available to all architectures. Once that is done, moving two functions, refrigerator() and freeze_task(), from the power management subsystem to the new kernel/freezer.c file makes freezing tasks available even to architectures that don't support power management.
As is usual for cgroups, controlling the freezing and thawing is done through the cgroup filesystem. Adding the freezer option when mounting will allow access to each container's freezer.state file. This can be read to get the current freezer state or written to change it as follows:
# cat /containers/0/freezer.state
RUNNING
# echo FROZEN > /containers/0/freezer.state
# cat /containers/0/freezer.state
FROZEN
It should be noted that it is possible for tasks in a cgroup to be busy doing something that will not allow them to be frozen. In that case, the state would be FREEZING. Freezing can then be retried by writing FROZEN again, or canceled by writing RUNNING. Moving the offending tasks out of the cgroup will also allow the cgroup to be frozen. If the state does reach FROZEN, the cgroup can be thawed by writing RUNNING.
In order for swsusp and cgroups to share the refrigerator() it is necessary to ensure that frozen cgroups do not get thawed when swsusp is waking up the system after a suspend. The last patch in the set ensures that thaw_tasks() checks for a frozen cgroup before thawing, skipping over any that it finds.
There has not been much in the way of discussion about the patches on linux-kernel, but an ACK from Pavel Machek would seem to be a good sign. Some comments by Paul Menage, who developed cgroups, also indicate interest in seeing this feature merged.
2.2 JFS2 internal snapshot
With AIX 5L V5.2, the JFS2 snapshot was introduced. Snapshots had to be created into separate logical volumes. AIX V6.1 offers the ability to create snapshots within the source file system.
Therefore, starting with AIX V6.1, there are two types of snapshots:
- External snapshot
- Internal snapshot
Table 2-1 provides an overview of the differences between the two types of snapshots.
Category             External snapshot           Internal snapshot
Location             Separate logical volume     Within the same logical volume
Access               Must be mounted separately  /fsmountpoint/.snapshot/snapshotname
Maximum generations  15                          64
AIX compatibility    >= AIX 5L V5.2              >= AIX V6.1
Table 2-1 Comparison of external and internal snapshots
Both the internal and the external snapshots keep track of the changes to the snapped file system by saving the modified or deleted file blocks. Snapshots provide point-in-time (PIT) images of the source file system. Often, snapshots are used to be able to create a consistent PIT backup while the workload on the snapped file system continues.
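The change-tracking idea described above can be illustrated with a toy copy-on-write model (not AIX code; the class and its names are invented for illustration): the snapshot stores only the pre-modification copy of each block that changes after the point in time, so unchanged blocks consume no extra space.

```python
class CowSnapshot:
    """Toy copy-on-write snapshot of a list of blocks, illustrating
    how a point-in-time image is preserved: only blocks modified
    after the snapshot are saved into the snapshot's storage."""

    def __init__(self, volume):
        self.volume = volume   # live blocks; keeps mutating
        self.saved = {}        # block index -> original (PIT) content

    def write(self, index, data):
        # first write to a block since the snapshot: save the old copy
        if index not in self.saved:
            self.saved[index] = self.volume[index]
        self.volume[index] = data

    def read_snapshot(self, index):
        # snapshot view: saved copy if the block changed, else live block
        return self.saved.get(index, self.volume[index])

vol = ["a", "b", "c"]
snap = CowSnapshot(vol)
snap.write(1, "B")             # the live volume changes...
assert vol == ["a", "B", "c"]
# ...but the snapshot still shows the point-in-time image
assert [snap.read_snapshot(i) for i in range(3)] == ["a", "b", "c"]
```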
The internal snapshot introduces the following enhancements:
- No super user permissions are necessary to access data from a snapshot, since no initial mount operation is required.
- No additional file system or logical volume needs to be maintained and monitored.
- Snapshots are easily NFS exported, since they are held in the same file system.
2.2.1 Managing internal snapshots
A JFS2 file system must be created with the new -a isnapshot=yes option. Internal snapshots require the use of the extended attributes v2 and therefore the crfs command will automatically create a v2 file system.
Existing file systems created without the isnapshot option cannot be used for internal snapshots. They have to be recreated or have to use external snapshots.
There are no new commands introduced with internal snapshots. Use the snapshot, rollback, and backsnap commands to perform operations. Use the new -n snapshotname option to specify internal snapshots. There are corresponding SMIT and Web-based System Manager panels available.
To create an internal snapshot:
# snapshot -o snapfrom=/aix61diff -n snap01
Snapshot for file system /aix61diff created on snap01
To list all snapshots for a file system:
# snapshot -q /aix61diff
Snapshots for /aix61diff
Current  Name     Time
*        snap01   Tue Sep 25 11:17:51 CDT 2007
To list the structure on the file system:
# ls -l /aix61diff/.snapshot/snap01
total 227328
-rw-r--r--    1 root     system     10485760 Sep 25 11:33 file1
-rw-r--r--    1 scott    staff       1048576 Sep 25 11:33 file2
-rw-r--r--    1 jenny    staff     104857600 Sep 25 11:33 file3
drwxr-xr-x    2 root     system          256 Sep 24 17:57 lost+found
The previous output shows:
Note: The .snapshot directory in the root path of every snapped file system is not visible to the ls and find commands. If the .snapshot directory is explicitly specified as an argument, they can display its contents.
- All snapshots are accessible in the /fsmountpoint/.snapshot/ directory.
- The data in the snapshot directories are displayed with their original file permissions and ownership. The files are read-only; no modifications are allowed.
To delete an internal snapshot:
# snapshot -d -n snap01 /aix61diff
2.2.3 Considerations
The following applies for internal snapshots:
- A snapped file system can be mounted read only on previous AIX 5L versions. The snapshot itself cannot be accessed. The file system must be in a clean state; run the fsck command to ensure that this is true.
- A file system created with the ability for internal snapshots can still have external snapshots.
- Once a file system has been enabled to use internal snapshots, this cannot be undone.
- If the fsck command has to modify the file system, any internal snapshots for the file system will be deleted by fsck.
- Snapped file systems cannot be shrunk.
- The defragfs command cannot be run on a file system with internal snapshots.
- Existing snapshot Web-based System Manager and SMIT panels are updated to support internal snapshots.
The following items apply to both internal and external snapshots:
- A file system can use only one type of snapshot at a time.
- Typically, a snapshot will need two to six percent of the space needed for the snapped file system. For a highly active file system, 15 percent is estimated.
- External snapshots are persistent across a system reboot.
- During the creation of a snapshot, only read access to the snapped file system is allowed.
- There is reduced performance for write operations to a snapped file system. Read operations are not affected.
- Snapshots are not a replacement for backups. A snapshot always depends on the snapped file system, while backups have no dependency on the source.
- Neither the mksysb nor alt_disk_install commands will preserve snapshots.
- A file system with snapshots cannot be managed by DMAPI. A file system being managed by DMAPI cannot create a snapshot.
Split mirror creates a physical clone of the storage entity, such as a file system, volume, or LUN, onto another entity of the same kind and the exact same size. The entire contents of the original volume are copied onto a separate volume. Clone copies are highly available, since they are exact duplicates of the original volume residing on separate storage. However, because of the data copy, such snapshots cannot be created instantaneously. Alternatively, a clone can be made available instantaneously by "splitting" a pre-existing mirror of the volume into two, with the side effect that the original volume has one fewer synchronized mirror. This snapshot method requires as much storage space as the original data for each snapshot, and it has the performance overhead of writing synchronously to the mirror copy.
EMC Symmetrix and the AIX Logical Volume Manager support split mirror. Additionally, any RAID system supporting multiple mirrors can be used to create a clone by splitting a mirror.
Copy-on-write with background copy (IBM FlashCopy)
Some vendors offer an implementation where a full copy of the snapshot data is created using copy-on-write and a background process that copies data from original location to snapshot storage space. This approach combines the benefits of copy-on-write and split mirror methods as done by IBM FlashCopy and EMC TimeFinder/Clone. It uses copy-on-write to create an instant snapshot and then optionally starts a background copy process to perform block-level copy of the data from the original volume (source volume) to the snapshot storage (target volume) in order to create an additional mirror of the original volume.
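The combination described above can be sketched as a toy model (all names are invented; this is not IBM's implementation): reads of the target fall back to the source until a background copier has copied every block, and writes to the source trigger copy-on-write so the target always reflects the point in time.

```python
class FlashCopyLike:
    """Toy model of a COPY-type relationship: instant snapshot via
    copy-on-write plus a background process that gradually turns the
    target into a full block-level clone of the source volume."""

    def __init__(self, source):
        self.source = source
        self.target = [None] * len(source)  # None = not yet copied
        self.next_block = 0                 # background copy cursor

    def write_source(self, index, data):
        # copy-on-write: preserve the point-in-time block first
        if self.target[index] is None:
            self.target[index] = self.source[index]
        self.source[index] = data

    def background_copy_step(self):
        # copy one not-yet-copied block; returns False when done
        while self.next_block < len(self.source):
            i = self.next_block
            self.next_block += 1
            if self.target[i] is None:
                self.target[i] = self.source[i]
                return True
        return False

    def read_target(self, index):
        block = self.target[index]
        return self.source[index] if block is None else block
```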
When a FlashCopy operation is initiated, a FlashCopy relationship is created between the source volume and target volume. This type of snapshot is called a COPY type of FlashCopy operation.
IBM incremental FlashCopy
Incremental FlashCopy tracks changes made to the source and target volumes when the FlashCopy relationships are established. This allows the capability to refresh a LUN or volume to the source or target's point in time content using only the changed data. The refresh can occur in either direction, and it offers improved flexibility and faster FlashCopy completion times.
This incremental FlashCopy option can be used to efficiently create frequent, faster backups and restores without the penalty of having to copy the entire content of the volume.
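The incremental idea can be sketched with a dirty-block set (an illustrative toy, not IBM code): after the initial full copy, writes are tracked, and a refresh copies only the changed blocks, so the work done is proportional to the amount of change rather than the volume size.

```python
class IncrementalRelationship:
    """Toy model of incremental FlashCopy: an initial full copy, a
    record of blocks changed since the last refresh, and a refresh
    that copies only those changed blocks to the target."""

    def __init__(self, source):
        self.source = source
        self.target = list(source)  # initial full copy
        self.dirty = set()          # blocks changed since last refresh

    def write_source(self, index, data):
        self.source[index] = data
        self.dirty.add(index)       # track the change

    def refresh(self):
        copied = len(self.dirty)
        for i in self.dirty:        # copy only the changed blocks
            self.target[i] = self.source[i]
        self.dirty.clear()
        return copied               # work is proportional to changes
```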
Flashcopy on FAStT700 with AIX 5.2 (P Series Server)
Posted: Nov 23, 2006 04:57:45 AM
I am hoping somebody with Flashcopy experience on AIX may be able to help. We have had a few issues with the scripts we have been running to import Flashcopies of our databases to our P series host nightly to dump to tape; mostly centering around repository sizing which I have pretty much resolved.
I am struggling with a couple of legacy issues though. First of all we need to add some more disk (LUNs) from the array, and I am concerned about conflicts with PVIDs on the scripted Flashcopies.
Currently our import/export runs something like this :
- Recreate Flashcopy with SMcli(FAStT700)
- LUN Mapping with SMcli(FAStT700)
- hot_add (AIX cfgmgr) to import the disk devices to the host
- chdev -l hdisk -a pv=clear (AIX) to clear the source LVM data structures from the FC's
- recreatevg (AIX) to recreate the Flashcopy Volume Group
- mount filesystems
- umount filesystems (AIX)
- varyoffvg (AIX) varyoff the FC volume group
- exportvg (AIX) export the FC volume group
- rmdev -l hdisk -d (AIX) completely remove the FC devices from the ODM
- Remove LUN mapping with SMcli(FAStT700)
- Disable the Flashcopy with SMcli (FAStT700)
The problem is that, as we completely remove the FC PVs from AIX each time, they only keep the same PVIDs each time for scripting purposes because we never add any other volumes to the system.
Somebody on another forum suggested that we should only remove the FCs from AIX using rmdev -l hdisk (no -d flag) so that they remain defined in the ODM. That theory seems to work, but unfortunately when re-importing the volumes, as soon as you run 'chdev -l hdisk -a pv=clear' I cannot get the 'defined' volumes to switch back to 'available', and subsequently re-running hot_add (cfgmgr) just brings the FCs online with new PVIDs, leaving the original ones as 'defined'.
I have tried throwing a 'chdev -l hdisk -a pv=yes' into the equation and even a 'mkdev -l hdisk' into the mix in various orders but nothing seems to work. Has anybody got any ideas how I can effectively 'hardcode' the FC pvids in the ODM and reuse the same ones each time ?
The other issues we have experienced are failures where either 'chdev -l hdisk -a pv=clear' fails, or the imported volumes have corrupt JFS superblocks when mounted on the host, requiring an fsck or a re-run of the snapshot.
We shut down the database prior to recreating the FCs to quiesce I/O; but I was wondering if additionally we should be (a) running a sync call prior to recreating the FCs to flush the disk buffers and (b) running a full fsck prior to mounting the volumes as a matter of course.
Does anybody have any ideas/experience here ?
Sorry for the long post - but I'm an HP guy historically and I'm just getting to grips with this Flashcopy stuff; and unfortunately IBM won't ratify our 'home grown' scripts under our support agreement.
I will not duplicate this posting at this time but if anybody thinks it would be better in the AIX forum please let me know.
Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Original materials copyright belong to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: September 12, 2017