From Wikipedia, the free encyclopedia
In the Oracle environment, redo logs comprise files in a proprietary format which log a history of all changes made to the database. Each redo log file consists of redo records. A redo record, also called a redo entry, holds a group of change-vectors, each of which describes or represents a change made to a single block in the database.
For example, if a user UPDATEs a salary value in a table containing employee-related data, the DBMS generates a redo record containing change-vectors that describe changes to the data segment block for the table. If the user then COMMITs the update, Oracle generates another redo record and assigns the change a "system change number" (SCN). A single transaction may involve multiple changes to data blocks, so it may have more than one redo record.
Redo log files (sometimes simply called "log files") contain redo entries for both committed and uncommitted transactions in medium-term storage.
Oracle redo log files record information about the database changes made by transactions.
Before a user receives a "Commit complete" message, the system must first successfully write the new or changed data to a redo log file.
If a database crashes, the recovery process has to apply all transactions, both uncommitted as well as committed, to the data-files on disk, using the information in the redo log files. Oracle must re-do all redo-log transactions that have both a begin and a commit entry, and it must undo all transactions that have a begin entry but no commit entry. (Re-doing a transaction in this context simply means applying the information in the redo log files to the database; the system does not re-run the transaction itself.) The system thus re-creates committed transactions by applying the “after image” records in the redo log files to the database, and undoes incomplete transactions by using the “before image” records in the undo tablespace.
Given the verbosity of the logging, Oracle Corporation provides methods for archiving redo logs (archive logs), which in turn can feed into data backup solutions and standby databases. Oracle rotates its redo log groups, so it will eventually overwrite an older group. To prevent this from happening, production databases should be run in archive log mode. In this mode, Oracle ensures that online redo log files are not overwritten until they have been backed up.
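A quick way to confirm the archiving mode and the state of the online redo log groups is to query the standard dynamic performance views (a sketch; exact columns and output vary slightly by Oracle version):

```sql
-- Is the database in ARCHIVELOG or NOARCHIVELOG mode?
SELECT log_mode FROM v$database;

-- List the online redo log groups; a group with ARCHIVED = 'NO'
-- cannot be safely overwritten yet in archive log mode
SELECT group#, sequence#, archived, status FROM v$log;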
The existence of a detailed series of individually logged transactions and actions provides the basis of several data-management enhancements such as Oracle Flashback, log-mining and point-in-time recovery.
For database tuning purposes, efficiently coping with redo logs requires plentiful and fast-access disk.
Sanjay Kumar Suri
It seems that the redo log block size is constant for a given OS.
My experience on redo log is:
1. Keep log_buffer = 128 KB * number of CPUs in the server.
2. Redo log buffer performance can be improved by (a) making the redo log buffer bigger, or (b) reducing log generation.
3. v$system_event can be used to monitor the activity of the redo logs.
4. Keep the redo logs on faster (higher-RPM) disks that hold only redo logs, and keep each redo log and its mirror on separate disks.
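As a sketch of point 3, the redo-related wait events can be pulled from v$system_event like this (cumulative since instance startup; what counts as "high" is situational):

```sql
-- Redo-related wait events; heavy "log file sync" waits
-- usually point at slow commit (LGWR write) latency
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'log file%'
ORDER  BY time_waited DESC;
```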
Hein van den Heuvel
Emilio, Steve is NOT talking about the DB block size in general, but about redo specifically. Also, with recent Oracle versions you do not have to recreate the DB; you can just use mixed block sizes if you determine that a larger block size is more suitable for certain tablespaces.
Steve, Don't worry, be happy. Oracle is already doing 'the right thing'. It will use large IOs if a single transaction is putting a lot of stuff there, or if a lot of transactions came in while a prior redo IO was still finishing.
As soon as a transaction (Tx) commits, but no sooner, it will initiate a redo IO, down to 512 bytes if the platform permits. Any and all Tx commits coming in while this IO is outstanding will be grouped in the log buffer and committed as a group in the next IO.
Just check KB/sec divided by IO/sec with iostat, sar, or SAM.
The redolog essentially runs at MB/sec, not IO/sec. The faster the IO system responds, the shorter the IO. If you have the luxury of playing with an IO subsystem to select varying drives (Single vs striped, controller vs direct attach, Writeback cache versus write through) then this is readily confirmed.
During my benchmarks I have that luxury and have measured this. For example, with write-back caching on I might see 500 IO/sec @ 2 KB/IO = 1 MB/sec. Switch it off and get, for example, 20 IO/sec @ 50 KB... still 1 MB/sec.
In my case, as you suspect, when it was running at larger IOs, the overall throughput was marginally (1-3%) higher and the response time lower.
In our worst (Best!) cases under total max-out we would even see 500KB sized redo IO (and the blocksize was not set to 500KB :-).
You can force the issue somewhat by controller WB caching as I indicated, or by using direct attached storage, or by just using 1 or 2 physical disks for the logfile behind a controller unit. I like the latter setup as for most cases (up to 10mb/sec redo!) a single disk is fast enough anyway, and with the 'linear write' it is very effective.
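The KB/sec-divided-by-IO/sec check described above can also be done from inside the database rather than with iostat; this sketch divides total redo bytes by the number of LGWR writes. It is only an approximation, since the 'redo size' statistic counts redo generated rather than bytes physically written:

```sql
-- Approximate average redo write size in bytes:
-- total redo generated / number of LGWR write operations
SELECT ROUND(
         (SELECT value FROM v$sysstat WHERE name = 'redo size') /
         (SELECT value FROM v$sysstat WHERE name = 'redo writes')
       ) AS avg_redo_write_bytes
FROM dual;
```

A small average here under heavy load suggests many single-commit writes; a large one suggests group commits or large transactions, matching the behavior described above.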
Hope this helps,
LGWR writes to the online redo log files sequentially: when the current online redo group is filled, LGWR begins writing to the next group. When the last is filled, LGWR returns to the first group and starts writing again.
The DBA can also force log switches. Each time a log switch occurs and LGWR begins writing a new log group, the Oracle server assigns a number known as the log sequence number to identify the set of redo entries. When a log switch occurs, an event called a checkpoint is initiated.
A log switch is the event during which LGWR stops writing to one online redo log group and starts writing to another. You can force one with:
ALTER SYSTEM SWITCH LOGFILE;
During a checkpoint, a number of dirty database buffers covered by the log being checkpointed are written to the datafiles by DBWn. The number of buffers written by DBWn is determined by the parameter FAST_START_IO_TARGET, if specified. A checkpoint occurs:
- at every log switch;
- when forced by LOG_CHECKPOINT_INTERVAL or LOG_CHECKPOINT_TIMEOUT;
- when forced by the DBA with ALTER SYSTEM CHECKPOINT.
Information about each checkpoint is recorded in the alert file if LOG_CHECKPOINTS_TO_ALERT is set to TRUE.
Jul 12, 2010 | Server Fault
I have a Sun M4000 connected to an EMC CX4-120 array with a write-heavy database. Writes peak at around 1200 IO/s and 12MB/s.
According to EMC, I am saturating the write cache on the EMC array.
I think the simplest solution is to move the redo logs to a DRAM based SSD. This will reduce the load on the EMC array by half and apps won't be seeing log buffer waits. Yes, the DBWR may become a bottleneck, but the apps won't be waiting for it (like they do on redo commits!)
I currently cycle through about 4 4GB redo logs, so even 20GB or so of SSD would make a big difference. Since this is short-term storage and is constantly being overwritten, Flash based SSDs probably aren't a great idea.
The M4000 doesn't have any extra drive slots, so a PCI-E card would be perfect. I could go external or move boot volumes to the EMC and free up the local drives.
Sun sells a Flash Accelerator F20 PCIe card, but that seems to be a cache for some SATA disks, not a DRAM SSD solution. Details are sketchy, it doesn't list the M4000 as supported, and I'm tired of fighting Sun's phone tree looking for human help. :(
First, I guess you have very few disks in the array. 1200 IOPS can easily be supported by 12 spinning disks (100 IOPS per disk is very reasonable). If the cache can't handle it, it means that your sustained write rate of 1200 IOPS is way more than your disks can support.
Anyway, SSD for the redo logs isn't likely to help. First, do your sessions mostly wait on the COMMIT statement? Check the top wait events in statspack / AWR to verify. I would guess ~95% of your I/O is not to the redo logs at all. For example, a single-row insert into a table with 5 indexes can do 1 I/O to read a table block (one that has space for the row), read 5 index blocks (to update them), write 1 data block, 1 undo block and 5 index blocks (or more, if non-leaf blocks are updated), and 1 redo block. So check statspack and see your wait events; you are likely waiting a lot on both READs and WRITEs for data / indexes. Waiting for reads slows down the INSERT, and the WRITE activity makes READs even slower: it is the same disks. (BTW, do you really need all the indexes? Dropping those that aren't must-haves will accelerate the inserts.)
Another thing to check is the RAID definition: is it RAID 1 (mirroring, where each write is two writes) or RAID 5 (where each write is two reads and two writes for the parity calculation)? RAID 5 is way slower under a write-intensive load.
BTW, if the disks can't handle the write load, DBWR will be a bottleneck. Your SGA will fill with dirty blocks, and you will not have room left to read new blocks (like index blocks that need to be processed / updated) until DBWR can write some dirty blocks to disk. Again, check the statspack / AWR report / ADDM to diagnose the bottleneck, typically based on the top 5 wait events.
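To follow the advice above without a full statspack report, the top wait events can be sampled directly from the dynamic performance views (a rough sketch for 10g and later, where v$system_event carries a wait_class column; these numbers are cumulative since startup, so take two snapshots and diff them for an interval view, which is what AWR/statspack do properly):

```sql
-- Top 5 non-idle wait events since instance startup
SELECT *
FROM  (SELECT event, total_waits, time_waited
       FROM   v$system_event
       WHERE  wait_class <> 'Idle'
       ORDER  BY time_waited DESC)
WHERE rownum <= 5;
```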
Sep 21, 2006
I recently had a case with a customer concerned over the time taken to dd an Oracle redo log file even though Oracle had been shutdown. Finding this strange I thought I'd use TNF probes to see what was happening.
At any given time, Oracle uses only one of the online redo log files to store redo records written from the redo log buffer.
The online redo log file that Log Writer (LGWR) is actively writing to is called the current online redo log file. Online redo log files that are required for instance recovery are called active online redo log files. Online redo log files that are not required for instance recovery are called inactive.
During testing, the easiest way to determine if the current online redo log configuration is satisfactory is to examine the contents of the LGWR trace file and the database's alert log.
If messages indicate that LGWR frequently has to wait for a group because a checkpoint has not completed or a group has not been archived, add groups.
LGWR writes to online redo log files in a circular fashion. When the current online redo log file fills, LGWR begins writing to the next available online redo log file.
When the last available online redo log file is filled, LGWR returns to the first online redo log file and writes to it, starting the cycle again. The numbers next to each line indicate the sequence in which LGWR writes to each online redo log file.
Filled online redo log files are available to LGWR for reuse depending on whether archiving is enabled or disabled:
- If archiving is disabled (NOARCHIVELOG mode), a filled online redo log file is available once the changes recorded in it have been written to the datafiles.
- If archiving is enabled (ARCHIVELOG mode), a filled online redo log file is available to LGWR once the changes recorded in it have been written to the datafiles and once the file has been archived.
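The states described above map directly onto the STATUS column of v$log, so reuse eligibility can be checked with a simple query (a sketch; column meanings per the standard dynamic performance views):

```sql
-- CURRENT  = the group LGWR is writing now
-- ACTIVE   = still needed for instance recovery (or not yet archived)
-- INACTIVE = available to LGWR for reuse
SELECT group#, sequence#, bytes, archived, status
FROM   v$log;
```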
Operations on Oracle log files :
- Forcing log file switches:
ALTER SYSTEM switch logfile;
ALTER SYSTEM checkpoint;
- Clear A Log File If It Has Become Corrupt:
ALTER DATABASE CLEAR LOGFILE GROUP group_number;
- This statement overcomes two situations where dropping redo logs is not possible: when there are only two log groups, and when the corrupt redo log file belongs to the current group:
ALTER DATABASE CLEAR LOGFILE GROUP 4;
- Clear A Log File If It Has Become Corrupt And Avoid Archiving:
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP group_number;
- Use this version of clearing a log file if the corrupt log file has not been archived:
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
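Before clearing a group, it is worth confirming that it is not the CURRENT group and checking whether it has been archived, since that determines which form of the statement applies (a sketch):

```sql
-- Identify the corrupt group's state before clearing it:
-- a CURRENT group cannot be cleared, and ARCHIVED = 'NO'
-- means the UNARCHIVED form is required
SELECT group#, status, archived FROM v$log;
```

If the UNARCHIVED form is used, a full backup should be taken afterwards, since media recovery through the cleared log is no longer possible.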
- Privileges Related To Managing Log Files:
- Init File Parameters Related To Log Files:
log_checkpoint_timeout ... set to 0
- Managing Log File Members:
ALTER DATABASE ADD LOGFILE MEMBER 'log_member_path_and_name'
TO GROUP group_number;
- Adding log file group members:
ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;
- Dropping log file group members:
ALTER DATABASE DROP LOGFILE MEMBER 'log_member_path_and_name';
ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';
- To create a new group of online redo log files, use the SQL statement ALTER DATABASE with the ADD LOGFILE clause:
The following statement adds a new group of redo Oracle log files to the database:
ALTER DATABASE ADD LOGFILE ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 500K;
An Oracle database can run in one of two modes. By default, the database is created in NOARCHIVELOG mode. When in NOARCHIVELOG mode the database runs normally, but there is no capacity to perform any type of point-in-time recovery operations or online backups. Thus, you have to shut down the database to back it up, and when you recover the database you can only recover it to the point of the last backup. While this might be fine for a development environment, the big corporate types tend to frown when a week's worth of current production accounting data is lost forever.
Using the ARCHIVELOG Mode
So, if you wish to avoid the wrath of the CEO and angry end-users, you will want to run Oracle in ARCHIVELOG mode. In ARCHIVELOG mode, the database will make copies of all online redo logs after they are filled. These copies are called archived redo logs. The archived redo logs are created via the ARCH process. The ARCH process copies the archived redo log files to one or more archive log destination directories.
The use of ARCHIVELOG mode requires some configuration of the database. First you must put the database in ARCHIVELOG mode and you must also configure the ARCH process, and prepare the archived redo log destination directories.
There are some down sides to running the database in ARCHIVELOG mode. For example, once an online redo log has been filled, it cannot be reused until it has been archived. If Oracle cannot archive the online redo log (for example, the destination directory for the archived redo logs is filled up), it will switch to the next online redo log and keep working. At the same time, Oracle will continue to try to archive the log file.
Unfortunately, once the database runs out of available online redo logs, we have a problem. If the filled log files cannot be archived, Oracle would have to overwrite them, which is not good: we would lose the data that was in those files. As a result, in an effort to protect the database, Oracle will not overwrite data in an online redo log file until that log file has been archived. Until the file has been archived, the database will simply stop processing user requests. Once the log file has been archived, the database will be freed, and processing can proceed as normal.
You can see how an incorrect configuration of the database in ARCHIVELOG mode can eventually lead to the database suspending operations because it cannot archive the current online redo logs.
In the next sections we will look at how to configure the database for ARCHIVELOG mode and how to put the database in ARCHIVELOG mode.
Configuring the database for ARCHIVELOG Mode
One of the main features of a database that is in ARCHIVELOG mode is that it generates copies of the online redo logs called archived redo logs. By default in Oracle Database 10g Oracle will send archived redo logs to the flash recovery area and we recommend this configuration.
To properly setup the flash recovery area, you will want to set two parameters as seen in the following list:
db_recovery_file_dest - ORACLE_BASE/flash_recovery_area - This is the location of the flash recovery area.
db_recovery_file_dest_size - 2g - This is the maximum size that can be used by the flash recovery area. If this size limit is exceeded, you must clear out space or database operations will eventually stall.
Use the alter system command to set these parameters if you do not want to use the default values. You will find examples of the use of the alter system command to change parameters earlier in this chapter. We recommend that the db_recovery_file_dest parameter be set to a directory location that is separate from the location of the Oracle software, your redo logs, and your data files. You do not want to accidentally fill up ORACLE_HOME or cause performance issues due to contention.
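A sketch of setting these two parameters with alter system (the path below is purely a hypothetical example; choose a location separate from ORACLE_HOME, the redo logs, and the data files, and note that the size must be set before the destination):

```sql
-- Hypothetical example values; adjust for your system.
-- db_recovery_file_dest_size must be set before db_recovery_file_dest.
ALTER SYSTEM SET db_recovery_file_dest_size = 2G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u02/flash_recovery_area' SCOPE=BOTH;
```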
When the flash recovery area is configured, a directory for the database will be created in the location defined by the db_recovery_file_dest parameter. For example, our database has a directory of its own under this location.
Under this directory are individual directories for various file types such as ARCHIVELOG where the archived redo logs will reside.
In earlier versions of Oracle you had to enable a special Oracle process called ARCH by setting another parameter. Oracle Database 10g does not require this. When the database is in ARCHIVELOG mode, it will start the ARCH process automatically.
Putting the database in ARCHIVELOG Mode
Once you have configured the flash recovery area, you can put the database in ARCHIVELOG mode. Unfortunately, this requires that the database be shut down first with the shutdown command (however, from earlier in the chapter, we note that shutdown immediate is the best option). Once you have shut down the database, start it in mount mode with the startup mount command, then put the database in ARCHIVELOG mode, and finally open the database. Here is an example of how this all works from the command line:

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area  272629760 bytes
Fixed Size                   788472 bytes
Variable Size             103806984 bytes
Database Buffers          167772160 bytes
Redo Buffers                 262144 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
Once the database is in ARCHIVELOG mode, it will start generating archived redo logs. It's always a good idea to make sure that the archived redo logs are getting generated. To do this, first force a log switch with the alter system switch logfile command. Then check the flash recovery area to make sure an archived redo log is created.
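This check can also be done entirely from SQL*Plus; after the forced switch, v$archived_log should show a newly completed archive (a sketch):

```sql
-- Force a log switch, then confirm an archived redo log was produced
ALTER SYSTEM SWITCH LOGFILE;

SELECT sequence#, name, completion_time
FROM   v$archived_log
ORDER  BY completion_time DESC;
```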
Note: the Oracle flash recovery area has since been renamed the fast recovery area.
The archived redo logs will be in the flash recovery area in the ARCHIVELOG directory. Under that directory you will find individual directories, each representing a different date, such as 2005_03_09 for March 9, 2005. The directory structure on my computer looks like this:
It might look a little different on your computer (sometimes Oracle does different things on different Operating Systems) but it should be pretty easy to figure it out.
Now, go to the directory that is named for today's date. In my case, I'll go to the 2005_03_16 directory. Next, do a directory listing in that directory and you should see archived redo logs. Here is an example of what you will see on your computer:

C:\Oracle\product\flash_recovery_area\BOOKTST\ARCHIVELOG\2005_03_16>dir
 Volume in drive C has no label.
 Volume Serial Number is 50FD-2353
 Directory of c:\Oracle\product\flash_recovery_area\BOOKTST\ARCHIVELOG\2005_03_16

If you are seeing files generated here, you know archiving is working all right.
Archived Redo Log Data Dictionary Views
Oracle provides data dictionary views for the archived redo logs as seen in this list:
- v$archived_log - Information about archived redo logs.
- v$parameter - Shows the location of the flash recovery area where archived redo logs are created.
- v$log_history - Contains information on previous redo logs
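For example, the archiving destination mentioned under v$parameter and the recent log switch history can be read back like this (a sketch; log_archive_dest_1 is only relevant if an explicit archive destination is configured):

```sql
-- Where archived redo logs are being written
SELECT name, value
FROM   v$parameter
WHERE  name IN ('db_recovery_file_dest', 'log_archive_dest_1');

-- Recent log switch history
SELECT sequence#, first_time
FROM   v$log_history
ORDER  BY first_time DESC;
```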
NOTE: In RAC, a separate set of archive log files is created by each instance. Since each RAC instance has its own redo log files, the corresponding archive log files are produced when a log switch takes place. The archive log files may be written to a local file system or to a cluster file system; Oracle does not insist on a particular type of file system. Writing to a clustered file system has the added advantage that the archives are available to all the nodes. More information on RAC archive log files is available HERE.
From the Oracle® Database Installation Guide 10g Release 1 (10.1) for UNIX Systems: AIX-Based Systems, hp HP-UX PA-RISC (64-bit), hp Tru64 UNIX, Linux x86, and Solaris Operating System (SPARC), the stated benefits include the ability to install different products with the same release number in the same Oracle base directory as well as the ability to install the same product more than once in the same Oracle base directory.
Also, see these related OFA notes:
Oracle redo log files do not dynamically grow when more space is needed for redo entries; they have a fixed size (on SAP systems, typically 50 MB). When the current online redo log file becomes full, the log writer process closes this file and starts writing into the next one. This is called a log switch.
Offline redo log files are written to the oraarch directory; their names are specified with help of Oracle instance configuration parameters, so the name <DBSID>arch1_<LSN>.dbf is just an example.
Moreover, offline redo log files should be stored on a mirrored disk to prevent loss of redo information. A RAID system can be used for this purpose. If you lose a disk containing both offline redo logs and data files, complete recovery is no longer possible. Therefore, offline redo logs and data files should be on different disks!
September 5, 2007
The redo log buffer is a circular buffer in the SGA that holds information about changes made to the database. This information is stored in redo entries. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations. Redo entries are used for database recovery, if necessary.
Redo entries are copied by Oracle database processes from the user's memory space to the redo log buffer in the SGA. The redo entries take up continuous, sequential space in the buffer. The background process LGWR writes the redo log buffer to the active redo log file (or group of files) on disk.
The initialization parameter LOG_BUFFER determines the size (in bytes) of the redo log buffer. In general, larger values reduce log file I/O, particularly if transactions are long or numerous. The default setting is either 512 kilobytes (KB) or 128 KB times the setting of the CPU_COUNT parameter, whichever is greater.
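The configured buffer size can be compared against the 128 KB * CPU_COUNT heuristic directly from the parameter views (a sketch; log_buffer is reported in bytes):

```sql
-- Compare the configured log buffer with the CPU-based default formula
SELECT name, value
FROM   v$parameter
WHERE  name IN ('log_buffer', 'cpu_count');
```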
The log writer process (LGWR) is responsible for redo log buffer management: writing the redo log buffer to a redo log file on disk. LGWR writes all redo entries that have been copied into the buffer since the last time it wrote.
LGWR writes one contiguous portion of the buffer to disk. LGWR writes:
- A commit record when a user process commits a transaction
- Redo log buffers:
  - every three seconds
  - when the redo log buffer is one-third full
  - when a DBWn process writes modified buffers to disk, if necessary
Note: Before DBWn can write a modified buffer, all redo records associated with the changes to the buffer must be written to disk (the write-ahead protocol). If DBWn finds that some redo records have not been written, it signals LGWR to write the redo records to disk and waits for LGWR to complete writing the redo log buffer before it can write out the data buffers.
LGWR writes synchronously to the active mirrored group of redo log files. If one of the files in the group is damaged or unavailable, LGWR continues writing to other
files in the group and logs an error in the LGWR trace file and in the system alert file. If all files in a group are damaged, or the group is unavailable because it has not been archived, LGWR cannot continue to function.
When a user issues a COMMIT statement, LGWR puts a commit record in the redo log buffer and writes it to disk immediately, along with the transaction's redo entries. The corresponding changes to data blocks are deferred until it is more efficient to write them. This is called a fast commit mechanism. The atomic write of the redo entry containing the transaction's commit record is the single event that determines the transaction has committed. Oracle returns a success code to the committing transaction, although the data buffers have not yet been written to disk.
Note: Sometimes, if more buffer space is needed, LGWR writes redo log entries before a transaction is committed. These entries become permanent only if the transaction is later committed.
When a user commits a transaction, the transaction is assigned a system change number (SCN), which Oracle records along with the transaction's redo entries in the redo log. SCNs are recorded in the redo log so that recovery operations can be synchronized in Real Application Clusters and distributed databases.
In times of high activity, LGWR can write to the redo log file using group commits. For example, assume that a user commits a transaction. LGWR must write the transaction's redo entries to disk, and as this happens, other users issue COMMIT statements. However, LGWR cannot write to the redo log file to commit these transactions until it has completed its previous write operation. After the first transaction's entries are written to the redo log file, the entire list of redo entries of waiting transactions (not yet committed) can be written to disk in one operation, requiring less I/O than handling transaction entries individually. In this way, Oracle minimizes disk I/O and maximizes the performance of LGWR. If requests to commit continue at a high rate, then every write by LGWR from the redo log buffer can contain multiple commit records.
Last modified: March 12, 2019