You should be very careful with optimization, and the main criterion is "not to do any harm" rather than "achieve spectacular improvements".
The best performance first of all means avoiding blunders in installation and configuration. In a way, it comes from the unnecessary work you don't do.
Much depends on the level of qualification of the particular DBA or system administrator: the higher the qualification, the more probable it is that the actions taken will have a positive, not negative, effect.
Excessive zeal is another danger. The key to database tuning is the ability to measure performance objectively. Good ideas without performance measurements often turn out to be useless or even bad ideas.
There is no free lunch: the more optimized a system is, the more specialized it becomes for a particular application; as a result, any change in the application can disproportionately affect performance. We can categorize the effects of performance tuning (aka optimization) in two categories:
There are five major areas of optimization:
As we go down the list, we can generally get higher and higher returns on our efforts, but the risks also generally increase. Also, not all applications are open source, so application-level optimization is often limited by the constraints of the particular implementation.
The levels of optimization available to an organization usually depend on the qualification of its staff: the higher the qualification, the more levels of optimization are available and the greater the potential return.
Here is a relevant quote from the Oracle Database Performance Tuning FAQ (Oracle FAQ):
Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.
- Database Design (if it's not too late): Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the "data access path" in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.
- Application Tuning: Experience showed that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.
- Memory Tuning: Properly size your database buffers (shared_pool, buffer cache, log buffer, etc) by looking at your wait events, buffer hit ratios, system swapping and paging, etc. You may also want to pin large objects into memory to prevent frequent reloads.
- Disk I/O Tuning: Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.
- Eliminate Database Contention: Study database locks, latches and wait events carefully and eliminate where possible.
- Tune the Operating System: Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.
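As a starting point for the contention step above, one can look at the top wait events system-wide. The following is a minimal sketch against the standard V$SYSTEM_EVENT view; the list of idle events worth filtering out varies by Oracle release, so the exclusions here are only illustrative:

```sql
-- Top 10 wait events by total time waited (a few common idle events excluded).
SELECT *
  FROM (SELECT event, total_waits, time_waited, average_wait
          FROM v$system_event
         WHERE event NOT LIKE 'SQL*Net%'
           AND event NOT IN ('pmon timer', 'smon timer', 'rdbms ipc message')
         ORDER BY time_waited DESC)
 WHERE ROWNUM <= 10;
```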
What tools/utilities does Oracle provide to assist with performance tuning?
Oracle provides the following tools/utilities to assist with performance monitoring and tuning:
- ADDM (Automated Database Diagnostics Monitor) introduced in Oracle 10g
- Oracle Enterprise Manager - Tuning Pack (cost option)
- Old UTLBSTAT.SQL and UTLESTAT.SQL - Begin and end stats monitoring
When is cost based optimization triggered?
It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, and optimizer dynamic sampling isn't performed, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it won't help much to have just the larger tables analyzed.
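Since a single unanalyzed table can push a whole statement back to rule-based optimization, it is worth checking for such tables. A minimal sketch, assuming access to DBA_TABLES (the schema name SCOTT is a placeholder):

```sql
-- Tables in the SCOTT schema (hypothetical) that have no optimizer statistics.
SELECT owner, table_name
  FROM dba_tables
 WHERE owner = 'SCOTT'
   AND last_analyzed IS NULL;
```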
Generally, the CBO can change the execution plan when you:
- Change statistics of objects by doing an ANALYZE;
- Change some initialization parameters (for example: hash_join_enabled, sort_area_size, db_file_multiblock_read_count).
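On more recent releases, DBMS_STATS is the recommended way to (re)gather the statistics that drive such plan changes, rather than ANALYZE. A minimal sketch; the schema and table names are placeholders:

```sql
-- Gather statistics for one table, letting Oracle choose the sample size.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'SCOTT',          -- hypothetical schema
    tabname => 'EMP',            -- hypothetical table
    cascade => TRUE);            -- also gather stats on the table's indexes
END;
/
```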
How can one optimize %XYZ% queries?
It is possible to improve %XYZ% (wildcard search) queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints.
If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.
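As an illustration, if the searched column is indexed and the query selects only that column, a hint such as INDEX_FFS can push the optimizer toward a fast full scan of the index. This is a sketch with hypothetical table and index names:

```sql
-- EMP and EMP_NAME_IDX are hypothetical; INDEX_FFS requests a fast full
-- scan of the index instead of a full scan of the (larger) table.
SELECT /*+ INDEX_FFS(e emp_name_idx) */ last_name
  FROM emp e
 WHERE last_name LIKE '%SON%';
```

Note that a fast full scan is only possible when the index alone can answer the query, i.e. all referenced columns are in the index.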
Where can one find I/O statistics per table?
The STATSPACK and UTLESTAT reports show I/O per tablespace. However, they do not show which tables in the tablespace have the most I/O operations.
The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information. After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information.
For more details, look at the header comments in the catio.sql script.
My query was fine last week and now it is slow. Why?
The likely cause is that the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.
Some factors that can cause a plan to change are:
- Which tables are currently analyzed? Were they previously analyzed? (ie. Was the query using RBO and now CBO?)
- Has OPTIMIZER_MODE been changed in INIT<SID>.ORA?
- Has the DEGREE of parallelism been defined/changed on any table?
- Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
- Have the statistics changed?
- Has the SPFILE/ INIT<SID>.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
- Has the INIT<SID>.ORA parameter SORT_AREA_SIZE been changed?
- Have any other INIT<SID>.ORA parameters been changed?
What do you think the plan should be? Run the query with hints to see if this produces the required performance.
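A current plan for such a comparison can be captured with EXPLAIN PLAN and displayed with DBMS_XPLAN (available from 9i Release 2 onward). The query below is only a placeholder for the offending statement:

```sql
-- Capture the current plan for the problem query (EMP/DEPTNO are placeholders).
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;

-- Display the captured plan.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```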
Does Oracle use my index or not?
One can use the index monitoring feature to check if indexes are used by an application or not. When the MONITORING USAGE property is set for an index, one can query v$object_usage to see if the index is being used or not. Here is an example:

SQL> CREATE TABLE t1 (c1 NUMBER);
Table created.

SQL> CREATE INDEX t1_idx ON t1(c1);
Index created.

SQL> ALTER INDEX t1_idx MONITORING USAGE;
Index altered.

SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;

TABLE_NAME                     INDEX_NAME                     MON USE
------------------------------ ------------------------------ --- ---
T1                             T1_IDX                         YES NO

SQL> SELECT * FROM t1 WHERE c1 = 1;
no rows selected

SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;

TABLE_NAME                     INDEX_NAME                     MON USE
------------------------------ ------------------------------ --- ---
T1                             T1_IDX                         YES YES
To reset the values in the v$object_usage view, disable index monitoring and re-enable it:

ALTER INDEX indexname NOMONITORING USAGE;
ALTER INDEX indexname MONITORING USAGE;
Why is Oracle not using the damn index?
This problem normally only arises when the query plan is being generated by the Cost Based Optimizer (CBO). The usual cause is that the CBO calculates that executing a full table scan would be faster than accessing the table via the index. Fundamental things that can be checked are:
- USER_TAB_COLUMNS.NUM_DISTINCT - This column defines the number of distinct values the column holds.
- USER_TABLES.NUM_ROWS - If NUM_DISTINCT = NUM_ROWS, then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, thereby making the index less desirable.
- USER_INDEXES.CLUSTERING_FACTOR - This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
- Decrease the INIT<SID>.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT - A higher value will make the cost of a FULL TABLE SCAN cheaper.
Remember that you MUST supply the leading column of an index, for the index to be used (unless you use a FAST FULL SCAN or SKIP SCANNING).
There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If from checking the above you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.
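The index-hint experiment described above can be run as follows in SQL*Plus; the table and index names are hypothetical, and AUTOTRACE requires the PLUSTRACE role:

```sql
-- Compare execution statistics with the index forced (names are hypothetical).
SET AUTOTRACE ON STATISTICS

SELECT /*+ INDEX(e emp_dept_idx) */ *
  FROM emp e
 WHERE deptno = 10;

SET AUTOTRACE OFF
```

Running the same statement without the hint and comparing the consistent gets and physical reads reported by AUTOTRACE shows whether the CBO's choice was justified.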
When should one rebuild an index?
You can run the ANALYZE INDEX <index> VALIDATE STRUCTURE command on the affected indexes - each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The 'badness' of the index can then be judged by the ratio of 'DEL_LF_ROWS' to 'LF_ROWS'.
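The ratio described above can be computed directly from INDEX_STATS right after the ANALYZE. A sketch with a hypothetical index name; the 15-20% threshold is a common rule of thumb, not an Oracle-documented limit:

```sql
ANALYZE INDEX emp_name_idx VALIDATE STRUCTURE;   -- index name is hypothetical

-- Percentage of deleted leaf rows; many DBAs consider a rebuild
-- when this exceeds roughly 15-20%.
SELECT name,
       del_lf_rows,
       lf_rows,
       ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 2) AS del_pct
  FROM index_stats;
```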
How does one tune Oracle Wait event XYZ?
Here are some of the wait events from V$SESSION_WAIT and V$SYSTEM_EVENT views:
- db file sequential read: Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
- buffer busy waits: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i); analyze contention from SYS.V$BH
- log buffer space: Increase LOG_BUFFER parameter or move log files to faster disks
What is the difference between DBFile Sequential and Scattered Reads?
Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in hundredths of a second for Oracle 8i releases and below, and in thousandths of a second for Oracle 9i and above. Most people confuse these events with each other because they think of how data is read from disk. Instead they should think of how data is read into the SGA buffer cache.
db file sequential read:
A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.
db file scattered read:
Similar to db file sequential reads, except that the session is reading multiple data blocks and scatters them into different, discontinuous buffers in the SGA. This statistic NORMALLY indicates disk contention on full table scans. Rarely, data from full table scans could fit into a contiguous buffer area; these waits would then show up as sequential reads instead of scattered reads.
The following query shows average wait time for sequential versus scattered reads:

prompt "AVERAGE WAIT TIME FOR READ REQUESTS"
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
  from sys.v_$system_event a, sys.v_$system_event b
 where a.event = 'db file sequential read'
   and b.event = 'db file scattered read';
The database server's primary function is to store, search, retrieve, and update data from disk. Examples of Database engines include IBM DB2, Microsoft SQL Server, and Oracle. Due to the high number of random I/O requests that database servers are required to do and the computation intensive activities that occur, the potential areas that have the most impact on performance are:
A balanced system is especially important: for example, when adding CPUs, consider upgrading other subsystems as well, such as increasing memory and ensuring that disk resources are adequate. The key subsystems that influence database performance in servers are:
Processing power can be an important factor for database servers because some database queries and update operations require intensive CPU time. The database replication process also requires considerable amounts of CPU cycles.
Database servers are multi-threaded applications, so SMP-capable systems provide improved performance, scaling to 16-way and beyond. L2 cache size is also important due to the high hit ratio (the proportion of memory requests that are filled from the much faster cache instead of from memory). For example, SQL Server's L2 cache hit ratio approaches 90%.
Memory speed is very important (the faster the better). Current x86 servers can use 667MHz DIMMs. New memory technologies include the Fully Buffered DIMM (FBD), providing higher capacity and bandwidth, improved flexibility and memory
There are also several subsystems that barely affect server performance. One is the video subsystem, which in a server is relatively insignificant.
Here are some Oracle database tuning topics that I have picked out from some of my classes and other sources. They are in no particular order - just whatever came across a listserv or prompted me from a book or manual. Tuning is an ongoing process, but, don't let it dominate your life! You might want to check out your configuration every month or so, or when there has been some large structural change made to it, or if your users are noticing a slowdown, but, it's usually not something that demands your constant attention.
I've tried to focus on statistics that are immediately available in tables, rather than on statistics-gathering routines such as utlbstat/utlestat, since the reports from those contain mostly information that you will probably never use and will have to sift through to find what you are really looking for. Note that if you have just started up your Oracle database instance, this information will probably be irrelevant; you should wait several hours after startup to get a representative sample of your users' interactions with the database. Also, be aware that some statistics may be expressed as a "hit ratio" while others may be expressed as a "miss ratio"; they are different, and you can convert one to the other by subtracting it from 1. All of this information is generic to Oracle. Check back here again for additions and updates!
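As an example of a ratio that is immediately available from the V$ views, the classic buffer cache hit ratio can be computed from V$SYSSTAT. The usual caveat applies: ratios can mislead, so treat this as one indicator among many:

```sql
-- Buffer cache hit ratio = 1 - physical reads / (db block gets + consistent gets)
SELECT ROUND(1 - phy.value / NULLIF(db.value + con.value, 0), 4) AS cache_hit_ratio
  FROM v$sysstat phy, v$sysstat db, v$sysstat con
 WHERE phy.name = 'physical reads'
   AND db.name  = 'db block gets'
   AND con.name = 'consistent gets';
```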
Description: A tutorial about one of the really hidden features of Solaris - CacheFS.
CacheFS is something similar to a caching proxy, but this proxy doesn't cache web pages; it caches files from another filesystem.
Contact: joerg.moellenkamp [ at ] sun.com
Recommended Server Systems, 2008 Q3 - Dunnington six-core
Yet another update with the publication of TPC results for the Intel X7460 six-core (Dunnington).
X7460 2.67GHz 3x3M L2, 16M L3
Dell and HP list availability as Sep 15, 2008; IBM lists availability on the x3950M2 as Dec 10.
Dell R900 with 4 x X7460, 2.67GHz, 6 core, 16M L3, $17,195
HP DL580G5 with 4 x X7460, 2.67GHz 6 core, 16M L3 $19,151
I think the IBM x3950M2 with 4 x X7460 is $41K (understanding this system can be expanded to 16 sockets, and hence has higher cost structure)
Tukwila Quad Core Itanium due in Q1 2009?
I am looking over the Intel IDF slides on Tukwila. It is a quad-core Itanium, with probably just minor improvements in the core (specifically mentioned is HT) but with an integrated memory controller and the QuickPath Interconnect (QPI) replacing the FSB. Tukwila will be 65nm when the first 45nm processors are 1 year old, meaning it is really 2 years late. (What I mean by this is that a 65nm quad-core Itanium could have been built in late 2006/early 2007, if the prep work had started early with clear objectives, and its performance would have been earth-shattering relative to x86/64.) Frequency improvements are mentioned over the 90nm dual core, which runs at about the same frequency as the 130nm single-core Madison. Still, Tukwila has a large cache (6M L3 per core), massive bandwidth via QPI for good scaling characteristics (4 full-width plus 2 half-width links, allowing glueless 8-way), 4 DDR memory channels per socket, and HT, which is good for high-call-volume database apps. Intel mentioned about 2X performance, so they are probably targeting 740K tpm-C.
This means we will have the choice of 1) the six-core Dunnington, with the most powerful CPU core on earth (prior to Nehalem) on a weak chipset with 4 memory channels supporting 4 sockets of 24 cores, 2) the new quad-core Itanium with outstanding scaling characteristics plus HT for added throughput, but a weak core, 3) AMD Barcelona, also with very good scaling (8-way glueless), no HT, and a slightly-better-than-weak core, 4) Nehalem, with what will be the most powerful core, the new QPI for good scaling, 3 memory channels per socket, HT, but for the first year, only 2 sockets. Decisions, decisions.
Due out in Q4 2008, the initial Nehalem core will support 2-way, 3 memory channels, 2 QPI. About a year later, Beckton, the MP server (4 sockets and up) version, comes out, with 4 memory channels/socket and 4 QPI.
TPC-C (Windows Server 2003, SQL Server 2005SP2)
4 x Intel X7460 six core 2.67GHz, 634,825 tpm-C
4 x AMD 8360 quad-core 2.5GHz 471,883 tpm-C
4 x Intel X7350 quad-core 2.93GHz 407,079 tpm-C
TPC-E (W2K8, S2K8, Dell PowerEdge R900 results)
4 x Intel X7460 six core 2.67GHz, 671.35 tps-E
4 x Intel X7350 quad core 2.93GHz, 451.29 tps-E
4 x AMD 8360 quad core ??
TPC-H (W2K3, S2K5, SF 100)
4 x Intel X7350 quad core 2.93GHz, 46,034QphH
4 x Intel X7460 six core ??
4 x AMD 8360 quad core ??
8 x Intel X7350 QC 46,034 QphH (IBM x3950M2, W2K3, S2K5 sp2)
8 x AMD 8360 QC 52,860 QphH (HP DL785, W2K8, S2K8)
On TPC-C, the X7460 six core generated a 34% edge over the quad-core AMD and a 56% advantage over the quad-core X7350. Even with the large cache, this is higher than expected. At the time, I suspected HP did not pursue optimization with the 407K result.
On TPC-E, the six core showed a 49% edge over the older quad core.
This could indicate the 7300 chipset with 4 memory channels cannot properly scale the 4 QC 2x4M L2 processors, but can scale the new six core 16M L3 procs.
What's missing are comparable TPC-H numbers, especially at 100GB. The big cache on X7460 helps high call volume apps like TPC-C and E, but not TPC-H. Can the 7300 chipset drive the extra 8 cores (in the X7460 over 16 core in the X7350) in DW queries?
There is an 8xOpteron QC and 8xX7350 at 300GB, but the Opteron is on S2K8 while the X7350 is on S2K5, which has different characteristics.
The X7460 (Dunnington) is a clear winner at 4-way for the high-call-volume apps. There are not sufficient results in DW to make a call. AMD does have a small opening in the fairly low-priced 8-way (compared with hard-NUMA systems).
As much as I would like to buy one of these for my own use in researching SQL Server performance characteristics, I am holding my 2008 budget for a Nehalem system as soon as it comes out, and an SSD array. As soon as I can confirm an SSD can do 10K IOPS on random 8K reads (I see the IDF announcements that the new Intel SSD due early 2009 will do 30K IOPS at 4K), I will get a dozen to see what is involved in reaching 100K IOPS from SQL queries. A few years ago, a quick test on a TMS SSD SAN showed 45K, limited by the SQL Server side CPU. On Nehalem, the big question is whether the Hyper-Threading issues of NetBurst have been fixed.
This is an update to the original post on Server Sizing for SQL Server to reflect the new quad-core Opteron systems. The recommended server systems, as of Q3 2008, for line-of-business database applications are:
2-way Intel Xeon: HP ProLiant ML370G5 and Dell PowerEdge 2900 III
4-way Intel: Dell PowerEdge R900 and HP ProLiant DL580G5
2-way AMD Opteron: Dell PowerEdge R805
4-way AMD Opteron: Dell PowerEdge R905 and HP ProLiant DL585G5
8-way AMD Opteron: HP ProLiant DL 785G5
For 2-socket Xeon, the top processors include the X5460 3.16GHz, and the E5440 2.83GHz, both 2x6M cache.
For 4-socket Xeon, the X7350 2.93GHz 2x4M.
For 4 & 8 socket Opteron, the top processor is the 8360SE at 2.5GHz.
I really do not want to get heavily into Xeon versus Opteron. It is too emotional a subject for many people and too infested with FUD driven by marketing people. This frequently involves valid technical points taken out of context. What it comes down to is the Core 2 architecture has by far the highest SPEC CPU integer (not rate) scores, and will generate the best results in certain categories of performance tests. This is most evident in single large query tests.
At the 4 & 8-socket level, AMD Opteron has the better memory architecture, with 2 DDR2 memory channels per socket, 8 total in a 4-socket system and 16 memory channels in an 8-socket system, compared with 4 in the 4-socket Xeon with the 7300 chipset. This may yield an advantage in full saturation tests, which are more difficult to run. So at the 4-socket level, the difference is Xeon has the more compute power in the processor cores, while Opteron can turn memory faster. What is better: a 400HP engine with a transmission with 70% efficiency or a 310 HP engine with a 90% transmission efficiency?
At the 8-socket level, Opteron is the best choice for most situations. The 8-socket Opteron (Barcelona) system has what is considered to be a soft NUMA architecture, meaning that the memory latency difference between local and remote nodes is low or inconsequential (i.e. do not set the NUMA flag). The IBM and Unisys big iron systems are considered hard NUMA, meaning that memory latency between local and remote nodes is high. Hard NUMA systems can scale, but would most likely require specialized performance analysis skills, which are not easily found.
I rate the HP ProLiant ML370G5 over the PowerEdge 2900 III on technical grounds: more memory sockets, and more PCI-E sockets. On the same grounds, I rate the Dell PowerEdge R805 over the ProLiant DL385G5 because the R805 has 16 DIMM sockets over 8 for the DL385G5.
At the 4-socket level, for Intel platforms, the Dell and HP systems are sufficiently comparable.
Note that Dual-Core Opteron processors are an option in the 2 and 4-socket systems, but not the 8 socket DL785. The original and dual core Opteron processors have up to 3 full (16-bit) HT links, of which 2 connect to other processors, and 1 connects to an IO hub. In a 4-socket system, the processors are at the corners of a square, with each processor connected to processors on the two adjacent corners. Hence there is a far processor two hops away.
The Barcelona quad-core has up to 4 full width HT links, each of which can be split as two half-width (8-bit) HT links. In a 4-socket system, each processor can connect directly to all of the other three sockets with a full HT link, leaving one for IO. In an 8-socket system, each processor can connect directly to all seven other sockets with one half-wide HT link, leaving one half-wide link for IO. The HP 4-socket DL585G5 and 8-socket DL785 only support quad-core Opteron, not dual core, which may indicate the use of three full HT links to processors. The Dell R905 supports both dual and quad-core Opteron, which may indicate an older 2-hop to the far processor.
Finally, until quad-core on a current-generation manufacturing process is available, Itanium has the very high memory capacity (>256-512GB) and I/O bandwidth (>10GB/sec) niche. It could be pointed out that, whatever the criticisms of Itanium since its launch, at the time it was conceived in the 1990s it was a foregone conclusion that RISC would overwhelm x86, which would not be able to benefit from advanced design concepts. Intel was not content to be a johnny-come-lately to the RISC party and, with HP, came up with a better idea than RISC. And yet, what processor today has the best SPEC CPU integer score?
IBM Xeon Systems
I have said before that I do not have recent experience with IBM systems. Just from looking at the IBM redbook on the x3850 M2, it looks very impressive. For this and the x3950 M2, IBM does its own chipset, which supports a NUMA architecture up to 4 nodes of 4 sockets. I do not know if 8 nodes are still supported. What I like about the x3850 M2 memory controller is the 8 DDR2 memory channels. I really think the Intel 7300 with 4 memory channels is too weak to support 4 quad-core, and now 4 six-core, processors. Intel was always afraid of the high-end chipset, obsessively looking at the entry-point price, which drags down the high-end configuration. The IBM x3850 M2 did post a TPC-E of 479.51 tps-E over Dell's 451.29. The IBM system has 128GB memory compared with 64GB for Dell, so it is not clear whether the 8 memory channels contributed.
Open Profiler offers functions for both technical and business users. It can quickly build statistics that reflect the usability of the information within a database table. As it finds corrupt or inconsistent data, it can scrub bad information from database structures. Additionally, Open Profiler simplifies the repetitive nature of statistical analysis while reducing labor costs and errors.
Getting familiar with Open Profiler
I found it easy to use Open Profiler to find the information needed to perform an analysis of columns. For only a few items did I find myself looking at documentation, and unfortunately was unable to find specifics on how certain aspects fit together. For instance, a metadata repository is supposed to keep a history of profiles, but nowhere did I find how this works.
Data profiling with Open Profiler is easy and is accomplished within a few simple steps. Talend provides executables for Windows, Solaris, Mac OS X, Linux, and AIX. Since I was using Linux I invoked TalendOpenProfiler-linux-gtk-x86 from the command prompt. When the application loads it displays a three-column panel. The left column shows a simple tree structure that allows you to explore and select past data profiling runs and drill down into database connections. The middle work panel is where you build, through selection of databases and objects, the analysis work for a data profiling run. The right help panel provides wizards for defining database connections and analysis runs.
To get started, the first thing you need to do is create a connection to a database by traversing the tree structure in the left panel from Metadata -> DB Connections. Right-click on DB Connections and click on "Create a new connection" to bring up a connection definition screen. Here you can enter the standard connection information for your database. For Oracle, for instance, you must enter in a login, password, hostname, port, and system ID (SID). Once you enter the DB Connection information, Open Profiler allows you to begin drilling down the tree structure, depending on your privileged access to the database. You can traverse through owners, tables, columns, attributes, and views for the specified database connection.
Once you've made a database connection you can begin analyzing data in tables, which is the main purpose of this tool. As you traverse the tree structure you find the table you want to analyze, expand the table's columns, then highlight the columns to include in the analysis. Right-click on them to bring up the option to analyze, then the Create New Analysis screen. Here you define the type of statistics to run on your selected columns by selecting indicators for each column, which let you choose the type of statistics to use (Simple, Text, Summary, or Advanced). You can then run the analysis. For assistance with these steps, Talend provides the Open Profiler Getting Started Guide, which walks you through them and gives you an understanding of the options involved. I wish this manual went a bit more in depth, but it will get you started quickly.
I tested Talend Open Profiler on a CentOS 5 system configured with Java version 1.6.0_06, Perl v5.8.8, and an Oracle 11g database. The download and installation process was quick and easy; from download to GUI took about 10 minutes.
In my testing, the refresh rate could have been a bit better. I noticed some table, view, and column counts that did not seem to be updated until I traversed out and then back into the tree structure. And I'd like to see Talend add support for database connections besides Oracle and MySQL.
Data analysts, DBAs, and business professionals constantly investigate the validity of data. Usually the investigation requires not only a specialized skill set but also a highly paid technical professional or expensive tool set. Talend Open Profiler effectively brings the data closer to business users through a common interface with analysts and DBAs.
Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author's free time and without any remuneration. This document is a compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright for original materials belongs to their respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.
Last modified: September 12, 2017