Softpanorama


HP-UX Internet Express:
precompiled open source software for HP-UX


HP-UX Internet Express is a collection of popular open source software products, precompiled for HP-UX.


HP provides two major links to precompiled Open Source programs:

HP does not provide support for the components listed in Table 1 that are delivered through HP-UX Internet Express, whether obtained by Web download or from the HP-UX 11i media kits. However, you can notify the HP Internet Express team if you find defects. HP will report defects to the related open source communities and incorporate the appropriate fixes in each new release. To provide feedback or report a defect, email the HP-UX Internet Express team.

HP recommends that you read the HP-UX Internet Express Frequently Asked Questions before reporting a defect.


NOTE:

Starting with HP-UX Internet Express Version A.09.00, the TCOpenSource components are not delivered in the HP-UX Internet Express product suite.

The TCOpenSource components are now available for download at the HP-UX Porting Centre, which contains Open Source packages ported to and configured for the HP-UX 11i operating systems.


HP-UX Internet Express Version A.15.00 Bundles

HP-UX Internet Express components for HP-UX 11i v2 are delivered in four software bundles. You can download entire bundles or individual components from a bundle. Table 1 lists the bundle names and the components included in each bundle.

Table 1. HP-UX Internet Express Version A.15.00 Bundles

internet bundle:
Curl, CyrusIMAP*, Fetchmail*, Hypermail, Libpcap, lsof, Majordomo*, Net-SNMP, Pine, Postfix, Procmail*, ProFTPD*, Qpopper*, Rsync, TCPdump, UW-IMAP, Wget, Wireshark, Wput

security bundle:
Chkrootkit, ClamAV, CyrusSASL, DanteSOCKS*, FSH, GnuPG, ModSecurity, Nagios, Nikto, OpenSAML, OpenSC/OpenCT, PAM_mkhomedir, PAM_passwdqc, perl-ldap, Snort*, SpamAssassin, SSLDump, Stunnel*, Sudo, Tripwire, Wipe, Xinetd*

web1 bundle:
Axis, Calamaris, Horde, HSQLDB, IMP, Libxml2, OpenJMS, PostgreSQL*, Ruby, RubyGems, RubyOnRails, Soap, Squid*, UDDI4J, Xalan-C, Xerces-C, zlib

web2 bundle:
Ant, Eclipse, Jabber*, Jython, MySQL*, OfBiz, OpenLDAP*, Python, SourceIDSAMLJ, Struts, SugarCRM, Twiki, Xdoclet

* The component can be configured using the Webmin administration utility.


IMPORTANT: Only five individual components can be downloaded simultaneously.
Additional product information

Product #: HPUXIEXP1123

Software specification:

internet A.15.00-004
security A.15.00-004
web1 A.15.00-004
web2 A.15.00-004
Ant A.15.00-1.8.1.001
Axis A.15.00-1.4.001
Calamaris A.15.00-2.99.4.0.001
Chkrootkit A.15.00-0.49.001
ClamAV A.15.00-0.96.1.001
Curl A.15.00-7.20.1.001
CyrusIMAP A.15.00-2.3.16.001
CyrusSASL A.15.00-2.1.23.001
DanteSOCKS A.15.00-1.2.1.001
Eclipse A.15.00-3.4.2.001
Fetchmail A.15.00-6.3.16.001
FSH A.15.00-1.2.001
GnuPG A.15.00-1.4.7.001
Horde A.15.00-3.3.8.001
Hsqldb A.15.00-1.8.1.2.001
Hypermail A.15.00-2.3.0.001
IMP A.15.00-4.3.6.001
Jabber A.15.00-1.6.1.1.001
Jython A.15.00-2.2.1.001
Libpcap A.15.00-1.1.1.001
Libxml2 A.15.00-2.7.7.001
Lsof A.15.00-4.82.001
Majordomo A.15.00-1.94.5.001
Modsecurity A.15.00-2.5.12.001
MySQL A.15.00-5.1.49.001
Nagios A.15.00-3.2.1.004
Net-SNMP A.15.00-5.5.001
Nikto A.15.00-2.1.1.001
OfBiz A.15.00-4.0.001
OpenJMS A.15.00-0.7.6.1.001
OpenLDAP A.15.00-2.4.23.001
OpenSAML A.15.00-1.1b.001
OpenSC A.15.00-0.11.13.001
PAM_mkhomedir A.15.00-1.0.001
PAM_passwdqc A.15.00-1.0.5.001
Perl-LDAP A.15.00-0.39.001
Pine A.15.00-4.64.001
Postfix A.15.00-2.7.0.001
PostgreSQL A.15.00-8.4.4.001
ProcMail A.15.00-3.22.001
ProFTPD A.15.00-1.3.3.001
Python A.15.00-2.7.001
Qpopper A.15.00-4.0.16.001
Rsync A.15.00-3.0.5.001
Ruby A.15.00-1.9.1-p429.001
RubyGems A.15.00-1.3.7.001
RubyOnRails A.15.00-2.3.8.001
Snort A.15.00-2.8.6.001
SOAP A.15.00-2.3.1.001
SourceIDSAMLJ A.15.00-2.0.001
SpamAssassin A.15.00-3.3.1.001
Squid A.15.00-2.7.s9.001
SSLDUMP A.15.00-0.9b3.001
Struts A.15.00-1.3.10.001
Stunnel A.15.00-4.32.001
Sudo A.15.00-1.7.2p8.001
SugarCRM A.15.00-4.5.1i.001
Tcpdump A.15.00-4.1.1.001
Tripwire A.15.00-2.4.2.001
Twiki A.15.00-4.3.2.001
UDDI4J A.15.00-2.0.5.001
UW-IMAP A.15.00-2007e.001
Wget A.15.00-1.10.2.001
Wipe A.15.00-2.3.1.001
Wireshark A.15.00-1.0.15.001
Wput A.15.00-0.6.2.001
Xalan-C A.15.00-1.10.001
XDoclet A.15.00-1.2.3.001
Xerces-C A.15.00-3.1.1.001
Xinetd A.15.00-2.3.15.001
Zlib A.15.00-1.2.4.001



NEWS CONTENTS

Old News ;-)

[Jul 10, 2017] Crowdsourcing, Open Data and Precarious Labour by Allana Mayer Model View Culture

Jul 10, 2017 | modelviewculture.com
Crowdsourcing, Open Data and Precarious Labour Crowdsourcing and microtransactions are two halves of the same coin: they both mark new stages in the continuing devaluation of labour. by Allana Mayer on February 24th, 2016 The cultural heritage industries (libraries, archives, museums, and galleries, often collectively called GLAMs) like to consider themselves the tech industry's little siblings. We're working to develop things like Linked Open Data, a decentralized network of collaboratively-improved descriptive metadata; we're building our own open-source tech to make our catalogues and collections more useful; we're pushing scholarly publishing out from behind paywalls and into open-access platforms; we're driving innovations in accessible tech.

We're only different in a few ways. One, we're a distinctly feminized set of professions, which comes with a large set of internally- and externally-imposed assumptions. Two, we rely very heavily on volunteer labour, and not just in the internship-and-exposure vein: often retirees and non-primary wage-earners are the people we "couldn't do without." Three, the underlying narrative of a "helping" profession (essentially a social service) can push us to ignore the first two distinctions, while driving ourselves to perform more and expect less.

I suppose the major way we're different is that tech doesn't acknowledge us, treat us with respect, build things for us, or partner with us, unless they need a philanthropic opportunity. Although, when some ingenue autodidact bootstraps himself up to a billion-dollar IPO, there's a good chance he's been educating himself using our free resources. Regardless, I imagine a few of the issues true in GLAMs are also true in tech culture, especially in regards to labour and how it's compensated.

Crowdsourcing

Notecards in a filing drawer: old-fashioned means of recording metadata.

Photo CC-BY Mace Ojala.

Here's an example. One of the latest trends is crowdsourcing: admitting we don't have all the answers, and letting users suggest some metadata for our records. (Not to be confused with crowdfunding.) The biggest example of this is Flickr Commons: the Library of Congress partnered with Yahoo! to publish thousands of images that had somehow ended up in the LOC's collection without identifying information. Flickr users were invited to tag pictures with their own keywords or suggest descriptions using comments.

Many orphaned works (content whose copyright status is unclear) found their way conclusively out into the public domain (or back into copyright) this way. Other popular crowdsourcing models include gamification, transcription of handwritten documents (which can't be done with Optical Character Recognition), or proofreading OCR output on digitized texts. The most-discussed side benefits of such projects include the PR campaign that raises general awareness about the organization, and a "lifting of the curtain" on our descriptive mechanisms.

The problem with crowdsourcing is that it's been conclusively proven not to function in the way we imagine it does: a handful of users end up contributing massive amounts of labour, while the majority of those signed up might do a few tasks and then disappear. Seven users in the "Transcribe Bentham" project contributed to 70% of the manuscripts completed; 10 "power-taggers" did the lion's share of the Flickr Commons' image-identification work. The function of the distributed digital model of volunteerism is that those users won't be compensated, even though many came to regard their accomplishments as full-time jobs.

It's not what you're thinking: many of these contributors already had full-time jobs, likely ones that allowed them time to mess around on the Internet during working hours. Many were subject-matter experts, such as the vintage-machinery hobbyist who created entire datasets of machine-specific terminology in the form of image tags. (By the way, we have a cute name for this: "folksonomy," a user-built taxonomy. Nothing like reducing unpaid labour to a deeply colonial ascription of communalism.) In this way, we don't have precisely the free-labour-for-exposure/project-experience problem the tech industry has; it's not our internships that are the problem. We've moved past that, treating even our volunteer labour as a series of microtransactions. Nobody's getting even the dubious benefit of job-shadowing, first-hand looks at business practices, or networking. We've completely obfuscated our own means of production. People who submit metadata or transcriptions don't even have a means of seeing how the institution reviews and ingests their work, or, often, of seeing how their work ultimately benefits the public.

All this really says to me is: we could've hired subject experts to consult, and given them a living wage to do so, instead of building platforms to dehumanize labour. It also means our systems rely on privilege, and will undoubtedly contain and promote content with a privileged bias, as Wikipedia does. (And hey, even Wikipedia contributions can sometimes result in paid Wikipedian-in-Residence jobs.)

For example, the Library of Congress's classification and subject headings have long collected books about the genocide of First Nations peoples during the colonization of North America under terms such as "first contact," "discovery and exploration," "race relations," and "government relations." No "subjugation," "cultural genocide," "extermination," "abuse," or even "racism" in sight. Also, the term "homosexuality" redirected people to "sexual perversion" up until the 1970s. Our patrons are disrespected and marginalized in the very organization of our knowledge.

If libraries continue on with their veneer of passive and objective authorities that offer free access to all knowledge, this underlying bias will continue to propagate subconsciously. As in Mechanical Turk, being "slightly more diverse than we used to be" doesn't get us any points, nor does it assure anyone that our labour isn't coming from countries with long-exploited workers.

Labor and Compensation

Rows and rows of books in a library, on vast curving shelves.

Photo CC-BY Samantha Marx.

I also want to draw parallels between the free labour of crowdsourcing and the free labour offered in civic hackathons or open-data contests. Specifically, I'd argue that open-data projects are less (but still definitely) abusive to their volunteers, because at least those volunteers have a portfolio object or other deliverable to show for their work. They often work in groups and get to network, whereas heritage crowdsourcers work in isolation.

There's also the potential for converting open-data projects to something monetizable: for example, a Toronto-specific bike-route app can easily be reconfigured for other cities and sold; while the Toronto version stays free under the terms of the civic initiative, freemium options can be added. The volunteers who supply thousands of transcriptions or tags can't usually download their own datasets and convert them into something portfolio-worthy, let alone sellable. Those data are useless without their digital objects, and those digital objects still belong to the museum or library.

Crowdsourcing and microtransactions are two halves of the same coin: they both mark new stages in the continuing devaluation of labour, and they both enable misuse and abuse of people who increasingly find themselves with few alternatives. If we're not offering these people jobs, reference letters, training, performance reviews, a "foot in the door" (cronyist as that is), or even acknowledgement by name, what impetus do they have to contribute? As with Wikipedia, I think the intrinsic motivation for many people to supply us with free labour is one of two things: either they love being right, or they've been convinced by the feel-good rhetoric that they're adding to the net good of the world. Of course, trained librarians, archivists, and museum workers have fallen sway to the conflation of labour and identity, too, but we expect to be paid for it.

As in tech, stereotypes and PR obfuscate labour in cultural heritage. For tech, an entrepreneurial spirit and a tendency to buck traditional thinking; for GLAMs, a passion for public service and opening up access to treasures ancient and modern. Of course, tech celebrates the autodidactic dropout; in GLAMs, you need a master's. Period. Maybe two. And entry-level jobs in GLAMs require one or more years of experience, across the board.

When library and archives students go into massive student debt, they're rarely apprised of the constant shortfall of funding for government-agency positions, nor do they get told how much work is done by volunteers (and, consequently, how much of the job is monitoring and babysitting said volunteers). And they're not trained with enough technological competency to sysadmin anything, let alone build a platform that pulls crowdsourced data into an authoritative record. The costs of commissioning these platforms aren't yet being made public, but I bet paying subject experts for their hourly labour would be cheaper.

Solutions

I've tried my hand at many of the crowdsourcing and gamifying interfaces I'm here to critique. I've never been caught up in the "passion" ascribed to those super-volunteers who deliver huge amounts of work. But I can tally up other ways I contribute to this problem: I volunteer for scholarly tasks such as peer-reviewing, committee work, and travelling on my own dime to present. I did an unpaid internship without receiving class credit. I've put my research behind a paywall. I'm complicit in the established practices of the industry, which sits uneasily between academic and social work: neither of those spheres has ever been a profit-generator, and both have always used their codified altruism as a way to finagle more labour for less money.

It's easy to suggest that we outlaw crowdsourced volunteer work, and outlaw microtransactions on Fiverr and MTurk, just as the easy answer would be to outlaw Uber and Lyft for divorcing administration from labour standards. Ideally, we'd make it illegal for technology to wade between workers and fair compensation.

But that's not going to happen, so we need alternatives. Just as unpaid internships are being eliminated ad hoc through corporate pledges, rather than being prohibited region-by-region, we need pledges from cultural-heritage institutions that they will pay for labour where possible, and offer concrete incentives to volunteer or intern otherwise. Budgets may be shrinking, but that's no reason not to compensate people at least through resume and portfolio entries. The best template we've got so far is the Society of American Archivists' volunteer best practices, which include "adequate training and supervision" provisions, which I interpret to mean outlawing microtransactions entirely. The Citizen Science Alliance, similarly, insists on "concrete outcomes" for its crowdsourcing projects, to "never waste the time of volunteers." It's vague, but it's something.

We can boycott and publicly shame those organizations that promote these projects as fun ways to volunteer, and lobby them to instead seek out subject experts for more significant collaboration. We've seen a few efforts to shame job-posters for unicorn requirements and pathetic salaries, but they've flagged without productive alternatives to blind rage.

There are plenty more band-aid solutions. Groups like Shatter The Ceiling offer cash to women of colour who take unpaid internships. GLAM-specific internship awards are relatively common, but could: be bigger, focus on diverse applicants who need extra support, and have eligibility requirements that don't exclude the people who most need them (such as part-time students, who are often working full-time to put themselves through school). Better yet, we can build a tech platform that enables paid work, or at least meaningful volunteer projects. We need nationalized or non-profit recruiting systems (a digital "volunteer bureau") that match subject experts with the institutions that need their help. One that doesn't take a cut from every transaction, or reinforce power imbalances, the way Uber does. GLAMs might even find ways to combine projects, so that one person's work can benefit multiple institutions.

GLAMs could use plenty of other help, too: feedback from UX designers on our catalogue interfaces, helpful tools, customization of our vendor platforms, even turning libraries into Tor relays or exits. The open-source community seems to be looking for ways to contribute meaningful volunteer labour to grateful non-profits; this would be a good start.

What's most important is that cultural heritage preserves the ostensible benefits of crowdsourcing – opening our collections and processes up for scrutiny, and admitting the limits of our knowledge – without the exploitative labour practices. Just like in tech, a few more glimpses behind the curtain wouldn't go astray. But it would require deeper cultural shifts, not least in the self-perceptions of GLAM workers: away from overprotective stewards of information, constantly threatened by dwindling budgets and unfamiliar technologies, and towards facilitators, participants in the communities whose histories we hold.


[Feb 17, 2011] Perl for HP-UX 11i

HP OpenSource Perl is built using the HP C/C++ compilers. The 32-bit and 64-bit perl can be installed independently, with 32-bit being the default.

HP OpenSource Perl is optimized for the HP-UX 11i PA-RISC and HP-UX 11i Itanium Processor Family (IPF) platforms. Perl bundle revision E.5.8.8.D and later versions are delivered as HP OpenSource Perl.

HP OpenSource Perl is based on open source Perl, similar to the previously delivered ActivePerl.

Perl is available as an always-installed product on all HP-UX 11i Operating Environment media. The following table lists the ActivePerl modules and their equivalent CPAN modules.

ActivePerl Module OpenSource Perl Module (CPAN)
ActiveState::Browser Mozilla::DOM::WebBrowser
ActiveState::Bytes Number::Bytes
ActiveState::Color Color
ActiveState::DateTime DateTime::Event::Random
ActiveState::Dir File::Path
ActiveState::DiskUsage FileSys::DiskUsage
ActiveState::Duration DateTime::Duration & Time::Duration
ActiveState::File File
ActiveState::Handy Several (spread across modules)
ActiveState::Indenter Several (spread across modules)
ActiveState::Menu IWL::Menu & Prima::Menu
ActiveState::ModInfo ModInfo
ActiveState::Path File::PathList
ActiveState::Prompt Term::Prompt & Prompt::ReadKey
ActiveState::Run Run, Arch:Run
ActiveState::Scineplex None
ActiveState::StopWatch Benchmark::Stopwatch, Time::StopWatch
ActiveState::Table Data::Table
ActiveState::Utils Several (spread across modules)
ActiveState::Unix Several (spread across modules)
ActiveState::Config config_data
ActiveState::DocTools POD
PPM PPM
For ActivePerl, refer to http://www.activestate.com/activeperl

System Requirements

Perl version Package size
5.8.8 (11.11 PA depot) 195072000 bytes
5.8.8 (11.23 and 11.31 IA/PA dual depot) 423792640 bytes

[Feb 17, 2011] Five simple ways to tune your LAMP application by John Mertic

IBM developerWorks Open source tutorials and projects

Use an opcode cache

The easiest way to boost performance of any PHP application (the "P" in LAMP, of course) is to take advantage of an opcode cache. For any website I work with, it's the one thing I make sure is present, since the performance impact is huge: response times are often half of what they are without an opcode cache. But the big question most people new to PHP have is why the improvement is so drastic.

... ... ...

Since PHP is an interpreted language rather than a compiled one like C or the Java language, the entire parse-compile-execute sequence is carried out for every request. You can see how this can be time- and resource-consuming, especially when scripts rarely change between requests. After a script is parsed and compiled, it exists in a machine-readable state as a series of opcodes. This is where an opcode cache comes into play: it stores these compiled scripts as a series of opcodes, avoiding the parse and compile steps on every request.
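As a concrete illustration, enabling an opcode cache is usually just a few lines of php.ini configuration. The sketch below uses APC, one of the popular caches of that era; the extension name and values are illustrative assumptions, not taken from the article:

```ini
; php.ini - load and enable the APC opcode cache (values illustrative)
extension = apc.so          ; php_apc.dll on Windows builds
apc.enabled = 1
apc.shm_size = 64M          ; shared memory segment that holds the compiled opcodes
apc.stat = 1                ; re-stat scripts on each request; 0 skips even that check
```

With settings like these, compiled opcodes live in shared memory across requests, and only a file-modification check (or not even that, with apc.stat = 0) stands between an incoming request and execution.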

... ... ...

So when the cached opcodes of a PHP script exist, we can skip the parse and compile steps of the PHP request process and directly execute the cached opcodes and output the results. The checking algorithm handles situations where you may have changed the script file: on the first request for the changed script, the opcodes are automatically recompiled and re-cached for subsequent requests, replacing the stale cached copy.

Opcode caches have long been popular for PHP, with some of the first ones coming about back in the heyday of PHP V4. Today there are a few popular choices that are in active development and being used:

Without a doubt, an opcode cache is the first step in speeding up PHP, removing the need to parse and compile a script on every request. Once this first step is completed, you should see an improvement in response time and server load. But there is more you can do to optimize PHP, which we'll look at next.

Optimize your PHP setup

While implementing an opcode cache is a big bang for performance improvement, there are a number of other tweaks you can make to optimize your PHP setup, based upon the settings in your php.ini file. These settings are more appropriate for production instances; on development or testing instances, you may not want to make these changes, as they can make it more difficult to debug application issues.

Let's take a look at a few items that are important to help performance.

Things that should be disabled

There are several php.ini settings that should be disabled, since they are often used for backward-compatibility:

Disabling these options on legacy code can be risky, however, since the code may depend on them being set for proper execution. Any new code should be developed without depending on these options, and you should look for ways to refactor your existing code away from using them where possible.
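The article's original list of settings is not preserved in this excerpt. As a hedged illustration (my assumption, based on the backward-compatibility options PHP shipped with at the time), two classic candidates for disabling are:

```ini
; php.ini - backward-compatibility settings commonly disabled (illustrative)
register_globals = Off   ; stops request variables auto-populating the global scope
magic_quotes_gpc = Off   ; stops automatic escaping of GET/POST/cookie data
```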

Things that should be enabled or have their settings tweaked

There are some good performance options you can enable in the php.ini file to give your scripts a bit of a speed boost:

These are considered "low-hanging fruit" in terms of settings that should be configured on your production instance. There is one more thing you should look at as far as PHP is concerned: the use of require() and include() (as well as their siblings require_once() and include_once()) in your application. Optimizing your configuration and code here prevents unneeded file status checks on every request, which can slow down response times.

Manage your require()s and include()s

File status calls (calls made to the underlying file system to check for the existence of a file) can be quite costly in terms of performance. One of the biggest culprits of file stats comes in the form of the require() and include() statements, which are used to bring code into your script. The sibling calls require_once() and include_once() can be more problematic, as they need to verify not only the existence of the file, but also that it hasn't been included before.

So what's the best way to deal with this? There are a few things you can do to speed this up.

APC and Wincache also have mechanisms for caching the results of file status checks made by PHP, so repeated file-system checks are not needed. They are most effective when you keep your include file names static rather than variable-driven, so it's important to try to do this whenever possible.
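One related setting worth mentioning here (my addition, not from the article): PHP keeps a realpath cache of resolved file paths, and enlarging it reduces repeated file-system lookups when the same files are included on every request:

```ini
; php.ini - enlarge the realpath cache so resolved include paths are reused
realpath_cache_size = 256k   ; the PHP 5.x default is much smaller (16k)
realpath_cache_ttl  = 300    ; seconds a resolved path stays cached
```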

Optimize your database

Database optimization can become a pretty advanced topic quickly, and I don't have nearly the space here to do this topic full justice. But if you are looking at optimizing the speed of your database, there are a few steps that you should take first which should help the most common issues encountered.

Put the database on its own machine

Database queries can become quite intense on their own, often pegging a CPU at 100 percent for a simple SELECT statement on a reasonably sized dataset. If your web server and database server are competing for CPU time on a single machine, requests will definitely slow down. Thus I consider it a good first step to put the web server and database server on separate machines, and be sure to make the database server the beefier of the two (database servers love lots of memory and multiple CPUs).

Properly design and index tables

Probably the biggest issues with database performance come as a result of poor database design and missing indexes. SELECT statements are usually overwhelmingly the most common types of queries run in a typical web application. They are also the most time-consuming queries run on a database server. Additionally, these kinds of SQL statements are the most sensitive to proper indexing and database design, so look to the following pointers for tips for optimal performance.

Analyze the queries being run on the server

The best tool for improving database performance is analyzing what queries are being run on your database server and how long they take to run. Just about every database has tools for doing this. With MySQL, you can take advantage of the slow query log to find problematic queries. To use it, set slow_query_log to 1 in the MySQL configuration file, and log_output to FILE to have the queries logged to the file hostname-slow.log. You can set the long_query_time threshold to the number of seconds a query must run to be considered a "slow query." I'd recommend setting it to 5 seconds at first and moving it down toward 1 second over time, depending on your data set. If you look at this file, you'll see queries detailed as in Listing 1.
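Collected in one place, the slow-query-log settings described above look like this in the MySQL configuration file (the log file path is illustrative):

```ini
# my.cnf - enable the slow query log as described above
[mysqld]
slow_query_log      = 1
log_output          = FILE
slow_query_log_file = /var/log/mysql/hostname-slow.log   # path illustrative
long_query_time     = 5    # start at 5 seconds, tighten toward 1 over time
```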

Listing 1. MySQL slow query log
/usr/local/mysql/bin/mysqld, Version: 5.1.49-log, started with:
Tcp port: 3306  Unix socket: /tmp/mysql.sock
Time                 Id Command    Argument
# Time: 030207 15:03:33
# User@Host: user[user] @ localhost.localdomain [127.0.0.1]
# Query_time: 13  Lock_time: 0  Rows_sent: 117  Rows_examined: 234
use sugarcrm;
select * from accounts inner join leads on accounts.id = leads.account_id;

The key thing to look at is Query_time, which shows how long the query took. Another thing to examine is the Rows_sent and Rows_examined values, since these can point to a query that is written incorrectly, examining or returning too many rows. You can delve deeper into how a query is executed by prepending EXPLAIN to it, which returns the query plan instead of the result set, as shown in Listing 2.


Listing 2. MySQL EXPLAIN results
mysql> explain select * from accounts inner join leads on accounts.id = leads.account_id;
+----+-------------+----------+--------+--------------------------+---------+---------+---------------------------+------+-------+
| id | select_type | table    | type   | possible_keys            | key     | key_len | ref                       | rows | Extra |
+----+-------------+----------+--------+--------------------------+---------+---------+---------------------------+------+-------+
|  1 | SIMPLE      | leads    | ALL    | idx_leads_acct_del       | NULL    | NULL    | NULL                      |  200 |       |
|  1 | SIMPLE      | accounts | eq_ref | PRIMARY,idx_accnt_id_del | PRIMARY | 108     | sugarcrm.leads.account_id |    1 |       |
+----+-------------+----------+--------+--------------------------+---------+---------+---------------------------+------+-------+
2 rows in set (0.00 sec)

The MySQL manual dives much deeper into the topic of the EXPLAIN output (see Resources), but the big thing I look for is rows where the 'type' column is 'ALL', since these require MySQL to do a full table scan rather than use a key for the lookup. They point you to the places where adding indexes will significantly help query speed.
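For the plan in Listing 2, the fix suggested by the 'ALL' row would be an index on the join column. The statement below is a sketch, not from the article (the index name is my invention):

```sql
-- Index leads.account_id so the join no longer scans every row of leads;
-- re-running EXPLAIN should then show an index lookup instead of type 'ALL'.
CREATE INDEX idx_leads_account_id ON leads (account_id);
```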

Effectively cache data

As we saw in the previous section, databases can easily be the biggest pain point of performance in your web application. But what if the data you are querying doesn't change very often? In this case, it may be a good option to store those results locally instead of calling the query on every request.

Two of the opcode caches we looked at earlier, APC and Wincache, have facilities for doing just this, where you can store PHP data directly into a shared memory segment for quick retrieval. Listing 3 provides an example on how to do this.

Listing 3. Example of using APC for caching database results
<?php

function getListOfUsers()
{
    // Try the APC user cache first; apc_fetch() returns false on a miss
    $list = apc_fetch('getListOfUsers');

    if ( empty($list) ) {
        $conn = new PDO('mysql:dbname=testdb;host=127.0.0.1', 'dbuser', 'dbpass');
        $sql = 'SELECT id, name FROM users ORDER BY name';
        foreach ($conn->query($sql) as $row) {
            $list[] = $row;
        }

        // Cache the result for 600 seconds so updates eventually propagate
        apc_store('getListOfUsers', $list, 600);
    }

    return $list;
}

We only need to run the query once. Afterward, we push the result into the APC user cache under the key getListOfUsers. From then on, until the cache entry expires, you can fetch the result array directly from the cache, skipping the SQL query.

APC and Wincache aren't the only choices for a user cache; memcache and Redis are other popular choices that don't require you to run the user cache on the same server as the Web server. This gives added performance and flexibility, especially if your web application is scaled out across several Web servers.

Tags: apache, lamp, linux, mysql, php,

Internet and Security Solutions

The customer documentation for e-security and internet software solutions to secure your extended Enterprise has been migrated to the Business Support Center (BSC). To navigate to the corresponding BSC index page, please use the appropriate redirection links below.
• HP-UX 11i Role-based Access Control (RBAC) Software
• HP-UX 11i Secure Shell Software
• HP-UX 11i Security Containment Software
• HP-UX Bastille Software
• HP-UX Boot Authenticator Software
• HP-UX Directory Server
• HP-UX Encrypted Volume and File System Software
• HP-UX Host Intrusion Detection System Software
• HP-UX Identity Management Integration Software
• HP-UX Internet Express Software
• HP-UX IPFilter Software
• HP-UX IPSec Identity Management Software
• HP-UX Kerberos Data Security Software
• HP-UX LDAP-UX Integration Software
• HP-UX AAA Server (RADIUS) Software
• HP-UX Netscape Directory Server and Red Hat Directory Server Software
• HP-UX OpenSSL Software
• HP-UX Secure Resource Partitions SRP Software
• HP-UX Security Patch Check Software
• HP-UX Security Products and Features Software
• HP-UX Software Assistant SWA Software
• HP-UX Trusted Computing Services TCS Software
• HP-UX Virtual Vault Software



Copyright © 1996-2018 by Dr. Nikolai Bezroukov. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) in the author's free time and without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright of original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.

FAIR USE NOTICE: This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We make such material available to advance understanding of computer science, IT technology, and economic, scientific, and social issues. We believe this constitutes 'fair use' of any such copyrighted material as provided for by Section 107 of the US Copyright Law, under which such material can be distributed without profit, exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links because it develops like a living tree...

You can use PayPal to make a contribution, supporting the development of this site and speeding up access. In case softpanorama.org is down, you can use the mirror at softpanorama.info.

Disclaimer:

The statements, views, and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

The site uses AdSense, so you need to be aware of Google's privacy policy. If you do not want to be tracked by Google, please disable JavaScript for this site. The site is perfectly usable without JavaScript.

Last modified: December 26, 2017