GCC was and still is the main programming achievement of RMS. I think that the period when RMS wrote gcc, reusing the Pastel compiler developed at the Lawrence Livermore Lab, was the most productive part of his career as a programmer, much more important than his Emacs efforts, although he is still fanatically attached to Emacs.
He started writing the compiler when he was already in his thirties, that is, at the time when his programming abilities arguably had started to decline. Gcc can therefore be considered his "swan song" as a programmer. Here is his version of the history of GCC's creation:
Shortly before beginning the GNU project, I heard about the Free University Compiler Kit, also known as VUCK. (The Dutch word for "free" is written with a V.) This was a compiler designed to handle multiple languages, including C and Pascal, and to support multiple target machines. I wrote to its author asking if GNU could use it.
He responded derisively, stating that the university was free but the compiler was not. I therefore decided that my first program for the GNU project would be a multi-language, multi-platform compiler.
Hoping to avoid the need to write the whole compiler myself, I obtained the source code for the Pastel compiler, which was a multi-platform compiler developed at Lawrence Livermore Lab. It supported, and was written in, an extended version of Pascal, designed to be a system-programming language. I added a C front end, and began porting it to the Motorola 68000 computer. But I had to give that up when I discovered that the compiler needed many megabytes of stack space, and the available 68000 Unix system would only allow 64k.
I then realized that the Pastel compiler functioned by parsing the entire input file into a syntax tree, converting the whole syntax tree into a chain of "instructions", and then generating the whole output file, without ever freeing any storage. At this point, I concluded I would have to write a new compiler from scratch. That new compiler is now known as GCC; none of the Pastel compiler is used in it, but I managed to adapt and use the C front end that I had written.
Of course there were collaborators, because RMS was far from the cutting edge of compiler technology and never wrote any significant paper on compiler construction. But still he was the major driving force behind the project, and its success was to a large extent due to his personal efforts.
From the political standpoint it was a very bright idea: a person who controls the compiler has enormous influence on everybody who uses it. Although at the beginning it was just a C compiler, "after-Stallman" GCC (GNU cc) eventually became one of the most flexible, most powerful and most portable compilers for C, C++, Fortran and (now) even Java. Together with its libraries, it constitutes a powerful development platform that makes it possible to write code portable to almost every computer platform you can imagine, from handhelds to supercomputers.
The first version of GCC seems to have been finished around 1985, when RMS was 32 years old. Here is the first mention of gcc in the GNU Manifesto:
So far we have an Emacs text editor with Lisp for writing editor commands, a source level debugger, a yacc-compatible parser generator, a linker, and around 35 utilities. A shell (command interpreter) is nearly completed. A new portable optimizing C compiler has compiled itself and may be released this year. An initial kernel exists but many more features are needed to emulate Unix. When the kernel and compiler are finished, it will be possible to distribute a GNU system suitable for program development. We will use TeX as our text formatter, but an nroff is being worked on. We will use the free, portable X window system as well. After this we will add a portable Common Lisp, an Empire game, a spreadsheet, and hundreds of other things, plus on-line documentation. We hope to supply, eventually, everything useful that normally comes with a Unix system, and more.
GNU will be able to run Unix programs, but will not be identical to Unix. We will make all improvements that are convenient, based on our experience with other operating systems. In particular, we plan to have longer file names, file version numbers, a crashproof file system, file name completion perhaps, terminal-independent display support, and perhaps eventually a Lisp-based window system through which several Lisp programs and ordinary Unix programs can share a screen. Both C and Lisp will be available as system programming languages. We will try to support UUCP, MIT Chaosnet, and Internet protocols for communication.
Compilers take a long time to mature. The first more or less stable release seems to be 1.17 (January 9, 1988). That was lucky timing, because the Emacs/XEmacs split that started in 1989 consumed all his energy. In GCC development RMS used a kind of Microsoft "embrace and extend" policy: extensions to the C language were enabled by default.
It looks like RMS personally participated in the development of the compiler at least until the GCC/egcs split (i.e., 1997). Being a former compiler writer myself, I can attest that it is physically challenging to run such a project for more than ten years, even if you are mostly a manager.
Development was not without problems, with Cygnus emerging as an alternative force. Cygnus, the first commercial company devoted to providing commercial support for GNU software, and especially for the GCC compiler, was co-founded by Michael Tiemann in 1989, and Tiemann became the major driving force behind GCC from then on.
By 1997 RMS was so tired of his marathon that he just wanted the compiler to be stable, because it was the most important publicity vehicle for the FSF. But with the GNU license you cannot stop: it enforces the law of the jungle, and the strongest take it all. The interest of other, fresher and more motivated developers (as well as RMS's personal qualities, first demonstrated in the Emacs/XEmacs saga) led to a painful fork:
Subject: A new compiler project to merge the existing GCC forks

A bunch of us (including Fortran, Linux, Intel and RTEMS hackers) have decided to start a more experimental development project, just like Cygnus and the FSF started the gcc2 project about 6 years ago. Only this time the net community with which we are working is larger! We are calling this project 'egcs' (pronounced 'eggs').

Why are we doing this? It's become increasingly clear in the course of hacking events that the FSF's needs for gcc2 are at odds with the objectives of many in the community who have done lots of hacking and improvement over the years. GCC is part of the FSF's publicity for the GNU project, as well as being the GNU system's compiler, so stability is paramount for them. On the other hand, Cygnus, the Linux folks, the pgcc folks, the Fortran folks and many others have done development work which has not yet gone into the GCC2 tree despite years of efforts to make it possible.

This situation has resulted in a lot of strong words on the gcc2 mailing list which really is a shame since at the heart we all want the same thing: the continued success of gcc, the FSF, and Free Software in general. Apart from ill will, this is leading to great divergence which is increasingly making it harder for us all to work together -- It is almost as if we each had a proprietary compiler! Thus we are merging our efforts, building something that won't damage the stability of gcc2, so that we can have the best of both worlds.

As you can see from the list below, we represent a diverse collection of streams of GCC development. These forks are painful and waste time; we are bringing our efforts together to simplify the development of new features. We expect that the gcc2 and egcs communities will continue to overlap to a great extent, since they're both working on GCC and both working on Free Software.
All code will continue to be assigned to the FSF exactly as before and will be passed on to the gcc2 maintainers for ultimate inclusion into the gcc2 tree. Because the two projects have different objectives, there will be different sets of maintainers. Provisionally we have agreed that Jim Wilson is to act as the egcs maintainer and Jason Merrill as the maintainer of the egcs C++ front end. Craig Burley will continue to maintain the Fortran front end code in both efforts.

What new features will be coming up soon? There is such a backlog of tested, un-merged-in features that we have been able to pick a useful initial set:

- New alias analysis support from John F. Carr.
- g77 (with some performance patches).
- A C++ repository for G++.
- A new instruction scheduler from IBM Haifa.
- A regmove pass (2-address machine optimizations that in future will help with compilation for the x86 and for now will help with some RISC machines).

This will use the development snapshot of 3 August 97 as its base -- in other words we're not starting from the 18 month old gcc-2.7 release, but from a recent development snapshot with all the last 18 months' improvements, including major work on G++. We plan an initial release for the end of August. The second release will include some subset of the following:

- global cse and partial redundancy elimination.
- live range splitting.
- More features of IBM Haifa's instruction scheduling, including software pipelining, and branch scheduling.
- sibling call opts.
- various new embedded targets.
- Further work on regmove.

The egcs mailing list at cygnus.com will be used to discuss and prioritize these features. How to join: send mail to egcs-request at cygnus.com. That list is under majordomo. We have a web page that describes the various mailing lists and has this information at: http://www.cygnus.com/egcs. Alternatively, look for these releases as they spread through other projects such as RTEMS, Linux, etc. Come join us!
David Henkel-Wallace (for the egcs members, who currently include, among others): Per Bothner Joe Buck Craig Burley John F. Carr Stan Cox David Edelsohn Kaveh R. Ghazi Richard Henderson David Henkel-Wallace Gordon Irlam Jakub Jelinek Kim Knuttila Gavin Koch Jeff Law Marc Lehmann H.J. Lu Jason Merrill Michael Meissner David S. Miller Toon Moene Jason Molenda Andreas Schwab Joel Sherrill Ian Lance Taylor Jim Wilson
After version 2.8.1, GCC development split into FSF GCC on the one hand and Cygnus egcs on the other. The first egcs version (1.0.0) was released by Cygnus on December 3, 1997, and that instantly put the FSF version on the back burner:
March 15, 1999 -- egcs-1.1.2 is released.
March 10, 1999 -- Cygnus donates improved global constant propagation and lazy code motion optimizer framework.
March 7, 1999 -- The egcs project now has additional online documentation.
February 26, 1999 -- Richard Henderson of Cygnus Solutions has donated a major rewrite of the control flow analysis pass of the compiler.
February 25, 1999 -- Marc Espie has donated support for OpenBSD on the Alpha, SPARC, x86, and m68k platforms. Additional targets are expected in the future.
January 21, 1999 -- Cygnus donates support for the PowerPC 750 processor. The PPC750 is a 32bit superscalar implementation of the PowerPC family manufactured by both Motorola and IBM. The PPC750 is targeted at high end Macs as well as high end embedded applications.
January 18, 1999 -- Christian Bruel and Jeff Law donate improved local dead store elimination.
January 14, 1999 -- Cygnus donates support for Hypersparc (SS20) and Sparclite86x (embedded) processors.
December 7, 1998 -- Cygnus donates support for demangling of HP aCC symbols.
December 4, 1998 -- egcs-1.1.1 is released.
November 26, 1998 -- A database with test results is now available online, thanks to Marc Lehmann.
November 23, 1998 -- egcs now can dump flow graph information usable for graphical representation. Contributed by Ulrich Drepper.
November 21, 1998 -- Cygnus donates support for the SH4 processor.
November 10, 1998 -- An official steering committee has been formed. Here is the original announcement.
November 5, 1998 -- The third snapshot of the rewritten libstdc++ is available. You can read some more on http://sources.redhat.com/libstdc++/.
October 27, 1998 -- Bernd Schmidt donates localized spilling support.
September 22, 1998 -- IBM Corporation delivers an update to the IBM Haifa instruction scheduler and new software pipelining and branch optimization support.
September 18, 1998 -- Michael Hayes donates c4x port.
September 6, 1998 -- Cygnus donates Java front end.
September 3, 1998 -- egcs-1.1 is released.
August 29, 1998 -- Cygnus donates Chill front end and runtime.
August 25, 1998 -- David Miller donates rewritten sparc backend.
August 19, 1998 -- Mark Mitchell donates load hoisting and store sinking support.
July 15, 1998 -- The first snapshot of the rewritten libstdc++ is available. You can read some more here.
June 29, 1998 -- Mark Mitchell donates alias analysis framework.
May 26, 1998 -- We have added two new mailing lists for the egcs project: gcc-cvs and egcs-patches.
When a patch is checked into the CVS repository, a check-in notification message is automatically sent to the gcc-cvs mailing list. This will allow developers to monitor changes as they are made.
Patch submissions should be sent to egcs-patches instead of the main egcs list. This is primarily to help ensure that patch submissions do not get lost in the large volume of the main mailing list.
May 18, 1998 -- Cygnus donates gcse optimization pass.
May 15, 1998 -- egcs-1.0.3 released!
March 18, 1998 -- egcs-1.0.2 released!
February 26, 1998 -- The egcs web pages are now supported by egcs project hardware and are searchable with webglimpse. The CVS sources are browsable with the free cvsweb package.
February 7, 1998 -- Stanford has volunteered to host a high speed mirror for egcs. This should significantly improve download speeds for releases and snapshots. Thanks Stanford and Tobin Brockett for the use of their network, disks and computing facilities!
January 12, 1998 -- Remote access to CVS sources is available!
January 6, 1998 -- egcs-1.0.1 released!
December 3, 1997 -- egcs-1.0 released!
August 15, 1997 -- The egcs project is announced publicly and the first snapshot is put on-line.
See the egcs mailing list archives for details. I've also heard assertions that the only reason gcc-2.8 was released as quickly as it was is the pressure of the egcs release. Here is a Slashdot discussion that contains some additional info. After the fork the egcs team proved to be definitely stronger, and development of the original branch stagnated.
This was a pretty painful fork, especially for RMS personally, and its consequences are still felt today. For example, Linus Torvalds long preferred an old GCC version, and recompilation of the kernel with a newer version led to some subtle bugs due to the incomplete standard compatibility of the old GCC compiler. Alan Cox said for years that 2.0.x kernels were to be compiled with gcc, not egcs.
As FSF GCC died a silent death from malnutrition, the two branches were (formally) reunited in April 1999, with the merged line released as version 2.95. With a simple renaming trick egcs became gcc, and the split was formally over:
Re: egcs to take over gcc maintenance
- To: firstname.lastname@example.org
- Subject: Re: egcs to take over gcc maintainance
- From: Theodore Papadopoulo <Theodore.Papadopoulo@sophia.inria.fr>
- Date: Fri, 16 Apr 1999 18:35:00 +0200
> I'm pleased to announce that the egcs team is taking over as the collective GCC maintainer of GCC. This means that the egcs steering committee is changing its name to the gcc steering committee and future gcc releases will be made by the egcs (then gcc) team. This also means that the open development style is also carried over to gcc (a good thing).
That's a great piece of news...
Email: Theodore.Papadopoulo@sophia.inria.fr Tel: (33) 04 92 38 76 01
More information about the event can be found in the following Slashdot post:
yes, it's true; egcs is gcc. Some details (Score:4)
by JoeBuck (7947) on Tuesday April 20, @12:22PM (#1925069)
As a member of the egcs steering committee, which will become the gcc steering committee, I can confirm that yes, the merger is official ... sometime in the near future there will be a gcc 3.0 from the egcs code base. The steering committee has been talking to RMS about doing this for months now; at times it's been contentious but now that we understand each other better, things are going much better.
The important thing to understand is that when we started egcs, this is what we were planning all along (well, OK, what some of us were planning). We wanted to change the way gcc worked, not just create a variant. That's why assignments always went to the FSF, why GNU coding style is rigorously followed.
Technically, egcs/gcc will run the same way as before. Since we are now fully GNU, we'll be making some minor changes to reflect that, but we've been doing them gradually in the past few months anyway so nothing that significant will change. Jeff Law remains the release manager; a number of other people have CVS write access; the steering committee handles the "political" and other nontechnical stuff and "hires" the release manager.
egcs/gcc is at this point considerably more bazaar-like than the Linux kernel in that many more people have the ability to get something into the official code (for Linux, only Linus can do that). Jeff Law decides what goes in the release, but he delegates major areas to other maintainers.
The reason for the delay in the announcement is that we were waiting for RMS to announce it (he sent a message to the gnu.*.announce lists), but someone cracked an important FSF machine and did an rm -rf / command. It was noticed and someone powered off the machine, but it appears that this machine hosted the GNU mailing lists, if I understand correctly, so there's nothing on gnu.announce. I don't know why there's still nothing on www.gnu.org (which was not cracked). Why do people do things like this?
The current GCC Release Manager is Mark Mitchell, CodeSourcery's President and Chief Technical Officer. He received an MS in Computer Science from Stanford in 1999 and a BA from Harvard in 1994. His research interests centered on computational complexity and computer security. Mark worked at CenterLine Software as a software engineer before co-founding CodeSourcery. In a recent interview he provided some interesting facts about current problems and prospects of GCC development, as well as the reasons for the product's growing independence from the FSF (for example, the pretty interesting fact that version 2.96 of GCC was not an FSF version at all):
JB: There has been a problem with so called gcc-2.96. Why did several distributors create this version?
It's important for everyone to know that there was no version of GCC 2.96 from the FSF. I know Red Hat distributed a version that it called 2.96, and other companies may have done that too. I only know about the Red Hat version.
It is too bad that this version was released. It is essentially a development snapshot of the GCC tree, with a lot of Red Hat fixes. There are a lot of bugs in it, relative to either 2.95 or 3.0, and the C++ code generated is incompatible (at the binary level) with either 2.95 or 3.0. It's been very confusing to users, especially because the error messages generated by the 2.96 release refer users back to the FSF GCC bug-reporting address, even though a lot of the bugs in that release don't exist in the FSF releases. The saddest part is that a lot of people at Red Hat knew that using that release was a bad idea, and they couldn't convince their management of that fact.
Partly, this release is our fault, as GCC maintainers. There was a lot of frustration because it took so long to produce a new GCC release. I'm currently leading an effort to reduce the time between GCC releases so that this kind of thing is less likely to happen again. I can understand why a company might need to put out an intermediate release of GCC if we are not able to do it ourselves. That's why I think it's important for people to support independent development of GCC, which is, of course, what CodeSourcery does. We're not affiliated with any of the distributors, and so we can act to try to improve the FSF version of GCC directly. When people financially support our work on the releases, that helps to make sure that there are frequent enough releases to avoid these problems.
JB: Do you feel that the 2.96 release sped up development and allowed gcc-3.0 to be ready faster?
That is a very difficult question to answer. On the one hand, Red Hat certainly fixed some bugs in Red Hat's 2.96 version, and some of those improvements were contributed back for GCC 3.0. (I do not know if all of them were contributed back, or not.) On the other hand, GCC developers at Red Hat must have spent a lot of time on testing and improving their 2.96 version, and therefore that time was not spent on GCC 3.0.
The problem is that for a company like Red Hat (or CodeSourcery) you can't choose between helping out with the FSF release of GCC and doing something else based just on what would be good for the FSF release. You have to try to make the best business decision, which might mean that you have to do something to please a customer, even though it doesn't help the FSF release.
If people would like to keep companies from making their own releases, there are two things to do: a) make that sentiment known to the companies, since companies like to please their customers, and b) hire companies like CodeSourcery to help work on the FSF releases.
JB: How many developers are currently working on GCC?
It's impossible to count. Hundreds, probably -- but there is definitely a group of ten or twenty that is responsible for most of the changes.
... ... ...
JB: Do you see compiling java to native code as a drawback when using free (speech) code? (compared to using p-code only)
I'm not so moralistic about these issues as some people. I think it's good that we support compiling from byte-code because that's how lots of Java code is distributed. Whether or not that code is free, we're providing a useful product. I suspect the FSF has a different viewpoint.
JB: What are future plans in gcc development?
I think the number one issue is the performance of the generated code. People are looking into different ways to optimize better. I'd also like to see a more robust compiler that issues better error messages and never crashes on incorrect input.
JB: Often, the major problem with hardware vendors is that they don't want to provide the technical documentation for their hardware (forcing you to use their or third party proprietary code). Is this also true with processor documentation?
Most popular workstation processors are well-documented from the point of view of their instruction set. There often isn't as much information available about timing and scheduling information. And some embedded vendors never make any information about their chips available, which means that they can't really distribute a version of GCC for their chips because the GCC source code would give away information about the chip.
AMD is a great example of a company trying to work closely with GCC up front. They made a lot of information about their new chip available very early in the process.
JB: Which systems do currently use GCC as their primary compiler set (not counting *BSD and GNU/Linux)?
Apple's OS X. If Apple succeeds, there will probably be more OS X developers using GCC than there are GNU/Linux developers.
... ... ...
Having RMS as a member of the gcc steering committee (SC) has its problems and still invites forking ;-). As one of the participants of the Slashdot discussion noted:
Re:Speaking as a GCC maintainer, I call bullshit (Score:3, Informative)
by devphil (51341) on Sunday August 15, @02:16PM (#9974914)
You're not completely right, and not completely wrong. The politics are exceedingly complicated, and I regret it every time I learn more about them.
RMS doesn't have dictatorial power over the SC, nor a formal veto vote.
He does hold the copyright to GCC. (Well, the FSF holds the copyright, but he is the FSF.) That's a lot more important than many people realize.
Choice of implementation language is, strictly speaking, a purely technical issue. But it has so many consequences that it gets special attention.
The SC specifically avoids getting involved in technical issues whenever possible. Even when the SC is asked to decide something, they never go to RMS when they can help it, because he's so unaware of modern real-world technical issues and the bigger picture. It's far, far better to continue postponing a question than to ask it, when RMS is involved, because he will make a snap decision based on his own bizarre technical ideas, and then never change his mind in time for the new decision to be worth anything.
He can be convinced. Eventually. It took the SC over a year to explain and demonstrate that Java bytecode could not easily be used to subvert the GPL, therefore permitting GCJ to be checked in to the official repository was okay. I'm sure that someday we'll be using C++ in core code. Just not anytime soon.
As for forking again... well, yeah, I personally happen to be a proponent of that path. But I'm keenly aware of the damage that would do to GCC's reputation -- beyond the short-sighted typical /. viewpoint of "always disobey every authority" -- and I'm still probably underestimating the problems.
Some additional information about gcc development can be found at History - GCC Wiki
For the history of gcc development, see The Short History of GCC development. The first version of gcc was released in March 1987:
Date: Sun, 22 Mar 87 10:56:56 EST
From: rms (Richard M. Stallman)
The GNU C compiler is now available for ftp from the file
/u2/emacs/gcc.tar on prep.ai.mit.edu. This includes machine
descriptions for vax and sun, 60 pages of documentation on writing
machine descriptions (internals.texinfo, internals.dvi and Info).
This also contains the ANSI standard (Nov 86) C preprocessor and 30
pages of reference manual for it.
This compiler compiles itself correctly on the 68020 and did so
recently on the vax. It recently compiled Emacs correctly on the
68020, and has also compiled tex-in-C and Kyoto Common Lisp.
However, it probably still has numerous bugs that I hope you will
find for me.
I will be away for a month, so bugs reported now will not be
handled until then.
If you can't ftp, you can order a compiler beta-test tape from the
Free Software Foundation for $150 (plus 5% sales tax in
Massachusetts, or plus $15 overseas if you want air mail).
Free Software Foundation
1000 Mass Ave
Cambridge, MA 02138
Sunfreeware.com has packages for gcc-3.4.2 and gcc-3.3.2 for Solaris 9 and a package for gcc-3.3.2 for Solaris 10. Usually one needs only a gcc_small package, which has ONLY the C and C++ compilers and is a much smaller download, but sunfreeware.com does not provide one.
If you use gcc it's very convenient to use Midnight Commander on Solaris as a command line pseudo IDE.
Please note that Sun Studio 11 is free for both Solaris and Linux and might be a better option than GCC for compilation on UltraSPARC (10% or more faster code).
Mar 22, 2012
GCC 4.7.0 is out; it is being designated as a celebration of GCC's first 25 years. When Richard Stallman announced the first public release of GCC in 1987, few could have imagined the broad impact that it has had. It has prototyped many language features that later were adopted as part of their respective standards -- everything from "long long" type to transactional memory. It deployed an architecture-neutral automatic vectorization facility, OpenMP, and Polyhedral loop nest optimization. It has provided the toolchain infrastructure for the GNU/Linux ecosystem used everywhere from Google and Facebook to financial markets and stock exchanges. We salute and thank the hundreds of developers who have contributed over the years to make GCC one of the most long-lasting and successful free software projects in the history of this industry.
New features include software transactional memory support, more C11 and C++11 standard features, OpenMP 3.1 support, better link-time optimization, and support for a number of new processor architectures. See the GCC 4.7 changes page for lots more information.
Posted Mar 22, 2012 17:05 UTC (Thu) by jd (guest, #26381)

The egcs vs. gcc fiasco comes to mind, but IIRC there have been a number of major reworkings. Certainly, there is an unbroken lineage from the original release to the present day - that's indisputable. Equally, it's indisputable that GCC is one of the most popular and powerful compilers out there. By these metrics, the claims are entirely correct.
However, having said that, the modern GCC wouldn't pass the "heraldry test" and there have been more than a few occasions when politics have delayed progress or disrupted true openness. The first of these is really a non-issue unless GCC applies for a coat of arms, but the second is more problematic. As GCC grows and matures, the more politics interferes, the more likely we are to see splintering.
Indeed, rival FLOSS compiler projects are taking off already, suggesting that the splintering has become enough of a problem for other projects to be able to reach critical mass.
Personally, I'd like to see GCC celebrate a 50 year anniversary as the top compiler. Language frontend developers can barely keep up with GCC; they won't be able to keep up with other compilers as well. Maximum language richness means you want as few core engine APIs as possible, where the APIs have everything needed to support the languages out there. GCC can do that and has done for some time, which makes it a great choice.
But the GCC team (and the GLibC team) could do with being less provincial and more open. Those will be key to the next 25 years.
Posted Mar 22, 2012 17:17 UTC (Thu) by josh (subscriber, #17465)

I agree entirely. Personally, I think GCC would benefit massively from a better culture of patch review. As far as I can tell, GCC's development model seems designed around having contributors do enough work to justify giving them commit access, and *then* they can actually contribute. GCC doesn't do a good job of handling mailed patches or casual contributors.
On top of that, GCC still uses Subversion for their primary repository, rather than a sensible distributed version control system. A Git mirror exists, but gcc.gnu.org doesn't point to it anywhere prominent. As a result, GCC doesn't support the model of "clone, make a series of changes, and publish your branch somewhere"; only people with commit access do their work on branches. And without a distributed system, people can't easily make a pile of small changes locally (as separate commits) rather than one giant change, nor can they easily keep their work up to date with changes to GCC.
Changing those two things would greatly reduce the pain of attempting to contribute to GCC, and thus encourage a more thriving development community around it.
(That leaves aside the huge roadblock of having to mail in paper copyright assignment forms before contributing non-trivial changes, but that seems unlikely to ever change.)
Posted Mar 22, 2012 18:51 UTC (Thu) by Lionel_Debroux (subscriber, #30014)

Another thing that would reduce the pain of contributing to GCC is a code base with a lower entry barrier. Despite the introduction of the plugin architecture in GCC, which already lowered it quite a bit, the GCC code base is still held to be harder to hack on, less modular, and less versatile than the LLVM/Clang code base.
The rate of progress on Clang has been impressive: self-hosting occurred only two years ago, followed three months later by building Boost without defect macros, and six months later by building Qt. On the day g++ 4.7 is released, clang++ is the only compiler whose C++11 support can be said to rival that of g++ (clang++ doesn't support atomics and forward declarations for enums, but fully supports alignment).
GCC isn't alone in not having switched to DVCS yet: LLVM, and its sub-projects, haven't either... However, getting commit access there is quite easy, and no copyright assignment paperwork is required.
Posted Mar 23, 2012 3:20 UTC (Fri) by wahern (subscriber, #37304) [Link]
I've delved into both GCC and clang to write patches, albeit simple ones. GCC is definitely arcane, but both are pretty impenetrable initially. You can glance at the clang source code and fool yourself into thinking it's easy to hack, but there's no shortage of things to complain about.
Compiler writing is extremely well-trodden ground. It shouldn't be surprising that it's fairly easy to go from 0-60 quickly. But it's a marathon, not a sprint. The true test of clang/LLVM is whether it can weather having successive generations of developers hack on it without turning into something that's impossible to work with. GCC has clearly managed this, despite all the moaning, and despite not being sprinkled with magic C++/OOP fairy dust. The past few years have seen tremendously complex features added, and clang/LLVM isn't keeping pace.
And as far as C++11 support, they look neck-and-neck to me:
Posted Mar 22, 2012 20:59 UTC (Thu) by james_ulrich (guest, #83666) [Link]
Having lurked around gcc for close to 5 years, it seems to me like the whole patch review culture simply stems from the fact that there is not a single person who cares about the compiler as a whole. Sure, individuals care about their passes (IRA stands out here as being well taken care of), but seldom about what happens outside of them. And when people do do reviews, it very much feels like the response is just "I'll just ack it to get you off my back", which obviously doesn't do wonders for quality. Unless a Linus-of-GCC person comes along, I don't see much long-term future for the project.
The recent decision to move the project code base to C++ is also something that I think will actually hurt them badly in the long run. The GCC code base is very hard to read as-is and moving it to a language that is notorious for being hard to read and understand will not make things any better. (I'm well aware that some amazing pieces of code have been written in C++, but it is not a simple fix to the code cleanliness problem)
Posted Mar 22, 2012 21:18 UTC (Thu) by HelloWorld (subscriber, #56129) [Link]
> The GCC code base is very hard to read as-is and moving it to a language that is notorious for being hard to read and understand will not make things any better.
The GCC code base is actually a perfect example of things being convoluted because of missing functionality in the C language. C++, when used in a sensible way, is a way to fix this.
Posted Mar 22, 2012 22:05 UTC (Thu) by james_ulrich (guest, #83666) [Link]
You mean C++ would magically make 2000+ line functions with variable declarations spanning over 50 lines easy to read? I think there are much lower hanging fruit in making the GCC code base readable before throwing C++ at it would be beneficial.
The decision to go with C++ seems (to me, an outside observer) to have been driven firstly by some people (I remember Ian Lance Taylor's name, but there were others pushing) "because I like to code in C++", rather than by a pressing need for a feature that would make the code clearer.
Posted Mar 22, 2012 23:30 UTC (Thu) by elanthis (guest, #6227) [Link]
> You mean C++ would magically make 2000+ line functions with variable declarations spanning over 50 lines easy to read? I think there are much lower hanging fruit in making the GCC code base readable before throwing C++ at it would be beneficial.
Recompiling existing crappy C code with a C++ compiler does no such thing. It may very well provide the tools to rewrite those functions in a readable, sane way that C cannot easily match.
The one clear winner in C++ is data structures and templates. I cannot stress the importance of that enough.
The second you have to write a data structure that uses nothing but void* elements, or which has to be written as a macro, or which has to be copied-and-pasted for every different element type, you have a serious problem.
GCC is a heavy user of many complex data structures, many of which are written as macros. Compare this to the LLVM/Clang codebase, where such data structures are written once in clean, readable, testable, debuggable C++ code, and reused in many places with an absolute minimum of fuss or danger.
I present you with the following link, which illustrates a number of very useful data structures in LLVM/Clang that are used all over the place, and which either do not exist in GCC, exist but are a bitch to use correctly, or are copy-pasted all over the place:
Posted Mar 23, 2012 6:36 UTC (Fri) by james_ulrich (guest, #83666) [Link]
I can see that the structures and constructs used in compilers lend themselves very well to the features of C++.
My point is that the reason GCC is a mess is not because it is written in C. Even with C++, 2000-line functions need to be logically split, and 20-line if() statements with subexpressions nested 5 levels deep also need to be split to make them readable. These, and other, de-facto coding style idiosyncrasies need to be fixed (or at least agreed upon not to write code like that), which is in no way affected by the C/C++ decision.
GCC also has this "property", let's say, that code is never actually re-written, only new methods added in parallel to the old ones. Classic examples are the CC_FLAGS/cc0 thing and, best of all, reload. Everyone knew it sucked 15 years ago, yet only now are motions being made in the form of LRA to replace it (which, BTW, is in no way motivated by using C++). The same can be said for the old register allocator, combine, etc. I somehow doubt that C++ alone would magically motivate anyone to start rewriting these old, convoluted but critical pieces.
Based on past observations, my prediction for GCC-in-C++ is that all the old ugly code will simply stay, the style will not really change, but now it will be ugly code mixed with C++ constructs.
Posted Mar 23, 2012 2:00 UTC (Fri) by HelloWorld (subscriber, #56129) [Link]
I would have responded to your posting, but elanthis was faster at making my point.
Posted Mar 23, 2012 3:31 UTC (Fri) by wahern (subscriber, #37304) [Link]
Indeed. If you read the clang source code, instead of having 2000-line functions, you have things implemented with something approximating 2000 single-line functions. Both are impenetrable. Where GCC abuses macros, clang/LLVM abuses classing and casting. (You wouldn't think that possible, but analyze the clang code for awhile and you'll see what I mean.)
Posted Mar 22, 2012 23:59 UTC (Thu) by slashdot (subscriber, #22014) [Link]
It's hard to read BECAUSE it is not in C++, obviously.
Although the fundamental reason it's hard to read is because it's not fully a library like LLVM/Clang, so they don't need to write clean reusable code with documented interfaces, and it shows.
The real question is: does it make sense to try to clean up, modularize and "C++ize" gcc?
Or is it simpler and more effective to just stop development on GCC, and move to work on a GPL- or LGPL-licensed fork of LLVM, porting any good things GCC has that LLVM doesn't?
Posted Mar 23, 2012 6:59 UTC (Fri) by james_ulrich (guest, #83666) [Link]
Why does everyone pass around using C++ as some magic bullet that fixes all ugliness now and forever? It doesn't and it never will. The only lesson to be learnt from LLVM is that, when *starting from scratch*, a compiler can be well written in C++. Extrapolating that to "GCC's main problem is its not being written in C++, and doing so will fix all our problems" is plain idiotic.
Even if you start coding in C++, you still need to think about how to split long functions, ridiculous if() statements and make other general ugliness clearer. Take this (random example, there are much worse ones):
if (REG_P (src) && REG_P (dest)
&& ((REGNO (src) < FIRST_PSEUDO_REGISTER
&& ! fixed_regs[REGNO (src)]
&& CLASS_LIKELY_SPILLED_P (REGNO_REG_CLASS (REGNO (src))))
|| (REGNO (dest) < FIRST_PSEUDO_REGISTER
&& ! fixed_regs[REGNO (dest)]
&& CLASS_LIKELY_SPILLED_P (REGNO_REG_CLASS (REGNO (dest))))))
How exactly will C++ make this more obvious? Of course it won't.
And, no, GCC not being a library is not its main problem either.
Posted Mar 23, 2012 4:06 UTC (Fri) by Cyberax (subscriber, #52523) [Link]
Rewriting stuff in another language is actually a good way to clean up the code.
Which gcc badly needs.
Posted Mar 23, 2012 13:27 UTC (Fri) by jzbiciak (✭ supporter ✭, #5246) [Link]
Allow the cynic in me to make a possibly unfounded comment:
For some of GCC's ugliness, more of the improvement may come from the "rewrite" part than the "in C++" part. The "in C++" part just encourages a more thorough refactoring and rethinking of the problem, than a superficial tweaking-for-less-ugly.
In any case, nothing will fix GNU's ugly indenting standards as long as the language has a C/C++ style syntax. ;-)
Posted Mar 22, 2012 22:26 UTC (Thu) by flewellyn (subscriber, #5047) [Link]
I'm sorry, I didn't catch the reference. What is the "heraldry test"?
Posted Mar 23, 2012 2:26 UTC (Fri) by ghane (subscriber, #1805) [Link]
In this case the current GCC is from the bastard (and disowned) son of the family (EGCS), who took over the coat of arms when the legitimate branch of the family died out, and was blessed by all. http://en.wikipedia.org/wiki/GNU_Compiler_Collection#EGCS...
My grandfather's axe, my father changed the handle, I changed the blade, but it is still my grandfather's axe.
Posted Mar 23, 2012 20:43 UTC (Fri) by JoeBuck (subscriber, #2330) [Link]
The use of the metaphor is mistaken. Much of the code in the first egcs release that wasn't in GCC 2.7.x was already checked in to the FSF tree, and merges continued to take place back and forth. Thinking that GCC was somehow a completely new compiler with the same name after the EGCS/GCC remerger is just wrong. Furthermore, it was the same people developing the compiler before and after. What really happened was that there was a management shakeup.
This manual documents how to use the GNU compilers, as well as their features and incompatibilities, and how to report bugs. It corresponds to GCC version 3.4.4. The internals of the GNU compilers, including how to port them to new targets and some information about how to write front ends for new languages, are documented in a separate manual.
The software used to create gcc 3.4.2 (the steps are very similar for earlier versions of gcc) was all from packages on sunfreeware.com. These include the gcc-3.3.2, bison-1.875d, flex-2.5.31, texinfo-4.2, autoconf-2.59, make-3.80, and automake-1.9 packages. It may also be important to install the libiconv-1.8 package to use some of the languages in gcc 3.4.2. See also a comment below about the libgcc package.
There are differences between this version of gcc and previous 2.95.x versions on Solaris systems. For details, go to
- I downloaded the gcc-3.4.2.tar.gz source from the GNU site.
- I put the source in a directory with plenty of disk space.
- I then ran:
  gunzip gcc-3.4.2.tar.gz
  tar xvf gcc-3.4.2.tar (this creates a directory gcc-3.4.2)
  cd gcc-3.4.2
  mkdir objdir
  cd objdir
  ../configure --with-as=/usr/ccs/bin/as --with-ld=/usr/ccs/bin/ld --disable-nls
  (except in the case of Solaris 2.5 on SPARC, where I used ../configure --with-as=/usr/ccs/bin/as --with-ld=/usr/ccs/bin/ld --disable-nls --disable-libgcj --enable-languages=c,c++,objc and left out the other language options, like gnat Ada. I chose to use the as and ld that come with Solaris and are usually in the /usr/ccs/bin directory; these files are only there if the SUNW developer packages from Solaris have been installed. I have noticed problems with the NLS support, so I disable it. The default installation directory is /usr/local.)
  make bootstrap
  make install (this puts a lot of files in /usr/local subdirectories, and the gcc and other executables in /usr/local/bin)
- I put /usr/local/bin and /usr/ccs/bin in my PATH environment.
In particular, gcc-3.4.2 offers support for the creation of 64-bit executables when the source code permits it. Programs like top, lsof, ipfilter, and others support, and may need, such compiles to work properly when running the 64-bit versions of Solaris 7, 8, and 9 on SPARC platforms. In some cases, simply using the -m64 flag for gcc during compiles (which may require either editing the Makefiles to add -m64 to CFLAGS or just doing gcc -m64 on command lines) works.
When you compile something with any of these compilers, the executable may end up depending on one or more of the libraries in /usr/local/lib such as libgcc_s.so. An end user may need these libraries, but not want the entire gcc file set. I have provided a package called libgcc (right now this is for gcc-3.3.x but a version for 3.4.x is being created) for each level of Solaris. This contains all the files from /usr/local/lib generated by a gcc package installation. An end user can install this or a subset. You can determine if one of these libraries is needed by running ldd on an executable to see the library dependencies.
I am happy to hear about better ways to create gcc or problems that may be specific to my packages. Detailed problems with gcc can be asked in the gnu.gcc.help newsgroup or related places.
Phil's Solaris hints
#if defined (__SVR4) && defined (__sun)
This should work on gcc, Sun cc, and lots of other compilers, on both SPARC and Intel.
If for some reason, you want to know that Sun forte CC (c++) compiler is being used, something that seems to work is
#if defined(__SUNPRO_CC)
Whereas for Forte cc (regular C), you can use
Developing for Linux on Intel - For many Windows* developers, Linux* presents a learning challenge. Not only does Linux have a different programming model, but it also requires its own toolchain, as programmers must leave behind the Visual Studio* (VS) or Visual Studio* .NET (VS.NET) suites and third-party plug-in ecosystem. This article helps Windows developers understand the options available as they seek to replicate, on Linux or Solaris, the rich and efficient toolchain experience they've long enjoyed on Windows.
One persistent misfeature of open source development is thoughtless mimicry, copying the behaviors of other projects without considering if they work or if there are better options under the current circumstances. At best, these practices are conventional wisdom, things that everybody believes even if nobody really remembers why. At worst, they're lies we tell ourselves.
Perhaps "lies" is too strong a word. "Myths" is better; these ideas may not be true, but we don't intend to deceive ourselves. We may not even be dogmatic about them, either. Ask any experienced open source developer if his users really want to track the latest CVS sources. Chances are, he doesn't really believe that.
In practice, though, what we do is more important than what we say. Here's the problem. Many developers act as if these myths are true. Maybe it's time to reconsider our ideas about open source development. Are they true today? Were they ever true? Can we do better?
Some of these myths also apply to proprietary software development. Both proprietary and open models have much room to improve in reliability, accessibility of the codebase, and maturity of the development process. Other myths are specific to open source development, though most stem from treating code as the primary artifact of development (not binaries), not from any relative immaturity in its participants or practices.
Not every open source developer believes every one of these ideas, either. Many experienced coders already have good discipline and well-reasoned habits. The rest of us should learn from their example, understanding when and why certain practices work and don't work.
Publishing your Code Will Attract Many Skilled and Frequent Contributors
Myth: Publicly releasing open source code will attract flurries of patches and new contributors.
Reality: You'll be lucky to hear from people merely using your code, much less those interested in modifying it.
While user (and developer) feedback is an advantage of open source software, it's not required by most licenses, nor is it guaranteed by any social or technical means. When was the last time you reported a bug? When was the last time you tried to fix a bug? When was the last time you produced a patch? When was the last time you told a developer how her work solved your problem?
Some projects grow large and attract many developers. Many more projects have only a few developers. Most of the code in a given project comes from one or a few developers. That's not bad; most projects don't need to be huge to be successful, but it's worth keeping in mind.
The problem may be the definition of success. If your goal is to become famous, open source development probably isn't for you. If your goal is to become influential, open source development probably isn't for you. Those may both happen, but it's far more important to write and to share good software. Success is also hard to judge by other traditional means. It's difficult to count the number of people using your code, for example.
It's far more important to write and to share good software. Be content to produce a useful program of sufficiently high quality. Be happy to get a couple of patches now and then. Be proud if one or two developers join your project. There's your success.
This isn't a myth because it never happens. It's a myth because it doesn't happen as often as we'd like.
Feature Freezes Help Stability
Myth: Stopping new development for weeks or months to fix bugs is the best way to produce stable, polished software.
Reality: Stopping new development for awhile to find and fix unknown bugs is fine. That's only a part of writing good software.
The best way to write good software is not to write bugs in the first place. Several techniques can help, including test-driven development, code reviews, and working in small steps. All three ideas address the concept of managing technical debt: entropy increases, so take care of small problems before they grow large.
Think of your project as your home. If you put things back when you're done with them, take out the trash every few days, and generally keep things in order, it's easy to tidy up before having friends over. If you rush around in the hours before your party, you'll end up stuffing things in drawers and closets. That may work in the short term, but eventually you'll need something you stashed away. Good luck.
By avoiding bugs where possible, keeping the project clean and working as well as possible, and fixing things as you go, you'll make it easier for users to test your project. They'll probably find smaller bugs, as the big ones just won't be there. If you're lucky (the kind of luck attracted by clear-thinking and hard work), you'll pick up ideas for avoiding those bugs next time.
Another goal of feature freezes is to solicit feedback from a wider range of users, especially those who use the code in their own projects. This is a good practice. At best, only a portion of the intended users will participate. The only way to get feedback from your entire audience is to release your code so that it reaches as many of them as possible.
Many of the users you most want to test your code before an official release won't. The phrase "stable release" has special magic that "alpha," "beta," and "prerelease" lack. The best way to get user feedback is to release your code in a stable form.
Make it easy to keep your software clean, stable, and releasable. Make it a priority to fix bugs as you find them. Seek feedback during development, but don't lose momentum for weeks on end as you try to convince everyone to switch gears from writing new code to finding and closing old bugs.
This isn't a myth because it's bad advice. It's only a myth because there's better advice.
The Best Way to Learn a Project is to Fix its Bugs and Read its Code
Myth: New developers interested in the project will best learn the project by fixing bugs and reading the source code.
Reality: Reading code is difficult. Fixing bugs is difficult and probably something you don't want to do anyway. While giving someone unglamorous work is a good way to test his dedication, it relies on unstructured learning by osmosis.
Learning a new project can be difficult. Given a huge archive of source code, where do you start? Do you just pick a corner and start reading? Do you fire up the debugger and step through? Do you search for strings seen in the user interface?
While there's no substitute for reading the code, using the code as your only guide to the project is like mapping the California coast one pebble at a time. Sure, you'll get a sense of all the details, but how will you tell one pebble from the next? It's possible to understand a project by working your way up from the details, but it's easier to understand how the individual pieces fit together if you've already seen them from ten thousand feet.
Writing any project over a few hundred lines of code means creating a vocabulary. Usually this is expressed through function and variable names. (Think of "interrupts," "pages," and "faults" in a kernel, for example.) Sometimes it takes the form of a larger metaphor. (Python's Twisted framework uses a sandwich metaphor.)
Your project needs an overview. This should describe your goals and offer enough of a roadmap so people know where development is headed. You may not be able to predict volunteer contributions (or even if you'll receive any), but you should have a rough idea of the features you've implemented, the features you want to implement, and the problems you've encountered along the way.
If you're writing automated tests as you go along (and you certainly should be), these tests can help make sense of the code. Customer tests, named appropriately, can provide conceptual links from working code to required features.
Keep your overview and your tests up-to-date, though. Outdated documentation can be better than no documentation, but misleading documentation is, at best, annoying and unpleasant.
This isn't a myth because reading the code and fixing bugs won't help people understand the project. It's a myth because the code is only an artifact of the project.
Packaging Doesn't Matter
Myth: Installation and configuration aren't as important as making the source available.
Reality: If it takes too much work just to get the software working, many people will silently quit.
Potential users become actual users through several steps. They hear about the project. Next, they find and download the software. Then they must brave the installation process. The easier it is to install your software, the sooner people can play with it. Conversely, the more difficult the installation, the more people will give up, often without giving you any feedback.
Granted, you may find people who struggle through the installation, report bugs, and even send in patches, but they're relatively few in number. (I once wrote an installation guide for a piece of open source software and then took a job working on the code several months later. Sometimes it's worth persisting.)
Difficulties often arise in two areas: managing dependencies and creating the initial configuration. For a good example of installation and customization, see Brian Ingerson's Kwiki. The amount of time he put into making installation easier has paid off by saving many users hours of customization. Those savings, in turn, have increased the number of people willing to continue using his code. It's so easy to use, why not set up a Kwiki for every good idea that comes along?
It's OK to expect that mostly programmers will use development tools and libraries. It's also OK to assume that people should skim the details in the INSTALL files before trying to build the code. If you can't easily build, test, and install your code on another machine, though, you have no business releasing it to other people.
It's not always possible, nor advisable, to avoid dependencies. Complex web applications likely require a database, or a web server with special configurations (mod_python, or a Java stack). Meta-distributions can help. Apache Toolbox can take out much of the pain of Apache configuration. Perl bundles can make it easier to install several CPAN modules. OS packages (RPMs, debs, ebuilds, ports, and packages) can help.
It takes time to make these bundles and you might not have the hardware, software, or time to write and test them on all possible combinations. That's understandable; source code is the real compatibility layer on the free Unix platforms anyway.
At a minimum, however, you should make your dependencies clear. Your configuration process should detect as many dependencies as possible without user input. It's OK to require more customization for more advanced features. However, users should be able to build and to install your software without having to dig through a manual or suck down the latest and greatest code from CVS for a dozen other projects.
This isn't a myth because people really believe software should be difficult to install. It's a myth because many projects don't make it easier to install.
It's Better to Start from Scratch
Myth: Bad or unappealing code or projects should be thrown away completely.
Reality: Solving the same simple problems again and again wastes time that could be applied to solving new, larger problems.
Writing maintainable code is important. Perhaps it's the most important practice of software development. It's secondary, though, to solving a problem. While you should strive for clean, well-tested, and well-designed code that's reasonably easy to modify as you add features, it's even more important that your code actually works.
Throwing away working code is usually a mistake. This applies to functions and libraries as well as entire programs. Sometimes it seems as if most of the effort in writing open source software goes to creating simple text editors, weblogs, and IRC clients that will never attract more than a handful of users.
Many codebases are hard to read. It's hard to justify throwing away the things the code does well, though. Software isn't physical; it's relatively easy to change, even at the design level. It's not a building, where deciding to build four stories instead of two means digging up the entire foundation and starting over. Chances are, you've already solved several problems that you'd need to rediscover, reconsider, re-code, and re-debug if you threw everything away.
Every new line of code you write has potential bugs. You will spend time debugging them. Though discipline (such as test-driven development, continual code review, and working in small steps) mitigates the effects, it doesn't compare in effectiveness to working on already-debugged, already-tested, and already-reviewed code.
Too much effort is spent rewriting the simple things and not enough effort is spent reusing existing code. That doesn't mean you have to put up with bad (or simply different) ideas in the existing code. Clean them up as you go along. It's usually faster to refine code into something great than to wait for it to spring fully formed and perfect from your mind.
This isn't a myth because rewriting bad code is wrong. It's a myth because it can be much easier to reuse and to refactor code than to replace it wholesale.
Programs Suck; Frameworks Rule!
Myth: It's better to provide a framework for lots of people to solve lots of problems than to solve only one problem well.
Reality: It's really hard to write a good framework unless you're already using it to solve at least one real problem.
Which is better, writing a library for one specific project or writing a library that lots of projects can use?
Software developers have long pursued abstraction and reuse. These twin goals have driven the adoption of structured programming, object orientation, and modern aspects and traits, though not exactly to roaring successes. Whether because of proprietary code, patent encumbrances, or not-invented-here stubbornness, there may be more people producing "reusable" code than actually reusing code.
Part of the problem is that it's more glamorous (in the delusive sense of the word) to solve a huge problem. Compare "Wouldn't it be nice if people had a fast, cross-platform engine that could handle any kind of 3D game, from shooter to multiplayer RPG to adventure?" to "Wouldn't it be nice to have a simple but fun open source shooter?"
Big ambitions, while laudable, have at least two drawbacks. First, big goals make for big projects: projects that need more resources than you may have. Can you draw in enough people to spend dozens of person-years on a project, especially when that project only makes it possible to spend more time making the actual game? Can you keep the whole project in your head?
Second, it's exceedingly difficult to know what is useful and good in a framework unless you're actually using it. Is one particular function call awkward? Does it take more setup work than you need? Have you optimized for the wrong ideas?
Curiously, some of the most portable and flexible open source projects today started out deliberately small. The Linux kernel originally ran only on x86 processors. It's now impressively portable, from embedded processors to mainframes and super-computer clusters. The architecture-dependent portions of the code tend to be small. Code reuse in the kernel grew out of refining the design over time.
Solve your real problem first. Generalize after you have working code. Repeat. This kind of reuse is opportunistic.
This isn't a myth because frameworks are bad. This is a myth because it's amazingly difficult to know what every project of a type will need until you have at least one working project of that type.
I'll Do it Right *This* Time
Myth: Even though your previous code was buggy, undocumented, hard to maintain, or slow, your next attempt will be perfect.
Reality: If you weren't disciplined then, why would you be disciplined now?
Widespread Internet connectivity and adoption of Free and Open programming languages and tools make it easy to distribute code. On one hand, this lowers the barriers for people to contribute to open source software. On the other hand, the ease of distribution makes finding errors less crucial. This article has been copyedited, but not to the same degree as a print book; it's very easy to make corrections on the Web.
It's very easy to put out code that works, though it's buggy, undocumented, slow, or hard to maintain. Of course, imperfect code that solves a problem is much better than perfect code that doesn't exist. It's OK (and even commendable) to release code with limitations, as long as you're honest about those limitations, though you should remove the ones that don't make sense.
The problem is putting out bad code knowingly, expecting that you'll fix it later. You probably won't. Don't keep bad code around. Fix it or throw it away.
This may seem to contradict the idea of not rewriting code from scratch. In conjunction, though, both ideas summarize to the rule of "Know what's worth keeping." It's OK to write quick and dirty code to figure out a problem. Just don't distribute it. Clean it up first.
Develop good coding habits. Training yourself to write clean, sensible, and well-tested code takes time. Practice on all code you write. Getting out of the habit is, unfortunately, very easy.
If you find yourself needing to rewrite code before you publish it, take notes on what you improve. If a maintainer rejects a patch over cleanliness issues, ask the project for suggestions to improve your next attempt. (If you're the maintainer, set some guidelines and spend some time coaching people along as an investment. If it doesn't immediately pay off to your project, it may help along other projects.) The opportunity for code review is a prime benefit of participating in open source development. Take advantage of it.
This isn't a myth because it's impossible to improve your coding habits. This is a myth because too few developers actually have concrete, sensible plans to improve.
Warnings Are OK
Myth: Warnings are just warnings. They're not errors and no one really cares about them.
Reality: Warnings can hide real problems, especially if you get used to them.
It's difficult to design a truly generic language, compiler, or library partially because it's impossible to imagine all of its potential uses. The same rule applies to reporting warnings. While you can detect some dangerous or nonsensical conditions, it's possible that users who really know what they are doing should be able to bypass those warnings. In effect, it's sometimes very useful to be able to say, "I realize this is a nasty hack, but I'm willing to put up with the consequences in this one situation."
Other times, what you consider a warnable or exceptional condition may not be worth mentioning in another context. Of course, the developer using the tool could just ignore the warnings, especially if they're nonfatal and easily shunted off elsewhere (even to /dev/null). This is a problem.
When the "low oil pressure" or "low battery" light comes on in a car, the proper response is to make sure that everything is running well. It's possible that the light or a sensor is malfunctioning, but ignoring the real problem, whether a bad light or an oil leak, may exacerbate further problems. If you assume that the light has malfunctioned but never replace it, how will you know when you're really out of oil?
Similarly, an error log filled with trivial, fixable warnings may hide serious problems. Any well-designed tool generates warnings for a good reason: you're doing something suspicious.
When possible, purge all warnings from your code. If you expect a warning to occur, and you have a good reason for it, disable it in the narrowest possible scope. If it's generated by something the user does, and the user is privy to the warning, make it clear how to avoid that condition.
Running a program that spews crazy font configuration questions and null widget access messages to the console is noisy and incredibly useless to anyone who'd rather run your software than fix your mess. Besides that, it's much easier to dig through error logs that only track real bugs and failures. Anything that makes it easier to find and fix bugs is nice.
This isn't a myth because people really ignore warnings. It's a myth because too few people take the effort to clean them up.
End Users Love Tracking CVS
Myth: Users don't mind upgrading to the latest version from CVS for a bugfix or a long-awaited feature.
Reality: If it's difficult for you to provide important bugfixes for previous releases, your CVS tree probably isn't very stable.
It's tricky to stay abreast of a project's latest development sources. Not only do you have to keep track of the latest check-ins, you may have to guess when things are likely to spend more time working than crashing and build binaries right then. You can waste a lot of time watching compiles fail. That's not much fun for a developer. It's even less exciting for someone who just wants to use the software.
Building software from CVS also likely means bypassing your distribution's usual package manager. That can get tangled very quickly. Try keeping the required libraries up to date for just two applications you compiled on your own for a while. You'll gain a new appreciation for people who make and test packages.
There are two main solutions to this trouble.
First, keep your main development sources stable and releasable. It should be possible for a dedicated user (or, at least, a package maintainer for a distribution) to check out the current development sources and build a working program with reasonable effort. This is also in your best interests as a developer: the easier the build and the fewer compile, build, and installation errors you allow to persist, the easier it is for existing developers to continue their work and for new developers to start their work.
Second, release your code regularly. Backport fixes if you have to fix really important bugs between releases; that's why tags and branches exist in CVS. This is much easier if you keep your code stable and releasable. Though there's no guarantee users will update every release, working on a couple of features per release tends to be easier anyway.
This isn't a myth because developers believe that development moves too fast for snapshots. It's a myth because developers aren't putting out smaller, more stable, more frequent releases.
Common Sense Conclusions
Again, these aren't myths because they're never true. There are good reasons to have a feature freeze. There are good reasons to invite new developers to get to know a project by looking through small or simple bug reports. Sometimes, it does make sense to write a framework. They're just not always true.
It's always worth examining why you do what you do. What prevents you from releasing a new stable version every month or two? Can you solve that problem? Solve it. Would building up a good test suite help you cut your bug rates? Build it. Can you refactor a scary piece of code into something saner in a series of small steps? Refactor it.
Making your source code available to the world doesn't make all of the problems of software development go away. You still need discipline, intelligence, and sometimes, creative solutions to weird problems. Fortunately, open source developers have more options. Not only can we work with smart people from all over the world, we have the opportunity to watch other projects solve problems well (and, occasionally, poorly).
Learn from their examples, not just their code.
chromatic is the technical editor of the O'Reilly Network and the co-author of Perl Testing: A Developer's Notebook.
Here at the U of C, we have a big grid of Sun Blade 1000 workstations, with gcc and g++ for compilers. There are some subtle differences between GCC/Solaris and GCC/x86-Linux, and this is a list of what I've come across so far.
This file describes differences between GNU compilers on x86 machines and Solaris machines. These are all from experience, so who knows how accurate they are.
Note that I'm assuming the code is being developed on a Linux box, and then later being ported.
Textbooks are full of good advice:
Use other aids as well. Explaining your code to someone else (even a teddy bear) is wonderfully effective. Use a debugger to get a stack trace. Use some of the commercial tools that check for memory leaks, array bounds violations, suspect code and the like. Step through your program when it has become clear that you have the wrong picture of how the code works.
Brian W. Kernighan, Rob Pike, The practice of programming, 1999 (Chapter 5: Debugging)
Enable every optional warning; view the warnings as a risk-free, high-return investment in your program. Don't ask, "Should I enable this warning?" Instead ask, "Why shouldn't I enable it?" Turn on every warning unless you have an excellent reason not to.
Steve Maguire, Writing solid code, 1993
Sound familiar? But with which options? This page tries to answer that kind of question.
Constructive feedback is welcome.
Hi,

gcc 2.95.1, Solaris 2.7:

c++ -Wall -g -W -Wpointer-arith -Wbad-function-cast -Wcast-align -Wmissing-prototypes -Wstrict-prototypes -c -o glx/i_figureeight.o -DHAVE_CONFIG_H -DDEF_FILESEARCHPATH=\"/usr/remote/lib/app-defaults/%N%C%S:/usr/remote/lib/app-defaults/%N%S\" -I. -I.. -I../../xlock/ -I../.. -I/usr/openwin/include -I/usr/remote/include/X11 -I/usr/remote/include -I/usr/dt/include -g -O2 ../../modes/glx/i_figureeight.cc

In file included from ../../xlock/xlock.h:144,
                 from ../../modes/glx/i_twojet.h:7,
                 from ../../modes/glx/i_threejet.h:3,
                 from ../../modes/glx/i_threejetvec.h:3,
                 from ../../modes/glx/i_figureeight.h:3,
                 from ../../modes/glx/i_figureeight.cc:1:
/usr/openwin/include/X11/Xlib.h:2063: ANSI C++ forbids declaration `XSetTransientForHint' with no type

I maintain xlock, and older versions no longer compile out of the box. I am not in control of the include files that Sun distributes in /usr/openwin. A warning I could live with more easily. The only way I see around this is to require -fpermissive when using g++ on Solaris. My worry is that -fpermissive may not be supported by all versions of g++ and may cause another error.

-- Cheers, David A. Bagley (email@example.com, http://www.tux.org/~bagleyd/, xlockmore and more: ftp://ftp.tux.org/pub/tux/bagleyd)
(Posted Apr 22, 2005 5:14 UTC (Fri) by guest yem) (Post reply)
Still no strong signature for the tarballs. What is with these guys?
(Posted Apr 22, 2005 6:43 UTC (Fri) by subscriber nix) (Post reply)
Er, the tarballs all have OpenPGP signatures.
(You can't upload anything to ftp.gnu.org and mirrors anymore without that.)
(Posted Apr 22, 2005 8:12 UTC (Fri) by guest yem) (Post reply)
Ah so they do. Sorry. ftp.gnu.org was down and the mirror I checked isn't carrying the signatures.
All is well :-)
gcc 4.0 available
(Posted Apr 22, 2005 5:21 UTC (Fri) by guest xoddam) (Post reply)
Congratulations and thanks to the gcc maintainers. This will be a big
step forward as it stabilises and becomes a preferred compiler. Though
I'm sure some people will go on using gcc 2.95 forever :-)
gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 6:27 UTC (Fri) by subscriber dank) (Post reply)
Hey, don't knock gcc-2.95.3.
It was a very good release in many ways,
and on some benchmarks, beats every
later version of gcc so far, up to
and including gcc-3.4.
(I haven't tested gcc-4.0.0 yet, but
I gather it won't change that. I'm hoping gcc-4.1.0 finally
knocks gcc-2.95.3 off its last perch, myself.)
gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 6:45 UTC (Fri) by subscriber nix) (Post reply)
It's when the RTL optimizations start getting disabled that you'll see real speedups. Right now most of them are enabled but not doing as much as they used to, which is why GCC hasn't slowed down significantly in 4.x despite having literally dozens of new optimization passes over 3.x.
gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 11:29 UTC (Fri) by guest steven97) (Post reply)
You are making two assumptions that are wrong:
1) rtl optimizers will be disabled. It appears this won't happen any time soon.
2) rtl optimizers do less, so they consume less time. I wish that were
true. There is usually no relation between the number of transformations
and the running time of a pass. Most of the time is in visiting
instructions and doing the data flow analysis. That takes time even if
there isn't a single opportunity for a pass to do something useful.
gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 12:39 UTC (Fri) by subscriber nix) (Post reply)
1) rtl optimizers will be disabled. It appears this won't happen any time soon.
I'm aware that you're involved in an ongoing flamewar, er, I mean animated discussion in this area, and I'm staying well out of it :)
If the damned things weren't so intertwined they'd be easier to ditch: and indeed it's their intertwined and hard-to-maintain nature that makes it all the more important to try to ditch them (or at least simplify them to an absolute minimum).
Obviously some optimizations (peepholes and such) actually benefit from being performed at such a low level, but does anyone really think that loop analysis, for instance, should be performed on RTL? It is, but its benefits at that level are... limited compared to its time cost.
2) rtl optimizers do less, so they consume less time. I wish that were true. There is usually no relation between the number of transformations and the running time of a pass. Most of the time is in visiting instructions and doing the data flow analysis. That takes time even if there isn't a single opportunity for a pass to do something useful.
Er, but the compiler's not slowed down significantly even with optimization on. Are the tree-ssa passes really so fast that they add nearly no time to the compiler's runtime? My -ftime-report dumps don't suggest so.
gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 18:02 UTC (Fri) by guest steven97) (Post reply)
Most tree optimizers _are_ fast, but not so fast that they consume no
time at all. But they optimize so much away that the amount of RTL
produced is less. If that is what you had in mind when you said "RTL
optimizers do less", then yes, there is just less RTL to look at, so
while most RTL passes still look at the whole function, they look at a
smaller function most of the time. That is one reason.
The other reason why GCC4 is not slower (not much ;-) than GCC3 is that
many rather annoying quadratic algorithms in the compiler have been
removed. With a little effort, some of the patches for that could be
backported to e.g. GCC 3.4, and you'd probably get a significantly faster
GCC3. Other patches were only possible because there is an optimization
path now before RTL is generated.
gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 21:02 UTC (Fri) by subscriber nix) (Post reply)
That's what I meant, yes, and it's so intuitively obvious that it amazed me to see you disagreeing. Obviously less code -> less work -> less time!
I didn't mean the RTL optimizers had become intrinsically faster (except inasmuch as code's been ripped out of them).
gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 16:03 UTC (Fri) by subscriber Ross) (Post reply)
I'm not sure he was talking about the speed of the compiler. I read it as
talking about the quality of the generated code. I could easily be wrong, though.
gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 17:06 UTC (Fri) by guest mmarq) (Post reply)
" I'm not sure he was talking about the speed of the compiler. I read it as talking about the quality of the generated code. "
And that is what should matter most: *the end user*. Because if people complain about the speed of the compilation process, they should change to a better computer, perhaps an NForce 4/5 board for Athlon64 or Pentium 4, or the latest VIA chipsets with support for SLI and those dual-core CPUs coming out soon!...
... eh, right! Support for those beasts isn't good enough in Linux right now! But that isn't much different from what it was in, say, 2001!...
I've no intention to add to any flamewar, but my point, as exposed above, is that the community tends to argue heavily over less important issues. The Linux commercial parties are battling for scraps while the majority of end users not only don't know, but don't want to know, because Linux, like Unix, is viewed as something for geeks or bearded gurus..., and, worst of all, standards move at the speed of a snail, because the philosophy is to add features and avoid standards.
There are hundreds of distros, but I haven't seen one that publishes a report of tested hardware configurations (if anyone knows of one, please link it), or cares about that, or cares about being *religious* about standards, because that is the only way to expose the masses of low-tech end users to the same 'methods' and 'ways' for a much longer period of time, giving distros the chance to get very good at the 'interface for low-tech users' and, in consequence, gain a larger adoption percentage.
The Open Source community is closing itself inside its own technical space! And when, or if, that happens completely, then it's another Unix story, almost a carbon copy.
(Posted Apr 22, 2005 21:16 UTC (Fri) by subscriber sdalley) (Post reply)
Novell/Suse makes a reasonable attempt, see the links on http://cdb.novell.com/?LANG=en_UK .
(Posted Apr 24, 2005 17:09 UTC (Sun) by guest mmarq) (Post reply)
" Novell/Suse makes a reasonable attempt,... "
I've done a search on that site, under 'Certified Hardware', for the companies ASUSTEK, ECS, EPOX, DFI, GIGABYTE, and INTEL, first with the keyword "motherboard" and then without any keyword, which means every piece of hardware.
Since those are manufacturers that also make graphics boards besides mobos, they would represent perhaps more than 70% of a common desktop system, and perhaps more than 70% of the market, well ahead of the integrators HP/Compaq, IBM, Gateway, and Dell all put together. And since some of those manufacturers also make server 'iron', mobos or systems, I believe that represents perhaps not far from 90% of the *whole* (server+desktop) deployed base.
The only results I got were for ASUSTEK, showing old network servers, and INTEL, showing LAN drivers (net adapters), RAID adapters, and network servers, some of them quite old??!!!
Understanding what line of business Novell is in, I still consider this very far from reasonable... not reasonable even for them, if they want to survive in the medium to long term!
gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 21:36 UTC (Fri) by subscriber nix) (Post reply)
The standards-compliance changes in GCC at least (and there have been many) weren't a matter of making GCC reject noncompliant code so much as they were one of making it accept compliant code it'd been wrongly rejecting. I mean, nobody waded into the compiler thinking `Shiver me lexers, what widely-used extension shall I remove today? Arrr!' --- it's more that, say, the entire C++ parser was rewritten (thank you, Mark!), and in the process a bunch of extensions got dropped because they were rarely-used and didn't seem worth reimplementing, and a bunch of noncompliant nonsense that G++ had incorrectly accepted was now correctly rejected, *simply because accepting the nonsense had always been a bug*, just one that had previously been too hard to fix.
(Oh, also, GCC is very much more than `the Linux compiler'. All the free BSDs use it, many commercial Unix shops use it, Cygwin uses it, Apple relies on it for MacOSX, and it's very widely used in the embedded and (I believe) avionics worlds. Even if, with the wave of a magic wand, the Hurd was perfected and Linux dissolved into mist tomorrow, GCC would still be here.)
gcc-2.95.3 was a good vintage...
(Posted Apr 22, 2005 21:22 UTC (Fri) by subscriber nix) (Post reply)
Well in that case he's likely wrong :) Even though the focus of much of the 3.x series was standards-compliance, not optimization, and 3.x didn't have tree-ssa, there have been some notable improvements in that time, like the new i386 backend.
Alas, major performance gains on register-poor arches like IA32 may be hard to realize without rewriting the register allocator --- and the part of GCC that does register allocation (badly) also does so much else, and is so old and crufty, that rewriting it is a heroic task that has so far defeated all comers...
gcc-2.95.3 was a good vintage...
(Posted Apr 25, 2005 6:38 UTC (Mon) by guest HalfMoon) (Post reply)
Alas, major performance gains on register-poor arches like IA32 may be hard to realize without rewriting the register allocator ...
Then there's Opteron, or at least AMD64 ... odd how by the time GCC starts to get serious support for x86, the hardware finally started to grow up (no thanks to Intel of course).
gcc-2.95.3 was a good vintage...
(Posted Apr 24, 2005 20:35 UTC (Sun) by subscriber dank) (Post reply)
Yes, I meant the quality of the generated code.(I care about the speed of compilation, too, but gcc-4.0 is doing fine in that area, I think.)
I'd love to switch to the new compiler, because I value its improved standards compliance, but it's hard for me to argue for a switch when it *slows down* important applications.
I don't need *better* code before I switch, but I do need to verify there are no performance regressions. And sadly, there are still some of those in apps compiled with gcc-3.4.1 compared to gcc-2.95.3. As I said in my original post, I don't expect later versions to definitively beat 2.95.3 until gcc-4.1.
The whole point of gcc-4.0 is to shake out the bugs in the new tree optimization stuff. I am starting to build and test all my apps with gcc-4.0 and plan to help resolve any problems I find, because I want to be ready for gcc-4.1.
Strings are now unavoidably constants
(Posted Apr 22, 2005 11:35 UTC (Fri) by subscriber gdt) (Post reply)
The removal of the -fwritable-strings option will flush out any code yet to be moved to ANSI C. It will be interesting to see how much there is.
Our user group had an experience this week of a program SEGVing from writing to a string constant. The beginner programmer had typed in example code from a C textbook which was popular five years ago, and was obviously confused and concerned that it didn't work. So not all pre-ANSI code will be old.
Strings are now unavoidably constants
(Posted Apr 22, 2005 21:39 UTC (Fri) by subscriber nix) (Post reply)
The removal of the -fwritable-strings option will flush out any code yet to be moved to ANSI C.
I think the removal of -traditional is more likely to do that --- and that happened in 3.3.x ;)
Strings are now unavoidably constants
(Posted Apr 22, 2005 22:38 UTC (Fri) by subscriber gdt) (Post reply)
Sorry, I expressed myself poorly. There's a lot of code out there that is K&R with ANSI function definitions. I'm interested to see how many of these break from this semantic change.
I've no idea if it is a little or a lot. It will be interesting to see.
Strings are now unavoidably constants
(Posted Apr 23, 2005 21:30 UTC (Sat) by subscriber nix) (Post reply)
Well, Apple's preserved the option in their tree (used for MacOS X) because they have some stuff they know breaks...
Softpanorama hot topic of the month
We strive to provide regular, high quality releases, which we want to work well on a variety of native (including GNU/Linux) and cross targets. To that end, we use an extensive test suite and automated regression testers as well as various benchmark suites and automated testers to maintain and improve quality.
The Pentium Compiler Group was founded in late '95 to enhance and support Pentium optimization in GCC (the GNU C Compiler). GCC does a very good job when optimizing, but the Pentium chip's architecture demanded different optimization strategies.
Linux.DaveCentral.com Programming - C-C++, Page 1
gcc for Win32
GCC Development Toolchain for x86-win32 targets
Welcome to the GCC Library
The Minimalist GNU Win32 Package is not a compiler or a compiler suite.
The Minimalist GNU Win32 Package (or Mingw) is simply a set of header files and initialization code which allows a GNU compiler to link programs with one of the C run-time libraries provided by Microsoft. By default it uses CRTDLL, which is built into all Win32 operating systems. This means that your programs are small, stand-alone, and reasonably quick. I personally believe it is a good option for programmers who want to do native Win32 programming, either of new code or when creating a native port of an application. For example, the latest versions of gcc itself, along with many of the supporting utilities, can be compiled using the Mingw headers and libraries.
Visit the Mingw mailing list on the Web at http://www.egroups.com/group/mingw32/. Also see the Mingw32 FAQ at http://www.mingw.org/mingwfaq.shtml.
Mingw was mentioned (in passing, down near the bottom... but that's enough for me) in an interview at O'Reilly. Aside from another mention in a Japanese magazine called "C Magazine" (as a sidebar in an article about Cygwin) this is only the second time I know of that Mingw was mentioned by any 'serious' media. Neat huh?
Both of these compiler suites, based on gcc, were built with Mingw32 and include Mingw32.
These are old and only of historical interest. Real developers interested in the source code should get the much newer Mingw runtime from Mumit Khan's ftp site.
The Cygwin Project by Cygnus Solutions is an attempt to provide a UNIX programming environment on Win32 operating systems. As part of this effort the suite of GNU software development tools (including gcc, the GNU C/C++ compiler) has been ported to Win32. The Cygwin project led directly to the first versions of gcc that could produce Win32 software and allowed me to set up the first version of Mingw32.
For more information on Cygwin, including where to download it and how to subscribe to the Cygwin mailing list, visit the Cygwin Project Page.
Also try this page for more information about GNU programming tools on Win32, and how to install and use them.
I would like to list a bunch of them but... this is all I could find.
A Win32 Programming Tutorial
A tutorial on how to use GNU tools and other free tools to write Win32 programs.
I also have some pointers and downloads of extras, useful tools and alternatives for GNU-Win32 or Minimalist GNU-Win32.
Slashdot GCC 4.0.0 Released
Re:Misplaced blame (Score:4, Funny)
by Screaming Lunatic (526975) on Friday April 22, @04:55AM (#12311278)
Blame the standards committee, not the GCC maintainers.
Insightful? Jesus eff-ing Christ. Now the slashbots don't like standards. I bet you wouldn't be presenting the same argument if this discussion was about the transition from MSVC 6.0 to 7.0/7.1.
Funny? Jesus eff-ing Christ. When did pointing out the hypocrisy of slashdot groupthink become funny? I don't get which part of my original statement is funny.
Re:Misplaced blame (Score:5, Funny)
by Mancat (831487) on Thursday April 21, @11:40PM (#12310121)
Mechanic: Sir, your car is ready.
Customer: Thanks for fixing it so quickly!
Mechanic: We didn't fix it. We just brought it up to standards. Oh, by the way, your air conditioning no longer works, and your rear brakes are now disabled.
Customer: Uhh.. What?
Mechanic: That's right. The standard refrigerant is now R-134A, so we removed your old R-14 air conditioning system. Also, disc brakes are now standard in the automotive world, so we removed your drum brakes. Don't drive too fast.
Customer: What the fuck?
Mechanic: Oh, I almost forgot. Your car doesn't have airbags. We're going to have to remove your car's body and replace it with a giant tube frame lined with air mattresses.
Build-Install OpenSolaris at OpenSolaris.org by Rich Teer
This is the first of two articles in which we describe how to acquire and build the source code for OpenSolaris. The first article provides all the necessary background information (terminology, where to get the tools, and so on) and describes a basic compilation and installation, and the second article will describe a more complicated compilation and installation.
These articles describe how to build and install OpenSolaris; they are not intended to be an "OpenSolaris developers' guide", so information beyond that which is required for building and installing OpenSolaris is not included. This should not be seen as a problem, however, because (as with most other large-scale open source projects) the number of active OpenSolaris developers who will need this knowledge is likely to be small compared to the number of people who will want to build it for their own edification.
These articles assume at least a passing familiarity with building major software projects and some C programming. It is unlikely that someone who is struggling to compile "Hello World" would be trying to compile OpenSolaris! However, we will otherwise assume no knowledge of building Solaris, and describe all the necessary steps.
Developing for Linux on Intel - For many Windows* developers, Linux* presents a learning challenge. Not only does Linux have a different programming model, but it also requires its own toolchain, as programmers must leave behind the Visual Studio* (VS) or Visual Studio* .NET (VS.NET) suites and third-party plug-in ecosystem. This article helps Windows developers understand the options available as they seek to replicate, on Linux, the rich and efficient toolchain experience they've long enjoyed on Windows.
Ah things no longer compiling
In the long term, I think it was a very good thing: coding C (and C++, though I didn't have that much experience with it) got much more strict, and in my experience that removes a lot of possible problems later on.
If someone had a lot of problems porting from 2.95 to 3.2, his code needed to be reviewed anyway. It kind of removes the "boy" from "cowboys" in coders (experience drawn from not-so-embedded systems).
Based on the remarks obtained from the compiler for embedded code (they made a lot of sense) during the switch and gcc becoming more strict, we now even compile everything with -Werror.
In our deeply embedded networking code, we got a speed improvement of 20% just by switching to 3.4 (from 3.3).
Re:i'm having horrible flashbacks... (Score:4,
by MarsDude (74832) on Friday April 22, @02:07AM (#12310773)
|... kind of removes the "boy" from "cowboys"
Which results in..... cows
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.
ABUSE: IPs or network segments from which we detect a stream of probes might be blocked for no less than 90 days. Multiple types of probes increase this period.
Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.
Copyright of original materials belongs to their respective owners. Quotes are made for educational purposes only, in compliance with the fair use doctrine.
FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.
This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...
|You can use PayPal to make a contribution, supporting development of this site and speeding up access. In case softpanorama.org is down, you can use the mirror at softpanorama.info|
The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.
Last modified: October 27, 2015