Softpanorama

May the source be with you, but remember the KISS principle ;-)
Contents Bulletin Scripting in shell and Perl Network troubleshooting History Humor

Scripting Bulletin, 2004



NEWS CONTENTS

Old News ;-)

[Dec 14, 2004] Are Open Source apps for Windows bad for Linux (I don't think so)

Efforts to port parts of KDE to the Microsoft Windows platform prompted this question on Slashdot: Open Source on Windows - Boon or Bane for Linux?

As a person who uses both Microsoft Windows and Linux boxes on a daily basis, I think Open Source for Windows is a good thing, and I blog about it on my own site now and then when I spot an interesting FOSS application for Windows: Open Source for Windows blog

My take is that since Microsoft Windows has an estimated 90 to 95% of the desktop world and a good chunk of the server world, Open Source can get noticed by more people and bring great applications and server options to the Windows world. I'd be hard pressed to function efficiently without WinSCP, Python, VNC, PuTTY, jEdit and a bunch of other Open Source applications being available for Windows when I'm using that platform. So I'd sure hate to see Open Source ports stop heading toward Windows.

Sun Microsystems: The Java Problem by Julian S. Taylor

INTERNALMEMOS.COM  

The Java Problem
Author: Julian S. Taylor
Reviewed by: Steve Talley, Mark Carlson, Henry Knapp, Willy (Waikwan) Hui, Eugene Krivopaltsev, Peter Madany, Michael Boucher

Executive Summary

While the Java language provides many advantages over C and C++, its implementation on Solaris presents barriers to the delivery of reliable applications. These barriers prevent general acceptance of Java for production software within Sun. A review of the problem indicates that these issues are not inherent to Java but instead represent implementation oversights and inconsistencies common to projects which do not communicate effectively with partners and users.

Within Sun, the institutional mechanism for promoting this sort of communication between partners is the System Architecture Council codified in the Software Development Framework (SDF). We propose that the process of releasing our Java implementation will benefit from conformance with the SDF.

Introduction

This document details the difficulties that keep our Solaris Java implementation from being practical for the development of common software applications. It represents a consensus of several senior engineers within Sun Microsystems. We believe that our Java implementation is inappropriate for a large number of categories of software application. We do not believe these flaws are inherent in the Java platform but that they relate to difficulties in our Solaris implementation.
We all agree that the Java language offers many advantages over the alternatives. We would generally prefer to deploy our applications in Java but the implementation provided for Solaris is inadequate to the task of producing supportable and reliable products.
Our experience in filing bugs against Java has been to see them rapidly closed as "will not fix". 22% of accepted non-duplicate bugs against base Java are closed in this way as opposed to 7% for C++. Key examples include:

4246106 Large virtual memory consumption of JVM
4374713 Anonymous inner classes have incompatible serialization
4380663 Multiple bottlenecks in the JVM
4407856 RMI secure transport provider doesn't timeout SSL sessions
4460368 For jdk1.4, JTable.setCellSelectionEnabled() does not work
4460382 For Jdk1.4, the table editors for JTable do not work.
4433962 JDK1.3 HotSpot JVM crashes Sun Management Center Console
4463644 Calculation of JTable's height is different for jdk1.2 and jdk1.4
4475676 [under jdk1.3.1, new JFrame launch causes jumping]

In personal conversations with Java engineers and managers, it appears that Solaris is not a priority and the resource issues are not viewed as serious. Attempts to discuss this have not been productive and the message we hear routinely from Java engineering is that new features are key and improvements to the foundation are secondary. This is mentioned only to make it clear that other avenues for change have been explored but without success. Here we seek to briefly present the problem and recommend a solution.

Defining the Java Problem

These are the problems we have observed which we believe indicate the need for an improved implementation and a modified approach.

1. The support model seems flawed
Since Java is not a self-contained binary, every Java program depends fundamentally upon the installed Java Runtime Environment (JRE). If that JRE is broken, correction and relief is required. This sort of relief needs to happen in a timely manner and needs to fix only the problem without the likelihood of introducing additional bugs. Java Software does not provide such relief.
Java packages are released (re-released) every four or five months, introducing bug fixes and new features and new bugs with each release. These releases are upgrading packages which remove all trace of the prior installed packages and cannot be down-graded in the event of an error. The standard release taxonomy used by the Architecture Review Committees (ARCs) was developed for use by Solaris and our other mission-critical software products to help solve these and many other problems.

It is impractical for a project based on Java to correct bugs in the Java implementation. Java Software corrects bugs only by releasing an entire new version. For that reason, projects seek to deliver their own copy of Java so they can maintain it without fear of a future upgrade. Outside vendors, such as TogetherJ, specify a particular release of Java for their product. The customer must locate that release and install it. If a future product seeks to use a different version, that version has to be installed side-by-side with the prior version or TogetherJ may no longer function.
The ARCs commonly see project submittals requesting permission to ship their own version of Java. The ARCs have been routinely forbidding projects to do this even though they are aware of specific cases wherein interfaces or their underlying behaviors have changed incompatibly across minor releases.
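The version-pinning workaround described above can be sketched as a launcher-side check. This is only an illustration: the helper names below are hypothetical, and the exact-minor-match policy simply encodes the memo's observation that behavior may change incompatibly across minor releases.

```python
def parse_version(version):
    """Turn a dotted release string like "1.3.1" into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def runtime_is_acceptable(installed, required):
    """Accept only an exact major.minor match, tolerating patch-level
    differences -- the conservative policy a vendor pinning a release
    (as TogetherJ reportedly did) would need, given incompatible
    changes across minor releases."""
    return parse_version(installed)[:2] == parse_version(required)[:2]

print(runtime_is_acceptable("1.3.1", "1.3.0"))  # True: same minor release
print(runtime_is_acceptable("1.4.0", "1.3.1"))  # False: different minor release
```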

The threat of losing the ability to directly support such a substantial part of their product has inhibited projects from choosing Java as their implementation language and caused widely-discussed problems for customers of projects that have used Java. Consider that the Java language supports rapid development, simple testing and access to a wide variety of platforms. Why are the shelves at CompUSA (a Linux-friendly store) not crammed with W32/Linux/etc offerings written in Java? As it stands, client-side Java remains primarily a web language, partly because the Netscape platform runs Java 1.1.5 and has not changed for years. It is buggy but very stable.

This indicates that Java must strictly enforce backward compatibility across minor releases and must adhere to Sun release taxonomy for the identification of releases. Further, existing releases must support some sort of remedy akin to a patch so that existing installations can be corrected through existing methods.

2. The JRE is very large.

The JRE is significantly larger than comparable runtime environments when considering resident set size (memory dedicated to this specific program). It has been seen to grow to as much as 900M. This has a drastic effect on both performance and resource usage. It also means that multiple JREs present critical resource constraints on the servers for such thin-client systems as SunRays. Typical resident set requirements for Java2 programs include:

Hello World 9M
SMC Server 38M
SLVM GUI 60M
Component Manager 160M
TogetherJ 300 - 900M

The largest program in that list is TogetherJ. From the standpoint of resource requirements, TogetherJ does much of what Rational Rose does, but Rational Rose appears to function in less than 250M. Startup time is affected as well. For example, on an Ultra10, TogetherJ requires 5 minutes to load and start. SMC, Sun's flagship system admin console, takes between one and two minutes to reach the point that it can be used.

Some of this problem appears to relate to the JRE. We do not have the time or money to conduct a serious side-by-side study of Java vs other languages and are therefore calling upon our personal experiences with Java development. The fact that these experiences are hard to quantify forces us to try to support the validity of this concern through existing research.

A study performed by an outside team appears to indicate a rough parity in performance between Java and a common implementation of another OO language called Python (see IEEE Computer, October 2000, "An Empirical Comparison of Seven Programming Languages" by Lutz Prechelt of the University of Karlsruhe). Both platforms are Object Oriented, support web applications, serialization, internet connections and native interfaces. The key difference is that Python is a scripting language. This means there is no compilation to byte code, so the Python runtime environment has to do two things in addition to what the Java runtime environment does. It has to perform syntax checks and it must parse the ASCII text provided by the programmer. Both of those tasks are performed at compile time by Java, and so that capability does not have to be in the JRE.

Given this data, it appears that the JRE can actually be simpler than the Python RE since Java does at least some of this work at compile time. The example above of "Hello World" is a good method for getting an idea of the minimum support code required at runtime. This support code includes garbage collector, byte code interpreter, exception processor and the like. Hello World written in Java2 requires 9M for this most basic support infrastructure. By comparison, this is slightly larger than automountd on Solaris8. The Python runtime required to execute Hello World is roughly 1.6M.
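Resident-set figures like the ones quoted here can be reproduced in rough form on a Unix-like system. A minimal sketch, assuming Linux/Solaris semantics for `ru_maxrss` (reported in kilobytes there, but in bytes on some BSD-derived systems):

```python
import resource

def peak_rss():
    """Peak resident set size of the current process, as reported by
    getrusage().  Units are platform-dependent: kilobytes on Linux and
    Solaris, bytes on some BSD-derived systems."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# Even a "Hello World" interpreter session has a nonzero resident set.
print(peak_rss() > 0)  # True
```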

Further examples of what is possible include the compiling OO languages Eiffel and Sather which fit their garbage collector, exception processor and other infrastructure into roughly 400K of resident set. While the Java VM (as demonstrated above) grows rapidly as more complex code is executed, the Python VM grows quite slowly. Indeed, an inventory control program written entirely in Python having a SQL database, a curses UI, and network connectivity requires only 1.7M of resident set. This seems to indicate that the resident set requirements of the JRE could be reduced by at least 80%.

Imagine what happens if our current implementation of Java were ubiquitous and all 150 users on a SunRay server were running one and only one Java program equivalent to Component Manager above. The twenty-four gigabytes of RAM the server would have to supply exclusively to these users is well beyond the typical configuration. RAM is cheap, but performance is what we sell; all customers on that SunRay server would see significant performance degradation even with the maximum amount of RAM installed, as all other processes were forced to reside on swap.
The resident set size required by the JRE makes it impractical to run Java in an initial Solaris install environment. It is impractical to run it as a non-terminating daemon. A Java daemon could be started from inetd, run long enough to do its job, and then quit, but the RPC protocol required to pass the socket port to the daemon is very complex and not Java-friendly. Java applications cannot be executed at boot time since the loading of the VM introduces an unacceptable performance degradation. If the Java runtime were as small as that of Python, it is likely that the Java daemon would become popular and could provide basic services to applications written in any number of languages.
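The inetd pattern mentioned above works because inetd wires the accepted connection to the child process's stdin and stdout, so the service can be an ordinary short-lived program. A minimal sketch with a hypothetical one-line acknowledge protocol:

```python
import sys

def handle_request(line):
    """Hypothetical protocol: acknowledge the request, uppercased."""
    return "OK " + line.strip().upper()

def inetd_service(stdin=sys.stdin, stdout=sys.stdout):
    """Under inetd, the accepted socket is attached to stdin/stdout, so
    the service reads one request, answers it, and exits -- no
    long-lived daemon (and hence no long-lived VM) is required."""
    request = stdin.readline()
    stdout.write(handle_request(request) + "\n")

if __name__ == "__main__":
    inetd_service()
```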

3. Extensions do not support modularity.

As new extensions are introduced, they are released separately under their own names and distributed generally. Each one may go through several revisions as separate modules. At some point, they are then folded into base Java, tying base Java's version to the versions of dozens of smaller yet distinct functionalities. These functionalities are then restricted to a draconian backward-compatibility rule since, once folded in, they are no longer selectable modules. Examples include modules that used to be called Swing, RMI, IDL, JSSE and JAAS. These are all good things that should be part of Java. Our concern is that these are not separable modules which can evolve as requirements change.

The Java system for evolving the interface (deprecation) does not serve production software very well. Once the interface disappears, the product just breaks. If the Java base were simpler and the more advanced features (those most likely to be deprecated) were delivered as versioned modules, it would be possible for a commercial product to retain its older modules on the system and survive a large number of Java upgrades.

Production quality programs written in Java, like TogetherJ, indicate a specific Java version which must be installed before the program is run. If another program is installed, requiring a higher Java version, the user may be forced to decide which program stays and which goes away. Alternatively, the other Java version could be installed to a different base directory but this requires considerable sophistication on the part of the user, complicates administration and violates the ARC big rule that common software must be shared.

4. It is not backward-compatible across minor releases.

Among the various incompatibilities across minor releases are:
a) In JDK 1.1 Class.fields() returns only public variables. In 1.2, protected and private variables are returned.
b) Swing table sizing calculation changed from Java 1.3 to 1.4.
c) Swing JFrame launch behavior changed significantly from Java 1.2.2 to Java 1.3.1.

Each of these examples is simple, but they demonstrate the general problem that people cannot program for a particular release of Java and expect that their programs will continue to run. This is a serious problem now, but has the potential to become a show-stopper as technology such as auto-update advances.

What is perhaps more important is that the perception of Java as an unstable platform is widespread. This perception is restated with every Java-based project to come to ARC. Within Sun, Java is not viewed as a satisfactory language for the construction of commercial applications. This perception and the record require addressing.

The Java Problem is Recognized Internally

That our Java implementation is perceived as inappropriate for many uses is supported by internal documents and policies. For example:

1. SOESC AI - 092501.2 Java Dependencies for Deployment

In this document provided to SOESC, John Perry describes the concerns regarding the Solaris "JVM dependencies for deployment". Following is an excerpt:
-------
- Large footprint of applications when run on Solaris. A simple application ("hello world" type) has a total footprint of 35-40 megs on Solaris 9 (build 48, using Java 1.4 build 82) on both Intel and Sparc machines. Sparc machines, by far, have a much higher resident footprint than Intel machines (~30 megs, compared to ~11 megs). The same program run on a Windows machine has a footprint of ~5 megs, resident footprint being ~3.5 megs

- Slow start-up times prevent Java applications from being started while Solaris is booting up and during mini-root time. This requires applications which are written in Java to have some kind of mechanism to start up after the OS has been fully started.

- Instability of Native code (JNI) which can cause the entire VM to crash.
-------

2. Teams Are Looking for Options

The CIMOM (supporting WBEM) is a Java daemon. It initially occupies around 40M of RSS but grows from there. In order to address this problem, at least one Sun Engineer, Peter Madany, has been doing research to determine Java daemon memory utilization when running on a currently unsupported J2ME VM on Solaris. In other words, we are looking into demonstrating that resource exhaustion on Solaris Servers could be avoided by using some of the techniques used in an edition of Java intended for very small systems.

3. New Projects Explain Why They Are Not Using Java

Quoting from the recently submitted Nile case (SARC/2001/617) now under review:
-------
These libraries should be commercial implementations and must be in native platform code (ie not Java or Perl). Native code is a requirement because one of the core requirements for the proxy is for minimum impact on the target host. Java has too large a footprint (both memory and disk image) and may not be installed on the customer's host.
-------

4. ARCs Include the Java Problem in Rejection Reasoning

Quoting from the recently rejected SunMC PMA case (LSARC/2000/457):
-------
The CLI interpreter is implemented in Java, and the overhead of starting a JVM for each command execution is prohibitive. At least one of the votes to reject was related to this inappropriate use of Java. The Solaris implementation of Java is slow and very large. While this project did not provide a measurement of resident set for their CLI, the minimum RSS for the JVM is known to be 9MB and the typical RSS for a similar Java program is 30 to 40MB, and takes up to 15 seconds to start. The project team admitted in the review that this CLI may be used on a daily basis. For such a CLI, the delays and resource requirements of the Solaris Java implementation are unacceptable.
-------

5. Customers and Field Engineers Are Noticing the Problem

Following is an excerpt from Kevin Tay's e-mail to three Java aliases regarding a customer installation of a third-party product written in Java called Vitria. We see typical very large RSS numbers compared to a WinNT implementation combined with increased resource usage from Solaris7 to Solaris8:
-------
Customer said they have something like 450+ container servers and 80+ automator server for the Vitria system. So the estimation for the hardware RAM is around 9GB for USII machine and 14-15GB for the USIII machine. Questions:

1. Why is Sun systems using so much more memory?
2. Why is the UltraSPARC III/Solaris 8 system using a lot more memory than a UltraSPARC II/Solaris 7 system (with every other thing being equal)?
3. How can I reduce the memory utilization of the UltraSPARC III system?
-------
NOTE: The response to this e-mail was to suggest moving to a different build of Java 1.2.2 since the indicated build on Solaris 8 had a known bug; it should be noted, however, that the 9GB memory footprint for Solaris7 is still unusually large.

6. Close Call in Solaris9

Bug ID 4526853 describes a bug in Core Java which used to be an external module called JSSE. Among other products, PatchPro and PatchManager depend on the JSSE. As long as the module could be used, the JSSE interface could be trusted to remain stable despite extensive changes in core Java. Now the Java architecture makes it impossible to use the module. This bug in core Java completely disables PatchPro and PatchManager. It was introduced in build 83 of Java 1.4. It was detected and corrected before the final build of Solaris9. If it had not been detected before the final build, it would have shipped with Solaris9 FCS.

For those products that depend upon JSSE and operate on multiple OSs, there would have been no recourse except to deliver with their product an entire new Java distribution. This distribution would have to upgrade the existing Java installation. The fact that various products depend upon specific versions would mean that such an upgrade would carry the risk of breaking other Java-based software on the target system.

Correcting the Java Problem

We strongly recommend that management require Java to conform to the Software Development Framework, especially from the standpoint of ARC review. We believe that the next release of the Sun Java implementation should be brought to ARC while still in the prototype phase. Both PSARC and LSARC have dealt with the Java issues peripherally, recognizing numerous problems but unable to effect change in the underlying source of the difficulties - namely Java. By bringing the Sun Java implementation through ARC, these issues can be resolved.

See also Slashdot Even Sun Can't Use Java

 

Re:InternalMemos is notorious for hoaxes (Score:1)
by denny_d (454663) on Sunday February 09, @11:58AM (#5264814)
Glad to read someone's questioning the source of the memo. That said, I'm constantly bumping against java apps that consume all system memory. Are there specs. to be found on the HelloWorld example?...
[ Parent ]
 
Re:InternalMemos is notorious for hoaxes (Score:5, Informative)
by MikeFM (12491) on Sunday February 09, @01:49PM (#5265480)
(http://kavlon.org/ | Last Journal: Friday March 21, @02:10PM)
Julian.Taylor@central.sun.com is the address Google came up with for me. Not sure if it's the right one. Seems close enough though.

Since when are memos technically correct? You must work at a lot geekier place than I have. Not that I think InternalMemos isn't notorious for hoaxes.

I wouldn't hold out for Sun to switch from Java to Python either, but I really wish they would. Java blows. Python is easier to develop in (fewer required tools, etc.) and runs a lot better under both Linux and Windows. Python (with wxPython) produces nicer-looking, more functional GUI programs too.
Re:InternalMemos is notorious for hoaxes (Score:2)
by RenQuanta (3274) on Sunday February 09, @02:11PM (#5265630)
Why is comparison with Python amusing? I've been using Python for three and a half years now, and everything that the memo states about Python is true and accurate:

A study performed by an outside team appears to indicate a rough parity in performance between Java and a common implementation of another OO language called Python (see IEEE Computing, October 2000, "An Empirical Comparison of Seven Programming Languages" by Lutz Prechelt of the University of Karlsruhe). Both platforms are Object Oriented, support web applications, serialization, internet connections and native interfaces. The key difference is that Python is a scripting language. This means there is no compilation to byte code so the Python runtime environment has to do two things in addition to what the Java runtime environment does. It has to perform syntax checks and it must parse the ascii text provided by the programmer. Both of those tasks are performed at compile time by Java and so that capability does not have to be in the JRE.
Given this data, it appears that the JRE can actually be simpler than the Python RE since Java does at least some of this work at compile time. The example above of "Hello World" is a good method for getting an idea of the minimum support code required at runtime. This support code includes garbage collector, byte code interpreter, exception processor and the like. Hello World written in Java2 requires 9M for this most basic support infrastructure. By comparison, this is slightly larger than automountd on Solaris8. The Python runtime required to execute Hello World is roughly 1.6M.

I've used all of those aspects of Python: OO, serialization, web application support, internet connections, and native interfaces. I've also used multi-threading, and GUI interfaces (PyGTK, and built-in PyTk). I have yet to find a problem that couldn't be easily solved with Python. It makes for rapid development and robust solutions.

Moreover, my experiences (as an end user, not a developer) with Java have been miserable. Its performance sucks and is typically intolerable for daily usage.

Re:Hypocrisy? (Score:4, Insightful)
by Jahf (21968) on Sunday February 09, @12:37PM (#5264696)
(Last Journal: Wednesday March 24, @01:13PM)
I still work for Sun and have never seen anything like this memo. Java is still used daily for internal projects, still hyped strong and developed strong, and I've never seen a Sun person try and dissuade another from using it.

If the memo is real, then it's being kept in a very small group.

If it's fake, they did a good job with the language and examples.

Sun employee: memo is on target (Score:5, Interesting)
by joelparker (586428) <joel@school.net> on Sunday February 09, @08:56PM (#5267630)
(http://www.school.net/)
You haven't seen it? Is it possible you haven't looked for it?

I am a former Sun employee and I wrote these kinds of memos.

Specifically, I wrote that Java was unsuitable for Sun's own web development projects, and that this represented a serious problem in terms of missed opportunities to improve our software and for our public relations and marketing.

The memo may be a fake, but it's right on target. I especially agree with the problem of internal tech support for critical bug fixes.

I worked on several projects that were a nightmare due to subtle bugs in Java's HTML and XML classes. In each case, the bugs were easy to fix: a few lines of code, changing private methods to protected methods, etc.

The response from Sun support? "Will not fix."

So I had to rewrite the classes-- basically rederiving the entire Java HTML+XML parsing tree-- which stuck the customer with my custom code. Talk about a bad upgrade path!

There were many, many examples of this. As a result, I deployed many projects using Perl on Linux instead of Java on Solaris, and I wrote internal memos like the one in this article.

All that said, the Java engineers were some of the smartest, nicest people I've ever had the pleasure of working with. I have a lot of confidence in them, and each Java release gets substantially better and faster. The problem IMHO is not the engineers, but the corporate culture that misses opportunities to learn from employee projects.

The Sun engineers and internal developers can really do some amazing things, if McNealy and Zander could start prioritizing Java inside Sun, and start funding rapid-turnaround tech support for employee programmers.

Cheers,
Joel

Re:Hypocrisy? (Score:5, Insightful)
by g4dget (579145) on Sunday February 09, @11:00AM (#5264134)
I think the primary interest here is "server side Java", doing heavy lifting business applications. Currently Java/J2EE is in a competition with .Net ... in a race that has strong parallels with and implications for Unix/Linux vs Windows on the server side.

For server-side apps, it makes no difference whether Microsoft bundles the JRE or not--anybody putting together a bunch of servers is going to install the latest JRE directly from Sun anyway.

In fact, while Java is a decent language for server-side development (and that's pretty much the only thing it's really good at), it's ironic that its cross-platform features in particular are largely irrelevant there: for many other reasons, any reasonable place is going to have a homogeneous server environment for individual web apps, and re-compiling for that server environment is a tiny part of deployment.

So, something like GNU gcj, which requires recompilation for each target platform, may well be a better choice than Sun's bloated JRE: while you don't get universal byte-code deployment, which you don't need, gcj binaries start up much faster and consume fewer resources, which may be more important on your server.

Re:Hypocrisy? (Score:5, Insightful)
by Zeinfeld (263942) on Sunday February 09, @11:01AM (#5264136)
(http://slashdot.org/ | Last Journal: Monday October 07, @06:09PM)
This smells bad. Sun have been forcing the monopoly thing down Microsoft's throat for so long, and now here they are, victims of it themselves.

The note is certain to be used by Microsoft in their appeal against the Java injunction.

In particular, the points about Java code being tied to a particular runtime completely negate Sun's claims about the need to distribute it in the O/S base. Clearly that is not going to help much, since Sun have no clue about dependency management.

Consider the following thought experiment. Microsoft distribute 30Mb of Java 1.3 with XP. Then Sun upgrade to 1.4: what does Microsoft do? Do they distribute 1.4 on new O/S versions only, add it to the current release of XP, or put it on instant update? None of these work. The instant-update option will break existing Java applets on the system. Mixed versions of Java will mean that consumers buying a Java-based program will not be able to rely on the release number of XP to decide whether the program works on their machine. Waiting till the next O/S version is released will result in a lawsuit from Sun.

The note shows clear similarities to the early articles on C# explaining the difference in approach between Java and dotNet. If the Java lobby were not so convinced that Java was the end of programming language design, they would have realised their significance.

To give one example, the version incompatibility problem is known to Windows developers as 'DLLHell'.

My company uses Java for a lot of projects. I would not be surprised, however, if we ended up on .NET Server with the applications compiled down to native code through J# and IL.

Unfortunately Sun don't have a level 5 leader in charge. They have an egotistical idiot who is focused like a laser on another company's business instead of his own. Antics like those of McNealy and Ellison play well in the press, but measured by the success of their companies' stock price, leaders like Jack Welch or Lee Iacocca don't do as well as their PR would have it. Iacocca may have saved Chrysler (it might also have been the government loans), but once he started concentrating his energies on being a folk hero, Chrysler's performance went back down the tubes. Similarly, Jack Welch's performance does not look that hot if you look at the growth in GE earnings rather than the stock price - which is certain to shrink as GE returns to its old P/E multiple.

One of the things a level 5 leader does is to encourage comment. The memo only says what others outside Sun have been saying for eight years.

My take on the Sun/Microsoft Java war is based on a lot of time working in standards groups with both groups of engineers. I think that the Microsoft engineers thought they could improve Java and got frustrated because the Sun engineers behaved - well like Microsoft engineers sometimes do.

Of course this will all be rationalised away. Of course it was all the fault of the Redmond club's evil schemes. Nobody outside Sun has any ideas of any value, and Sun's JCP is genuinely open and not a proprietary farce.

  •  
    • Re:Not Java but the Solaris JRE (Score:5, Interesting)
      by Jugalator (259273) on Sunday February 09, @10:10AM (#5263869)
      (Last Journal: Wednesday June 30, @07:41PM)
      I don't think the "bugs" with huge memory usage and general slowness are limited to the Solaris platform, since I've noticed them while running Java applications on Windows as well, while using Sun's JRE. Many of the bugs discussed in the memo are connected to the JDK itself as well, and Sun is concerned with how many bugs are closed with the "Will Not Fix" status. Since the JDK is mostly the same on all platforms due to Java's nature, I'm pretty sure this is a cross-platform problem in many ways.
  • Desperate measure (Score:5, Insightful)
    by Knacklappen (526643) <knacklappen@gmx.net> on Sunday February 09, @10:20AM (#5263921)
    (Last Journal: Wednesday June 30, @04:24PM)
    Reads to me like a memo that has been intentionally leaked out into the open, trying to force Sun management to act. The software development department is clearly unhappy with the Solaris implementation of the JRE and has therefore stopped all use of it until it has been fixed, while the Java department does not seem to be in much of a hurry to do that (the majority of cases are closed "will not fix").
    What would you do in your own line organization, when you are the boss of one department and the boss of the other department just gives you the finger? And your superior is unable/unwilling to solve the conflict? You write a flaming mail to your superior's superior, threaten to withdraw any support for the platform your company is famous for and leak the memo into the open to get public support. This, of course, has to be done nicely so that no-one can blame you directly for it.
    Re:Smells of a Fake (Score:5, Informative)
    by tom's a-cold (253195) on Sunday February 09, @01:51PM (#5265102)
    (http://slashdot.org/)

    Anyone that compares a scripting language (Python) to a full programming language that also has a VM has no clue. A scripting language has minimal overhead memory requirements because it does not have much of a memory management job to do.

    Your remarks about Python, and scripting languages in general, are not borne out by my own first-hand experience as a designer and developer.

    First, you make it sound like scripting languages are somehow not as complete as "real" programming languages. And your comments about memory management make even less sense -- any language with OO features (and many without) has to do dynamic allocation -- how else are object references going to be dealt with? -- and that means it has to deal with memory-management issues. And if you think that all scripts are little baby shellscripts, you haven't been around much.

    I've developed medium-sized apps in Python and in Perl (on the order of 50K lines of executable code), and much bigger apps in Java. Python is semantically rich enough, and in most instances fast enough, to do anything that Java can do, and almost always with shorter, more readable code. The same can be said for Perl (though it requires more discipline to achieve the readability), and probably also Ruby and Scheme. From a software engineering point of view, I'd be happiest coding the whole app in Jython (the Python variant that compiles down to Java bytecodes), then recoding the hotspots in Java, or in some even lower-level language. Developers, even smart ones, usually guess wrong about what to optimize, so deferring tuning until you observe the working system is usually a good idea. Exceptions would be embedded and hard-realtime systems. Almost every business app I've seen is neither of these.

    This in no way eliminates the need to design your app before coding it, BTW, contrary to what some bozos who once read the blurb on the back of an XP how-to book might have you believe.

    When I did a demo of one Python-based app that I developed, my client was willing to accept a performance hit for the sake of better maintainability. When I benchmarked its performance on one content-management task, it clocked in at 100 times faster than its C++ predecessor. Now obviously, a very clever C++ crew could have done a lot better than that. But in the real world, everyone's in a hurry and doesn't always choose the cleanest implementation. And when language features are too low-level, developers waste a lot of time reinventing "infrastructure." In this instance, they not only reinvented it, but did so much more poorly than the developers of Python did.

    Re:Not too surprised (Score:4, Interesting)
    by Fjord (99230) on Sunday February 09, @11:01PM (#5268109)
    (http://slashdot.org/ | Last Journal: Tuesday December 16, @06:30PM)
    There's a difference between learning CS fundamentals and aligning your career. The fact is that a senior J2EE developer isn't considered a senior .NET developer and will take a pay cut if they move to that space. Even architects are specialized. Now, developers who transcend their languages will advance quickly, but there are problems.

    I wouldn't get worried about Java as a career yet, though. I just recently switched workplaces as a J2EE architect. At least in my town (Jacksonville), and according to the recruiting firms I talked to, there is very little else going on except J2EE. I can definitely see Java becoming the next COBOL: great now, but antiquated later.

    More to the topic at hand, I don't see client-side Java getting better anytime soon, because Sun already lost a lot on that side while gaining a lot on the J2EE side. Future releases will be geared more towards long-uptime, high-memory applications than short, small ones.
    Re:... nothing new under the sun (Score:4, Interesting)
    by DarthWiggle (537589) <parabaraba@yCHICAGOahoo.com minus city> on Sunday February 09, @11:05AM (#5264152)
    (Last Journal: Tuesday February 04, @09:22PM)
    Why in God's name is this modded troll? Have we offended the slathering hordes of Java devotees? Lemme tell all of you something: when I was laid off from a position, I went to interview with two shops, both with a heavy Java focus and roughly equivalent in their focus, style, and clients. I didn't get a job with the first. But at the second, I was given some very good advice: talk a lot about J2EE, Beans, and a bunch of other buzzwords, a few of which I had never heard of. "Doesn't matter if you don't know, man, just throw the words in. That's all they care about."

    Got the job.

    Java is so much about a culture and not a technology that it's disgusting. And it's a pity too, because all the PRINCIPLES of Java (portability through the VM, objectification, etc.) are so good that Microsoft took them to build .NET. (Don't start gnawing on me because I put "portability" and ".NET" in the same sentence - I was referring to the VM.)

    Hell, I think Java is a great language. It concerns me that it takes seventeen steps to accomplish something as basic as opening a database connection, grabbing some results, and outputting them into an HTML stream (and seventeen may be generous). But, it's a very straightforward language, very teachable because it's so logical.

    But too much of the culture is fluff. Why is it that Sun doesn't focus on point releases that improve performance, but instead focuses on getting the newest buzzword, uh, "feature" out the door? Why did they invent something called Java2 which is, IIRC, just Java 1.3? Because they're more concerned about IMAGE than about getting their product - which is great in principle - into a usable state.

    But let's talk substantively. I've developed large scale server-side applications in Java, C, C++, and - on the web-app side - PHP, Cold Fusion, and ASP.

    The slowest of those was, almost without exception, Java. Java took the most coding to do a basic task, and Java was BY FAR the most difficult to package, deploy, and deliver to my customers. That's a real pity, because I was about 90% certain that our customer's architecture didn't really matter if we were playing with Java. Upgrading those old Dell NT servers to IBM? No problem. We'll just move the app over, and it should run without a hitch.

    But, lord have mercy, it ran slowly.

    To top it all off, here's some advice I received from a Java-guru at another company. I was griping about how slow Java was, and he said to me, "Oh, everybody knows it's slow. But why worry? Hardware's getting faster every day. True, 2ms is half the speed of 1ms, but who's gonna notice?"

    I almost fell off my chair. It's that sort of laziness that makes my skin crawl.

    Look, I love Java. I want it to succeed. It's a brilliant idea: an utterly cross-platform language whose apps run without regard to the hardware and OS under them.

    But it's a seriously flawed masterpiece.

    (The funny thing is, I was just going to write "Why was this modded troll?" But then my post bloated... kinda like how you go to write "Hello World" in Java and... ok, ok, nevermind.)
    Re:From the article... (Score:5, Insightful)
    by The Mayor (6048) on Sunday February 09, @11:08AM (#5264167)
    (http://www.bluelotussoftware.com/)
    It has always bugged you that Java had no good mechanism to compile simple expressions on-the-fly? Here are a few options for you:
    • Jython [jython.org] is a Python scripting engine for Java. There, now you can use Python within the JVM! <sarcasm>Get the worst of both worlds!</sarcasm>
    • Rhino [mozilla.org] is a Javascript engine for Java.
    • Jacl [sourceforge.net] is a TCL engine for Java.
    • Bean Scripting Framework [ibm.com] is a generic wrapper for including scripting languages within your application. It's from IBM, and is intended to abstract away the implementation of the scripting language. It supports Jython, Jacl, and Rhino now. I seem to remember IBM releasing something for REXX as well.

    My point here is that saying Java's lack of a built-in interpreter is a downfall of Java is like saying that Perl's lack of a JVM is Perl's downfall. It's not their design goal. Java is a bytecode-interpreted language, not an interpreter. If you want an interpreter you can easily add one, and many are available.

    Performance isn't great, but reports have indicated that Jython is about 75% of the performance (near the end of the article...search for the word "performance") [oreilly.com] of CPython. It's slower than Java code of the same type. But, hey, if you wanted speed you wouldn't be using interpreted code (or byte-code interpreted code, for that matter), right?
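What the scripting engines discussed above bring to the JVM is the ability to compile and evaluate an expression at run time. As a minimal sketch of that capability, here it is in plain Python (the `evaluate` helper and its variable names are invented for illustration, not part of any of the libraries mentioned):

```python
# Minimal sketch of on-the-fly expression evaluation, the capability
# engines like Jython or Rhino add to a Java application. The helper
# name and the sample expression are illustrative only.
def evaluate(expression, variables):
    """Compile an expression string and evaluate it against a dict of variables."""
    code = compile(expression, "<user-input>", "eval")
    # Empty __builtins__ keeps the evaluated expression to pure arithmetic.
    return eval(code, {"__builtins__": {}}, variables)

print(evaluate("price * qty * (1 + tax)", {"price": 10.0, "qty": 3, "tax": 0.05}))
```

An application embedding such an engine can hand user-supplied formulas straight to it instead of writing its own parser.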

    Re:From the article... (Score:5, Insightful)
    by Daniel Phillips (238627) on Sunday February 09, @12:25PM (#5264627)
    (http://nl.linux.org/~phillips)
    I am, however, a little leery of the performance-parity bit. Don't get me wrong, I love programming in Python, but I know from experience that it still costs a good bit to create all the dictionaries that are used for frame construction, global manipulation, and object management.

    I did a little benchmarking recently, and I can confirm that for typical algorithmic benchmarks (not heavily library or IO oriented) Python is more than 100 times slower than C/C++. There's a Python "specializing compiler" called Psyco that produces significant speedup, running my little fibonacci test around half the speed of C, very impressive.

    Java on the other hand has had huge amounts of effort and money put into making it run faster, and to my surprise, I found it now runs my fibonacci benchmark faster than gcc-compiled C. Overall, Java performance has improved from horrible to tolerable. Programs are still taking a long time to start, even on a beefy machine, but to be fair, I've seen some long startup times on some C++ programs as well.

    Python really beats Java in startup time, with the result that Python gets used here and Java doesn't.

    Python is, however, fast enough for a great many applications. I'm just a little skeptical about it being quite as fast in certain aspects.

    I see Psyco has made it into Debian Sid; this is a good sign.
    Re:From the article... (Score:1)
    by Internet Dog (86949) on Monday February 10, @01:22PM (#5272079)

    I did a little benchmarking recently, and I can confirm that for typical algorithmic benchmarks (not heavily library or IO oriented) Python is more than 100 times slower than C/C++. There's a Python "specializing compiler" called Psyco that produces significant speedup, running my little fibonacci test around half the speed of C, very impressive.

    If you are testing something like Fibonacci encoded in pure Python, then yes, it will be 100 times slower than C/C++. But if you are doing real-world work then you can use the Python library for the most commonly used algorithms, and the libraries are generally well optimized. It's the "Batteries Included" part of Python that makes it such a productive environment. Optimization is supposed to be the last step in the coding cycle. Get it right in Python first, and then recode the bits that are too slow to tolerate in C.

    People are using Python for high-performance applications. It's all about good software design. If performance were an issue, LLNL wouldn't be using Python to control applications that run for days on supercomputers. Performance isn't an issue because the Python code just sets up the problems to be solved: Python assembles the standard algorithms and calls them with the appropriate datasets. Also, the Zope server, written in Python, scales to very large web sites because it uses well-placed optimization.
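The "batteries included" argument above can be made concrete with a toy comparison: the same reduction done with a pure-Python loop and with the C-implemented built-in. Both produce the same answer; the built-in moves the inner loop out of interpreted code, which is the pattern the comment describes:

```python
# Same reduction two ways: a pure-Python loop versus the C-implemented
# built-in sum(). Identical results, very different inner loops.
data = list(range(100_000))

def manual_sum(values):
    total = 0
    for v in values:       # every iteration runs in the interpreter
        total += v
    return total

assert manual_sum(data) == sum(data)
```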

    Java Implementation (Score:5, Interesting)
    by Detritus (11846) on Sunday February 09, @10:13AM (#5263880)
    (http://slashdot.org/)
    I'd be interested in finding out what are the causes of the problems with Java. Virtual machines don't have to be pigs. When the IBM PC was first introduced, I wrote a lot of software in Pascal using the UCSD p-System. The applications ran comfortably on machines with a 4.77 MHz 8088, 8087 FPU and 512KB RAM. Most of the applications and operating system were compiled into p-code, which is similar to Java byte codes. The p-machine interpreter was a small resident module written in 8086 assembly language. The p-code was actually more memory efficient than the machine code produced by conventional compilers.
    Political memos (Score:4, Informative)
    by panurge (573432) on Sunday February 09, @10:16AM (#5263902)
    Being an old cynic, I suspect there are too many long words in this memo for it to have gone very far up the food chain. Who are these people and what is their access to opinion formers in management?

    Not that I'm suggesting they are wrong, I have no way of knowing either way, I just think that producing memos like this - and getting them leaked - is probably not the smartest way of getting the declared objective.

    Admission: I use Java. It isn't perfect. It uses too much memory. It isn't hugely fast. But the applications work and the amount of debugging we have had to do is a tiny fraction of what I would have expected with C++. Its suitability for a given project depends on a whole host of factors not considered in the memo, and it would not surprise me if, for some internal Sun projects, it was inappropriate in its present stage of development.

    Java defense... (Score:1)
    by FlydinSlip (531842) on Sunday February 09, @09:19AM (#5263915)
    (http://freeside.dnsalias.org/)
    While some of their points are somewhat valid (e.g., Class.getFields() returning first all the public fields and then later the public, protected and private fields), I tend to think that changes to other classes are made not to make developers' lives more difficult, but to enhance flexibility or to act as work-arounds for other very subtle or very infrequent issues. No language is perfect, and Java is still a relatively young language. Once Java has aged as long as languages like C have, I'd bet most of these "inconveniences" will have been resolved.

    As for Sun developers not using it, that seems highly defeatist. If you make a product that the world has practically adopted unconditionally, and it's actively used the world around, the only way to make it better is to use it, and find and fix the issues. 'Ya can't fix what you don't know is broken... Much of Java's bloat is due to the fact that at VM startup, it has no idea what you're going to do with your program, whether Hello World or TogetherJ, so it needs to be ready for the worst case. There's really no avoiding this. As mentioned, Java is interpreted bytecode. The threads are tied to the system as real system threads -- this takes some time and resources to do, and to do right. True, a new version of the JRE overwrites the previous version upon install. But isn't keeping a backup version on the system just good sysadmining? It's just a symlink, for Pete's sake...
    Re:I question the validity (Score:5, Interesting)
    by spinlocked (462072) on Sunday February 09, @12:19PM (#5264586)
    (Last Journal: Friday February 13, @08:25PM)
    So I guess this could be true, but as someone who has worked with Sun before, I find it very, very hard to believe.

    I have worked at Sun and this smells very real to me. I have a friend at Sun who wrote an application in his spare time (in Java) which was officially adopted for internal use - he spent a month working with the internal applications gestapo having it re-written from scratch "to official standards". I agree with much of what the document says. Writing a complex Java application means targeting a specific JRE version, it is not at all unusual for Sun software products to install the particular JRE which they were written against (look at SunMC and the SunRay server software) - it's easier to keep patched without breaking other things.

    Until the Java developers use Solaris as their tier-one development platform and API changes are controlled in the same way as Solaris itself (PSARC), this will continue to be a problem.
    Re:Kiss and say goodbye to Java language!! (Score:5, Informative)
    by TheRaven64 (641858) on Sunday February 09, @10:43AM (#5264046)
    (http://theravensnest.org/ | Last Journal: Wednesday June 30, @06:39PM)
    I hope this illiterate drivel was intended as a troll, but just in case it was not:

    Forget Java man and go to PHP!

    Java is a general-purpose programming language; PHP is not. PHP is a scripting language designed for server-side web scripting. Ever tried writing a server in PHP? You can't; it doesn't let you accept incoming socket connections.

    PHP is 4 times faster than Java technology 'JSP' (Java server pages).
    This tallies because compiled "C" program is 4 times faster than Java.

    I'm not sure where you get your numbers from (the link you post is to a non-existent howto in the LDP), but I doubt that they are accurate. PHP is an interpreted language, C is a compiled language, and Java is a hybrid (Just-In-Time compiled). C is likely to be faster than both (although a JIT language can use run-time profiling for optimisation, so in theory Java could run faster than compiled C code; this is new technology, so it doesn't - yet). Primitives in C are typed; in PHP they are not. This means that PHP has a lot of type checking to do even for simple variable assignments. PHP is unlikely to be faster than Java (although it may still fit your needs better in other areas).

    PHP is a very lightening fast object oriented scripting language.

    PHP is not an OO language. PHP supports a few OO features, but not the vast majority (public/private methods, inheritance, etc.). PHP classes are more equivalent to namespaces than to classes.

    PHP is 100% written in "C" and there is no virtual machine as in Java.

    PHP is an interpreted language (how many times do I have to say this?). There is a virtual machine, and it interprets the PHP script. The Java VM compiles the bytecode to native code at run time (and only once, when the JRE is started in server mode). <oversimplification>

    Nothing can beat "C" language

    This is the stupidest statement I have ever heard. C does not support dynamic strings, so only a fool or a masochist would use it for simple text manipulation tasks (ever written a CGI script in C?). C has many advantages: it's a mature language, so a lot of work has gone into making it fast. For this reason it is good for low-level system work. It is not the best tool for every job. If the only tool you have is C, every problem looks like an operating system...

    Java programmers will really "LOVE" PHP as PHP class is identical to Java's class keyword.

    Java programmers will loathe PHP. It doesn't properly support a large number of features found in Java, because it is not a general-purpose language, and it isn't even an OO language. Web developers like PHP because it's simple. For a detailed criticism of PHP, look at this paper [ukuug.org] published at the UK Unix Users' Group last year. (And possibly read my reply [sucs.org] to the criticisms made.)

    The aim of Java was to abstract the OS and windowing system away from the developer, and in this it succeeds quite well (although it still has speed issues, and the API is baroque in the extreme in places - try creating a non-blocking port in Java if you don't believe me). PHP is an interpreted scripting language aimed at web design, which has accreted rather than been designed. Comparing the two is as crazy as saying Mozilla is far better than Linux.

    [Nov 18, 2004] IBM Releases Object Rexx as Open Source

    A few weeks ago, IBM quietly released Object Rexx to the open source community. RexxLA — the REXX Language Association — is targeting the first release of Open Object REXX for early 2005.

    The "ooRexx" project has been established on SourceForge, says Davis, and the code and documentation are being converted from IBM internal formats to open standard formats. While IBM is involved in the transfer, it does not intend to be formally involved in the project. A number of current and former IBMers have signed on to help with the project on their own time, however, including Rick McGuire, the primary architect and author of Object Rexx. IBM Fellow Mike Cowlishaw, who created the REXX language, "is very interested in seeing Open Object Rexx succeed," says Davis. "His expertise and counsel are immediately available should we need it."

    Are scripting languages the wave of the future

    ITworld.com

    ITworld.com: Does language make a difference? Your emphasis on code hygiene and expressiveness is welcome. Why, then, do we spend so much time working in C and C++? If an organization comes to you with a coherent set of requirements, and says it's willing to trust you about what language will help it best meet its goals, what advice are you likely to give?

    Robert Martin: Language certainly makes a difference; but any language can be written well.

    I don't mind C or C++. There are certainly cases where those are the best languages for the job. DSP code, if not written in assembler, is probably best written in C. Hard embedded realtime apps are well done in C++. And many Web apps or MIS/IT apps are nicely done in Java.

    However, I think there is a trend in language that will become more and more evident as the decade progresses. I think we are seeing an end to the emphasis on statically typed (type-safe) languages like C++, Java, Eiffel, Pascal, and Ada. These languages force you to declare the types of variables before you can use them.

    As this decade progresses I expect to see an ever increasing use of dynamically typed languages, such as Python, Ruby, and even Smalltalk. These languages are often referred to as "scripting languages." I think this is a gross injustice. It is these languages, and languages of their kind, that will be mainstream industrial languages in the coming years.

    Why do I think this? Because these languages are much easier to refactor. What's more, they have virtually zero compile time.

    As an industry we became enamored of type-safety in the early '80s. Many of us had been badly burned by C or other type-unsafe languages. When type-safe languages like C++, Pascal, and Ada came to the fore, we found that whole classes of errors were eliminated by the compiler. This safety came at a price. Every variable had to be declared before it was used. Every usage had to be consistent with its declaration. In essence, a kind of "dual-entry bookkeeping" was established for languages. If you wanted to use a variable (first entry) you had to declare it (second entry). This double checking gave the compiler vast power to detect inconsistency and error on the part of the programmer, but at the cost of the double entries, and of making sure that the compiler had access to the declarations.

    With the advent of agile processes like extreme programming (XP), we have come to find that unit testing is far more important than we had at first expected. In XP, we write unit tests for absolutely everything. Indeed, we write them before we write the production code that passes them. This, too, is a kind of dual-entry bookkeeping. But instead of the two entries being a declaration and a usage, the two entries are a test and the code that makes it pass.

    Again, this double-checking eliminates lots of errors, including the errors that a type-safe compiler finds. Thus, if we write unit tests in the XP way, we don't need type safety. If we don't need type safety, then its costs become very severe. It is much easier to change a program written in a dynamically typed language than it is to change a program written in a type-safe language. That cost of change becomes a great liability if type safety isn't needed.
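Martin's claim is that the test suite subsumes the compiler's type checks. A minimal sketch in a dynamically typed language (function and test names are invented for illustration): the unit test exercises both the arithmetic and the "wrong type" path, so the mistake a static checker would flag at compile time surfaces at test time instead.

```python
# Sketch of the XP argument above: a unit test catches the same class of
# mistake a static type checker would, without any declarations.
def total_price(prices):
    return sum(prices)

def test_total_price():
    # The "declaration" and the "usage" are both expressed as tests.
    assert total_price([1.0, 2.5]) == 3.5
    try:
        total_price("not a list of numbers")   # a type error...
    except TypeError:
        pass                                    # ...surfaces in the test run
    else:
        raise AssertionError("expected a TypeError")

test_total_price()
```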

    But there is another cost: The cost of compilation and deployment. If you want to compile a program in C++, you must give the compiler access to all the declarations it needs. These declarations are typically held in header files. Each header file containing a declaration used by the program must be compiled by the compiler. If you have a program with N modules, then to compile just one of them you may have to read in all N header files. With a little thought you'll realize that this means that compile time goes up with the square of the number of modules.

    As code size increases, compile time rises in a distinctly nonlinear manner. For C++, the knee of the curve comes at about half a million lines. Prior to that point, compiles are pretty short. After that point compile times start to stretch and can get absurdly long. I know of one company that compiles 14.5 million lines of code using 50 SPARC-20s overnight.

    This massive compile time can be mitigated by making good use of dependency management. By carefully controlling the coupling of your modules you can reduce compile time to be proportional to NlogN. But even this can be very long for large projects. Java found a way out -- sort of. In Java, declarations are compiled and stored in classfiles. To compile a module requires only that the declarations be read from the classfiles of the used classes. Transitive dependencies are not followed. Thus, Java compile times can be drastically smaller than C++ compile times.

    Still, I often run into clients who have Java compile times that run close to half an hour or more. By using good dependency management, and very fast compilers, I think this can be drastically reduced. But the problem still exists.

    Dynamically typed languages, however, have virtually zero compile time. There are no declarations to hunt down. Compilation can occur while the code is being edited. As projects get larger and larger, this ability to be compiled instantaneously will become ever more important.

    Finally, there is the issue of deployment. In C++ or Java, if I change a base class, I must recompile and redeploy all classfiles for classes that are derived from that base. (Java folks, you can try to get away without doing this, but you'll be sorry.) Thus, even a small change to a single class can cause large redeployment issues. In dynamically typed languages, redeployment problems are not eliminated, but they are vastly reduced.

    So, Bob Martin's prediction for this decade: keep an eye on languages like Python, Ruby, and Smalltalk. They are likely to become extremely important.

    [Sept 21, 2004] XSLT and Scripting Languages

    XSLT is the only programming language standardized specifically for processing XML. Nevertheless, the XSLT specification states: "XSLT is not intended as a completely general-purpose XML transformation language." It is surely even less appropriate as a general purpose programming language. Nevertheless, some XSLT advocates have noted that more and more processing can be moved into the XSLT domain as more and more data is represented or transferred as XML.

    On the other hand, scripting languages are certainly general purpose. Most of the modern ones have features designed for programming in the large such as object orientation and exception handling. They also have quite solid XML support. They could certainly be used to do anything that would otherwise be done by XSLT. Advocates of these languages also see more and more processing moving from the world of traditional programming languages (e.g. C, C++ and Java) into scripting languages. Some claim that there is no need for XSLT at all.[XSL Considered Harmful] They ask why scripting languages should cede any part of the XML processing domain to XSLT.

    Obviously these arguments can only both be compelling if there is some substantial overlap between the problem domains of scripting languages and XSLT. This paper is intended to explore this overlap and help the reader choose whether to learn and use one or both of these emerging technologies.

    [Sept 2, 2004] Patterns for Scripted Components

    Scripting languages are typically highly dynamic, with useful mechanisms for reflection and meta-programming. A program designed to take advantage of these features is often smaller and more efficient than one designed in the style of a more static, strongly typed language such as Pascal, Java, C or C++. These patterns attempt to capture and describe the design decisions that lead to a successful use of scripts and scripting.

    [Sept 1, 2004] Language Options Comparison

    4. Language Simplicity

    If we include everything plus the kitchen sink, the compiler and/or interpreter would be too difficult or expensive to build and debug. The learning curve would also be greater. One should not assume that a given scripting language will be the only language a person will ever want or need. Perl and Python are more complex than they need to be because their authors had a vision of the "ultimate language" with seemingly every feature they ever envisioned. (Of course, some of this depends on one's design philosophy. For example, I tend to use databases and table APIs for things that others put into arrays.)

    One way to make life easier for the compiler or interpreter (C/I) is simply to use function calls or API's for as much as possible. We will call this the function rule. The function rule says to use function/subroutine syntax to implement all operations unless you have a good reason to deviate.

    As an example of an unjustifiable violation of the function rule, XBase uses the dollar operator ($) to mean "is contained in". However, we see little reason not to make this a simple built-in function, such as InStr(), meaning "in string". Thus, if x $ "abc" could instead be represented as if InStr(x, "abc"). Because the logic for parsing a function is already built into the C/I, it adds very little to the C/I's burden to implement InStr(), whereas the dollar-sign operator is another token that has to be parsed. (It is also harder to document and index in a manual.)

    On the other hand, functions can get annoying for some commonly used operations. For example, math operations could be represented with functions like Plus(), Minus(), and Divide(). However, most language builders chose to implement these using the operators +, -, and / instead. Thus, instead of using: 
        total = a + b + c + d + e 

    one would have to use this if the function rule was strictly followed: 
        total = add(a, add(b, add(c, add(d, e)))) 

    (Note that it may be possible to implement something like add(a,b,c,d,e) but this approach has other sticky issues associated with it.)
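The trade-off above can be sketched in a language that happens to allow both styles. The helper names `in_str` and `add` mirror the article's hypothetical InStr() and add(); a variadic add() is one way around the nested-call problem just noted:

```python
# Sketch of the "function rule": the contained-in test as a plain function,
# and a variadic add() that avoids the nested add(a, add(b, ...)) calls.
def in_str(needle, haystack):
    """Function-syntax equivalent of XBase's `$` ("is contained in") operator."""
    return needle in haystack

def add(*operands):
    total = 0
    for x in operands:
        total += x
    return total

assert in_str("b", "abc")
# Flat variadic call gives the same result as the nested form from the text.
assert add(1, 2, 3, 4, 5) == add(1, add(2, add(3, add(4, 5))))
```

Operator syntax (`a + b + c`) is still easier on the eye, which is exactly the stylistic limitation the article goes on to discuss.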

    Although functions keep the syntax simple by being a generic programmer interface, they do have stylistic limitations.

    Deviations from the function rule often indicate the "orientedness" (target audience or usage) of the language. This is where the art and politics of language design come in. For instance, in table-oriented programming, one would rather see and type table2.fieldx instead of the fieldvalue(table2, "fieldx") required by some APIs. Object-oriented programming uses the dot operator for yet another purpose.

    Footnote: Perhaps there is a pattern to deviations from the function rule that an implementer may want to build in or prepare for. This could allow custom orientedness to be added onto a language. This way a base language could be built, but with certain ways to "overload" some operators so that fans of objects, tables, strings, streams, pipes, math, etc. could do their favorite stuff without too many unnecessary functions, parentheses, or quotes. This may be an interesting topic of research, but it is outside the scope of this article.

    [Aug 10, 2004]  Interprocess scripting advantages

    The benefits of using interprocess communications instead of an embedded language aren't all obvious. In fact, the only obvious advantage is that the user can use any language which has access to the interprocess communications facilities that the application uses. The user is not restricted to the language the application developer liked best, or a language that evolved from a simple macro facility. In particular, if the problem being solved requires parts that are computationally expensive, a compiled language can be used in place of an interpreted scripting language.

    Another benefit is the ability to do concurrent processing. Since the application and the script are running on different processes, they can work in parallel if the interface has primitives that return before an operation completes, or the language has concurrent processing support. Since the language can be chosen by the user, finding one with some form of concurrent processing support should always be possible.

    As previously mentioned, sometimes an application is given scripting extensions by embedding the application in the scripting language instead of embedding a scripting language in the application. This allows scripts which control the application to be launched from other applications. The same effect can be achieved with an interprocess scripting facility if the application makes the facility public, instead of restricting it to scripts which the application launched. This simple change makes the application a server, allowing the application to be invoked by calendar programs and similar things. It also turns the scripting language into an application integration language, as multiple applications can be used from one script.

    Going back to the example of the word processor with the built-in dictionary, it would be nice to be able to use that from other applications, for instance to look up words on a web page. This is something that can be done in a select/copy/paste manner as previously described. If the scripting language uses interprocess communications facilities and the dictionary is a server, then a script started from the web browser could have the dictionary open a window to display the definition of the word. With an embedded scripting language, the script must launch a new copy of the dictionary for every word.

    Given the above evidence, I believe that using system interprocess communications facilities is the best solution. Yes, it costs a bit for each script, just like using a full scripting language instead of macro language costs a bit more. However, in both cases that small extra cost adds extra power and ease of use which outweighs that cost.
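    As an illustrative sketch of the server idea (Python, with an invented one-word-in/one-definition-out protocol over a local TCP socket; not from the original article), any client process that can open a socket can query the "dictionary", whatever language the client script is written in:

```python
import socket
import threading

# Toy dictionary "server". Because the interface is a plain socket,
# the client can be written in any language, compiled or interpreted.
DEFINITIONS = {"script": "a program interpreted at run time"}

def serve_one_lookup(listener):
    # Accept a single connection, read a word, send back its definition.
    conn, _ = listener.accept()
    with conn:
        word = conn.recv(1024).decode().strip()
        conn.sendall(DEFINITIONS.get(word, "(no entry)").encode())

def lookup(port, word):
    # A "client script": connect, send the word, read the definition.
    with socket.create_connection(("127.0.0.1", port)) as conn:
        conn.sendall((word + "\n").encode())
        return conn.recv(1024).decode()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve_one_lookup, args=(listener,)).start()
print(lookup(port, "script"))
```

    Note that server and client run as separate threads of control here, which is exactly the concurrency point made above: neither side has to block the other.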


    [May 23, 2004] Java trends: Scripting languages (Builder AU)

    It’s all about rapid development
    No question about it: Scripting languages, such as Jython, Python, Perl, and PHP, are becoming more and more popular. Jython is actually a complete implementation of the Python programming language. It’s written in 100 percent pure Java and allows easy access to Java libraries.

    This scripting trend is being driven largely by rapid application development (RAD), a development style that is gaining disciples all the time. As marketing executives put the full-court press on IT shops to speed production, IT managers are forced to look at the most efficient ways to beat deadlines. RAD is a prime mover here.

    “You can be very clever with some scripting languages and do things you can’t do with regular Java,” said Mukund Balasubramanian, CTO of Redwood City, CA-based Infravio, a Java and Web services integrator. “If you know how to use a good scripting language, you can save a lot of time and money in development.”

    The advantages of scripting languages
    Balasubramanian said that scripting languages offer the following advantages:

    The disadvantages of scripting languages
    However, Java scripting languages do have a few disadvantages:

    “Thus, scripting languages can quicken the pace of software development to a very large extent, but must be carefully chosen for a specific application—such as dynamic Web pages or to complement a ‘real’ programming language like Jython to Java,” Balasubramanian said.

    Scripting in and around Java

    "Just because the Java API, or any other API written for Java has documentation, doesn't mean that it's always obvious how to interact with it properly. " Part one of a two-part article
    See: Two Java Scripting Languages Close-up

    When faced with a new engineering project or task, one of the first questions to cross a developer's mind is often "What language am I going to write this in?" Sometimes Java seems like overkill; we'd like to throw together something quick and dirty, and it might be nice to do interactive development. Typically in such situations, developers prefer Perl, Python, or even Tcl. However, there is a large array of new, neat scripting languages built to take advantage of the Java virtual machine that might be able to better serve your needs than one of those old workhorses. In this article, we're going to look at a few of the more interesting Java scripting languages currently available.

    What advantages are there to a language targeting the Java virtual machine? There are many advantages to the person designing the language. Most immediate is portability; the new language has instant access to all platforms for which there is a Java virtual machine. One of the biggest pluses that you'll see in many of these languages is the ability to interoperate easily with Java. If you're used to the depth and breadth of features provided by Java APIs, this benefit may be worth pursuing.

    Of course, most of these languages are somewhat off the beaten path, which is a place some developers don't want to go, and rightly so. Braving a language with a small user community that might disappear at any time is not for the faint of heart, no matter how cool the language is. For example, a very cool Java-based scripting language formerly called WebL is currently embroiled in legal battles, causing it to disappear from its Web site. Not all Java-based languages are such big risks, though. For example, JPython enjoys a reasonably sized user base and has strong support from the Python community. People get paid full-time to work on it, and that probably won't change.

    How about JavaScript? It turns out that JavaScript has nothing at all to do with Java. JavaScript was originally a Netscape language called LiveScript. When Java first came out, Netscape made a deal with Sun to change the name to JavaScript; the whole goal was to take advantage of the Java hype. Other than the name, JavaScript has nothing to do with Java, beyond the fact that they both can work in a Web browser; JavaScript doesn't use the Java Virtual Machine at all.

    (Ironically, Sun originally planned to deploy John Ousterhout's Tcl as the first scripting language for Java.)

    Mark McEahern wrote:

    > Could you give us a concrete example where the use of regular expressions in
    > Perl is superior to Python?

    Superior probably is too strong a word, though regex in Perl is a little easier
    to get to right out of the box.  I.e., regex is part of Perl's "builtins".

    Perl regex example:

            # No import, compile, function or object syntax.
            # Implied match is with the "current" thingy ( $_, IIRC)

           action() if /regex/ ;    # perform action() if regex match

    To match with a specific object, say, a variable you use the "=~" operator.

        action( $a) if $a =~ /regex/ ; # perform action if regex matches $a

    Note that the statement for substitute is like the "vi" command:

        $a =~ s/old/new/gi  if $a =~ /pattern1/;

            # substitute "new" for "old" in $a if $a matches pattern1
            #    g suffix for global replacement
            #    i suffix for case insensitive comparison

    I personally find some of these forms to be abominations but they are highly
    mnemonic to people who have used Unix for a long time.

    The basic regex operators are similar to Python's, though Perl adds some extras
    such as

        {n,m}    # preceding pattern matches at least n but no more than m times

    A successful match sets a flurry of global variables:

        $& = the matched portion of the input string

        $` = everything before the match

        $' = everything after the match

    Parentheses in the regex break the matching pattern into "groups" and the
    portions of the string corresponding to each group may be accessed via:

        $1, $2, ...

    E.g.,

        s/^([^ ]*) *([^ ]*)/$2 $1/;    # reverse order of 2 words

        if( /Time: (..):(..):(..)/ ){    # extract hh:mm:ss fields
            $hours = $1;
            $min = $2;
            $sec = $3;
        }

     

    Python vs. Perl, which is better to learn

    [James J. Besemer]
    > Superior probably is too strong a word, though regex in Perl is a
    > little easier to get to right out of the box.  I.e, regex is part
    > of Perl's "builtins".

    Thank you for your response.  I'm not terribly familiar with Python's re
    module, but I'll see if I can duplicate the examples you provided...

    > Perl regex example:
    >
    >         # No import, compile, function or object syntax.
    >         # Implied match is with the "current" thingy ( $_, IIRC)

    As others have remarked, I prefer the explicitness of Python's re
    module--you have to import it to use it.

    >
    >        action() if /regex/ ;    # perform action() if regex match

    Well, literally speaking, as far as I know there's no Python equivalent to
    this.  To get a sense of why, you can type "import this" into the Python
    interactive interpreter.  ;-)

    Nonetheless, you can do the equivalent fairly easily by explicitly
    specifying the string you want to match:

    import re
    s = "Explicit is better than implicit."
    pat = re.compile("licit")
    if pat.search(s):
        action()

    > To match with a specific object, say, a variable you use the "=~"
    > operator.
    >
    >     action( $a) if $a =~ /regex/ ; # perform action if regex matches $a

    I think the same Python example above covers this situation.

    > Note that the statement for substitute is like the "vi" command:
    >
    >     $a =~ s/old/new/gi  if $a =~ /pattern1/;
    >
    >         # substitute "new" for "old" in $a if $a matches pattern1
    >         #    g suffix for global replacement
    >         #    i suffix for case insensitive comparison

    You can use re.sub() to replace up to count occurrences of a pattern:

    import re
    s = "Explicit is better than implicit."
    old = re.compile("e", re.IGNORECASE)
    new = "f"
    count = 1
    if old.search(s):
        s2 = re.sub(old, new, s, count)
        print s2

    Not specifying count means replace 'em all.

    > The basic regex operators are similar to Python's, though Perl
    > adds some extras such as
    >
    >     {n,m}    # preceding pattern matches at least n but no more
    > than m times

    Python has that too:

      http://www.python.org/doc/current/lib/re-syntax.html

    > A successful match sets a flurry of global variables:
    >
    >     $& = the matched portion of the input string
    >
    >     $` = everything before the match
    >
    >     $' = everything after the match

    Thank Crom Python doesn't do that.  As someone else mentioned, these things
    are tucked away into match objects, if you want them, rather than being
    squirted into your namespace.

    > Parentheses in the regex break the matching pattern into "groups" and the
    > portions of the string corresponding to each group may be accessed via:
    >
    >     $1, $2, ...
    >
    > E.g.,
    >
    >     s/^([^ ]*) *([^ ]*)/$2 $1/;    # reverse order of 2 words

    This switches the first and last words of a sentence.  I didn't bother
    putting the period back in there or sentence-casing the new first word.

    import re
    s = "Explicit is better than implicit."
    pat = re.compile("^(\w+)(.* )(\w+)\.$")
    m = pat.search(s)
    if m:
        print "%s%s%s" % (m.groups()[2], m.groups()[1], m.groups()[0])

    >     if( /Time: (..):(..):(..)/ ){    # extract hh:mm:ss fields
    >         $hours = $1;
    >         $min = $2;
    >         $sec = $3;
    >     }

    Here's a case where Python's ability to name groups is interesting:

    import re, time
    t = time.asctime()
    pat = re.compile("(?P<hour>\d{2})\:(?P<minute>\d{2})\:(?P<second>\d{2})")
    m = pat.search(t)
    if m:
        print m.group('hour')
        print m.group('minute')
        print m.group('second')
    else:
        print "Not found."

    Cheers,

    // mark

    Python or Perl: Which is Better? by Kragen Sitaker

    (PDF) Some interesting observations, but somewhat biased toward Python.
    June 1, 2004 | Login Vol. 23, No. 3

    [Editors’ Note: Kragen Sitaker wrote this reply to the standard question, “Which is better,
    Python or Perl? And why?” He has graciously granted us permission to reprint it.]
    Python has a read-eval-print loop, with history and command-line editing,
    which Perl doesn’t.

    Python has a bigger standard library, which tends to be better designed. For example,
    it’s possible to write signal handlers that work reliably without a deep knowledge
    of the implementation (and I’m not convinced that a deep knowledge of the
    implementation is sufficient in Perl).

    time.time() and time.sleep() speak floating-point.
    open() raises exceptions when it fails.

    Every day, people post on comp.lang.perl.misc asking why their programs are failing, and it’s because some file is missing or unreadable, and they can’t tell what’s wrong because they’ve forgotten to check the return code from open(). This doesn’t happen in Python.
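    The contrast is easy to demonstrate; in Python the failure cannot be silently ignored (minimal sketch, constructing a path that is guaranteed not to exist):

```python
import os
import tempfile

# A path guaranteed not to exist: a fresh temporary directory is empty.
missing = os.path.join(tempfile.mkdtemp(), "no-such-file")

try:
    open(missing)
except IOError as exc:   # IOError is an alias of OSError in Python 3
    print("open failed:", exc)
```

    In Perl, the equivalent `open(FH, $path)` simply returns false, and the program marches on with an unopened filehandle unless the programmer remembered `or die`.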

    os.listdir() omits “.” and “..”; they’re always there, so you can include them by writing
    [“.”, “..”] + os.listdir(), but you almost never want them there. Perl’s readdir
    doesn’t omit them, which is a very frequent source of bugs in programs that use
    readdir.
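    A quick sketch of the Python side of this claim:

```python
import os
import tempfile

# A brand-new temporary directory contains nothing from Python's view:
# os.listdir() omits "." and "..", so the listing is simply empty.
d = tempfile.mkdtemp()
print(os.listdir(d))

# Perl's readdir() on the same directory would return (".", ".."),
# which the caller nearly always has to filter out by hand.
```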
    On the other hand, Perl has a much bigger nonstandard library – CPAN – and some of
    its standard library is better designed: system() and popen() can take a list of arguments
    to avoid invoking the shell, meaning all characters are safe.
    Python makes it sort of a pain to build types that act like built-in lists or strings or dictionaries
    or files, and (as of 2.0, with “x in y” now having a meaning when y is a dict)
    it’s impossible to build something that acts both like a dictionary and like a list. It’s relatively
    easy to build something that acts like a function.
    But Perl makes it a pain to use types that act like those things, and it’s impossible to
    build types that act like functions. But you don’t need to build types that act like functions,
    because you have Scheme-style closures.

    Lightweight scripting/extension languages Posted by atai (Journeyer)

    Dec 21, 2003

    Extension languages are designed to be embedded in applications to support customization of the application's behavior. Common scripting languages, like Perl and Python, are fairly "large", with powerful run-time engines and libraries; they are widely available, and script writers usually assume a stand-alone installation exists in the deployment environment.

    However, if one is looking for a language that's small enough so its source can be embedded in the distribution of, and built as part of, the application, Python and Perl may be "overweight." For really lightweight choices there are Lua and TinyScheme. Are there others? What are people's preferences and opinions regarding lightweight extension languages?


    Tcl!, posted 21 Dec 2003 by davidw (Master)
    Well... sure, why not? Not my fault if the answer is the same for both articles!

    In all seriousness, Tcl is pretty good for what you want, at least down to a certain point, especially if you are willing to do some hacking. For instance, in the Tcl CVS tree there is a reduced-footprint branch that was prepared for Cisco - they use it as an embedded scripting language on some of their fancier routers. ETLinux is also worth a look, although they use a custom Tcl of their own which is based on an older version of the language. Tcl is a good candidate in general for this space because of its nice C API, medium-sized footprint, and the very simple and flexible syntax (you can write control structures in Tcl itself, kind of like scheme). Looking at the negatives, Tcl is not as small as Lua (it does more), and maybe it has too many things built into the core that you would have to attempt to hack out, depending on what you want or don't want, even though there is a bit of code there already to assist you in this.

    Other possibilities that I'm aware of include elastiC and Forth, although the latter isn't really a "scripting language" in the sense that the others are. It can be extremely small, though. Ficl is a reasonably well implemented version to play with - I ported it to run on the eCos embedded operating system as a means to interact with the OS. Speaking of which, Lua runs on eCos too.

    I think, though, the conclusion that you will find is that it really depends on what you are trying to do. If you want to interact with the user, something "command oriented" like Tcl is great. If you want the smallest thing possible, Forth is probably the way to go. Lua is really fast and looks reasonably powerful. So, tell us what you need and we'll see what we can come up with.

    Not Tcl!, posted 22 Dec 2003 by etrepum (Journeyer)
    Tcl is extremely slow and has some evil syntax. How can you recommend that to someone? I've seen Lua used quite a bit for the purpose that is described in the article (i.e. stuff like Celestia, games, etc.).

    In any case, "small enough that its source can be embedded" seems to be a pretty stupid goal unless you actually have some serious memory/disk constraints such as with a cell phone, PDA, or game console. I can't imagine why one would use Perl to extend an application, but Python (from experience) or Ruby (just guessing) should be good candidates.

    Scriptix, posted 22 Dec 2003 by elanthis (Journeyer)
    I actually wrote my own scripting language for this purpose. Mainly because I write in C++, and needed a language that made extending and using pre-existing C++ classes as painless as possible.

    Scriptix website

    The language is sort of Java-like, is rather fast (although there are plenty of cases where I can make it even faster), uses the Boehm GC for memory management (which your C++ app needs to use if you mix classes from your app with Scriptix data types), supports multiple scripted threads of execution, and so on.

    Language FUD, posted 22 Dec 2003 by davidw (Master)
    So, let's start with last things first in the anti-Tcl post. The article's author does indeed need to specify what he wants, or we can't really help him much. I'm willing to give him credit for having a reason for asking for something that doesn't require much in the way of resources for his embedding/extension language.

    Which brings us back to Tcl. It was designed to be embedded. It has an extensive C API that gives you access to all kinds of internal goodies. If you don't mind crossing that barrier, the rest of the internals are also pretty easy to get a grasp of. It's well written, well documented C code, despite the fact that it has, of course, grown over the years and there are warts here and there.

    As to its syntax, saying that it is "evil" is of course a matter of taste. Personally, I like what Tcl offers because it's simple and clean - everything is a command. Of course, it does bow to practicality, which is the reason why it's not another lisp, but I like that - they seem to have found a happy medium (at least for me) between Algol-style syntax and Lisp's purity (which I find unwieldy). For those interested, Tcl's rules can be summed up in one man page: http://www.tcl.tk/man/tcl8.4/TclCmd/Tcl.htm

    As far as Tcl's speed, or presumed lack thereof - well, it's not the fastest thing out there, but it can certainly compete with the likes of Ruby and PHP according to the "Great Language Shootout". In any case, "extremely slow" is certainly an overstatement.

    But to each his own - I find a lot to like in Tcl, especially when I started looking "under the hood" at the underlying C code. It's good, solid stuff with solid engineering processes behind it.

    As for the original poster, we'll have to hear more to decide what might work best in that situation...

    JavaScript, posted 22 Dec 2003 by judge (Journeyer)
    JavaScript is great. mozilla.org's js engine is very portable and appears to be quite fast. Furthermore, js is easy to integrate into existing code, has a sane syntax, and lots of people know it.
    Apparently KJS is quite good too.
    Re: Not Tcl!, posted 22 Dec 2003 by patthoyts (Master)

    The myth that Tcl is slow seems to have an unholy reluctance to go away. There is no doubt that once-upon-a-time Tcl was indeed slow. However, in common with all other modern script languages, the interpreter core now includes a byte compilation stage which speeds things up quite dramatically.

     

    Speed of execution is not, however, the aim of using script languages. Tcl encourages very fast development - especially when writing graphical interfaces. Furthermore, Tcl is by far the simplest of the major script languages to extend with compiled modules. Only VBScript/JScript, which can be extended using COM objects, comes close for easy extension (if you consider writing COM objects simple). The ability to easily extend the language means that a Tcl program never need be slow. The few bottlenecks in your application can be fixed by writing a small amount of C, leaving the rest of your app easy to maintain and fast enough.

    Easy to extend.., posted 22 Dec 2003 by etrepum (Journeyer)
    You can use unmodified Objective C libraries directly from Python (via PyObjC), which is far easier than COM, but you can do that too. You can also use unmodified C libraries directly from Python (via ctypes), but you do have to know the headers in order to properly call the functions.

    Currently, PyObjC only works with Apple's Objective C runtime, but GNUStep support is being worked on.

    'Lightweight' is no realistic goal, posted 23 Dec 2003 by tjansen (Journeyer)
    Keeping a language 'lightweight' is not a realistic goal. If you integrate a language into a product and it is successful, sooner or later there will be a bunch of people who are using this language a lot, many of them >8h per day. They are your most important target group, and they don't care whether the language is 'lightweight', but will demand all the functionality that can make their lives easier. Every attempt to keep a language simple is doomed to fail; as an example, look at Java's evolution from 1.0 to what is proposed for 1.5. The only approach that can work is to avoid duplication and make the syntax so powerful that you can move functionality from the core language into libraries.
    What are the goals, considerations, requirements?, posted 25 Dec 2003 by robocoder (Journeyer)
    Examples:

    Picking a language (or rolling your own) also involves guesswork and personal bias because the decision is limited by our ability to foresee future uses (or user expectations).

    A while back I looked at CSL (an embeddable scripting language with C-like syntax) and C-Smile (ability to save compiled byte-code for later execution).

    Why I use Tcl in Altogether, posted 26 Dec 2003 by brouhaha (Journeyer)
    I use Tcl in Altogether, a microcode-level Xerox Alto simulator (work in progress). I use it for both scripting and as a debugger command language. It was pretty easy to replace my original hand-crafted argc/argv-style "parser" with Tcl.

    When I had some trouble with embedding Tcl (long since worked out), I asked some questions in the newsgroup. I was told that I was doing things "wrong". Instead of adding Tcl to an application as an extension language (as Tcl was originally intended), I was told that the model has changed, and that now the "correct" paradigm is to start with Tcl and build your application by adding onto it. I was extremely unimpressed with this concept, coming as it did from the Tcl "experts". I solved the technical problems on my own, and it's not clear to me that I can reasonably expect any degree of support from those experts.

    Because of this, and because I think Lua is a smaller and (IMNSHO) more elegant language, I started to switch from Tcl to Lua. There's actually a Lua-based branch in the Altogether Subversion repository, but it is not functional. I ran into two problems, one aesthetic and one technical in nature.

    1. Since I'm using the extension language to parse all commands the user enters, I don't like the need for parentheses around arguments (like a function call). For instance, if the Altogether user wants to examine 37 words of memory starting at location 017020, he should type the command "examine 017020 37", not "examine(017020,37)". I want an extension language that looks like a traditional command interpreter. Tcl does that. This is the reason I rejected Scheme, Guile, and FORTH. If a Smalltalk-like extension language were readily available, I might be willing to consider that, although perhaps it would be too verbose (e.g., "memory examineAt: 017020 count: 37").
    2. The interface for adding C functions to Lua is more sophisticated than that of Tcl, exposing a stack that contains objects of various types. While in general I think that is a win, converting Altogether to use Lua was going to require rewriting all of my original command functions to use that rather than the simple argc/argv approach.

    I was one of the software engineers at Telebit (R.I.P.) working on the NetBlazer router. We used Tcl as a scripting language, though I didn't ever use it much at the time. I thought it was fine for that, but new router features started getting implemented in Tcl that IMNSHO should have been native code.

    I wasn't aware of the reduced-footprint version of Tcl mentioned by davidw, but it probably would suit Altogether just fine.

    Tcl extension vs embedding, posted 27 Dec 2003 by davidw (Master)
    There are certainly Tcl experts who will give you the advice you mention above, but there are plenty of others, myself included, who don't really agree, or at least think that there are plenty of situations where Tcl should function as an embedded language. This wiki page: http://mini.net/tcl/9303 contains some discussion amongst Tclers about the issue. I use Tcl to extend Apache, so I can't abandon that model myself, and in reality I think it works pretty well.

    I think it's a good idea to be wary of "the one true way" with anything, and I certainly disagree with it for Tcl, which is by its very nature supposed to be flexible enough to support a variety of programming paradigms.

    I agree with your desire for a very simple syntax, and recommend that you continue with Tcl if it works for you. Normally, the people on comp.lang.tcl are quite friendly - it is in fact one of the few newsgroups I still find useful. A little bit of gentle explanation on your part ought to be enough to make people understand that you are not going to rewrite your program to fit their way of doing things.

    Shell syntax, posted 27 Dec 2003 by demoncrat (Journeyer)
    Not to push Scheme at you or anything (I wouldn't argue against Tcl if that's what you're comfortable with), but you can have a Scheme shell that accepts "examine 017020 37" just by omitting the outermost pair of parentheses. A guy I knew at Caltech was scripting the Magic chip layout tool that way and he found it very comfortable. You can also avoid Lua's explicit stack management in your code by using a Scheme implementation with a conservative garbage collector.
    No single path, posted 27 Dec 2003 by mx (Journeyer)
    I've embedded several scripting engines into several types of commercial applications. Experiences overall have been mixed, and the best results came from two company-brewed little-languages. The public engines I've worked with included the Moz JS engine, Tcl/Tk, Perl, Java, and a Small-C variant.

    The JS engine was the least suitable of the bunch, and proved to be very difficult to port, mostly because it was well-optimised for 32-bit architectures (and the optimisations were not documented). We did port it, but it's been a poor experience overall. The documentation is very light, and its threading model was a thorn in our threading model (especially because it uses the Netscape portable library, which added yet another dependency).

    Small-C was effective, but didn't really fit the users. This is an important point, and is one we missed a few times (Perl, JS, Java also didn't fit well).

    Tcl, Java and Perl are larger engines (different sizes, but all suffered similar problems). Each engine has its own data model, and proxying to application code tended to be expensive, especially for our quasi-real-time systems (it was OK for user interfaces, but users didn't really want to type in code in the UI applications). The dual-data-model problem is common with public engines, unless you build your application around the data accessible from within the VM. The glue work for Java and Perl was tedious, though we were able to automate most of it.

    Speaking of glue, one of the best engines I've seen was a Scheme variant, though I've never used it in production code. The engine I reviewed had a simple mechanism for connecting data and callbacks, much simpler than any other I've seen. I suspect that the simpler interface may have lost some power, but I never hit the ceiling in my short time with the engine.

    The custom little-languages I've built had the advantage of sharing a data model with the application. One of the engines was embedded in a data-collection framework, and was distinctly event based (almost to the point of escaping a procedural definition). The language was received well by the people who used it, and has lasted for several years (still in use). And the implementation was under 1k lines, including all of the glue.

    The second custom language was co-developed by a coworker, and is a simple stack-based VM. The syntax is a clean XML, which allows for a simple user-interface (xml is easy to machine-generate). The script is never hand-edited (outside of unit-testing), and the users love the GUI-script builder. The engine performs well too, mainly because it's optimised for a very narrow set of tasks.

    We developed one custom engine that was a failure from the start. It was based on the concepts of a previous in-house engine, but we tried to make it generic (and for no good reason). Generic was one of those pure engineering goals that had nothing to do with what people wanted. The engine took many months to build, and was a failure on every level (it was a de facto second system). The XML-based script was the replacement, which brought back the focus, though as a tiny subset of the failed engine. Smaller is better!

    I've learned that scripting languages are good for applications. While re-use is good, it usually results in using a broad engine, which has implications for performance and user grokability. It's important to know the people using the application, and what they really want (not just what is technically viable or desirable). GUI-based languages seem moronic (from a geek point of view), but can really simplify support problems related to scripting, and are self-evident for nearly any user. Custom languages can really benefit an application, as you can control the language and performance focus, which beats out re-use any day. And custom languages don't have to be a large effort, though they can easily become that if you lose focus.

    Here's mine, posted 31 Dec 2003 by sej (Master)
    comterp is a stand-alone scripting language with full-strength C expression syntax (minus the ternary operator) and Lisp-like semantics, plus a few extras like dotted attribute lists (property lists) and APL-like streaming (or vectorization). I've embedded it in a drawing editor, but there is a stand-alone MANIFEST if you want to strip it out of the ivtools source tree.
    Extend, don't embed!, posted 31 Dec 2003 by nelsonrn (Master)
    This is the wrong philosophy entirely! I will tell you exactly what will happen if you try to embed a lightweight programming language into your application. People will like your application and will use it. They will seek to extend it beyond the capabilities of the language. You will be put in the uncomfortable position of having to make your programming language heavyweight, or telling these users that they are out of luck (been there, done that).

    What you should do instead is write your program in Python, and when you find that you need more than Python can do, write an extension to Python in C. That's what you were planning to do anyway. What that will do is 1) force you to find out what part of your program REALLY needs to be written in C, and 2) make it available to other people to do cool things.

    By doing this, people will have the full extensibility of Python available to them. For example, there's a quip that every program gets extended to the point where it can send email. By extending Python, your program starts OFF able to send email. :-)

    Onyx, posted 4 Jan 2004 by trs80 (Apprentice)
    Lambda the Ultimate linked to Onyx today. It's stack based, so probably best if you're targeting ex-PostScript programmers ;-). Slightly more seriously, it apparently can be configured to be as lightweight as required (you probably don't need regexes in your bootloader), has a syntax good for data files (a feature of Lua), and is threaded (via pthreads).
    A couple more interpreters for perusal, posted 10 Jan 2004 by MisterBad (Master)
    The Onyx page mentions a few more interpreters it's like. Two I thought interesting were:

    Figured I'd post them here, for review.

    [Jan 11, 2003] Which is better: Perl or Python? More importantly, why? by Kragen Sitaker

    Kragen Sitaker kragen@pobox.com
    Wed, 23 Jan 2002 03:38:01 -0500 (EST)


    Not surprisingly, both are better than the other in some ways. (Be warned, this is about 2500 words.)

    Python has a read-eval-print loop, with history and command-line editing, which Perl doesn't.

    Python has a bigger standard library, which tends to be better designed. For example:

    On the other hand, Perl has a much bigger nonstandard library --- CPAN --- and some of its standard library is better-designed:

    Python makes it sort of a pain to build types that act like built-in lists or strings or dictionaries or files, and (as of 2.0, with 'x in y' now having a meaning when y is a dict) it's impossible to build something that acts both like a dictionary and a list. It's relatively easy to build something that acts like a function.

    But Perl makes it a pain to *use* types that act like those things, and it's impossible to build types that act like files or functions. But you don't need to build types that act like functions, because you have Scheme-style closures.

    Python's syntax is far, far better.

    Except that it's indentation-sensitive, which makes it slightly harder to cut-and-paste code and offends the kind of people who unthinkingly adhere to stupid traditions for no good reason. (It might offend other kinds of people, too, but I know it offends this kind of people.)

    Perl has a C-like plethora of ways to refer to data types, which brings with it lots of confusion and dumb bugs. It also makes things like arrays of arrays and dicts of dicts slightly more confusing.

    Perl has lots of implicit conversions, which hide typing errors and silently give incorrect results. Python has almost none, which leads to slightly more verbose code (for the explicit conversions) and occasional fatal exceptions (when you forgot to convert). (Python unfortunately has some and is getting more.)

    On the other hand, Python overloads + and * to do different, though vaguely related, things for strings and numbers; Perl makes the distinction explicit, calling Python's string + and * . and x instead. Also, there is a certain variety of implicit conversion --- namely, from fixnums to bignums --- that Python doesn't yet do, but should. (Python 2.2 does it.)
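The overloading described here is easy to see directly; a small illustration (Perl would spell the string versions `.` and `x` instead):

```python
# '+' and '*' do related-but-different things for numbers and strings
print(2 + 3)       # 5
print("ab" + "c")  # abc
print(2 * 3)       # 6
print("ab" * 3)    # ababab
```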

    Python's variables are local by default. Perl's are global by default. Perl's policy is unbelievably stupid. Worse, in Perl, the normal ways to make variables local don't apply to filehandles and dirhandles, so you have to use special tricks for them.

    Perl can be (and, for me, always is) configured to require that all variables be declared and local. Python has no way to require variable declarations, although reading an uninitialized variable is a run-time error, not a run-time warning.

    Perl implicitly returns the value of the last expression in a routine. Python doesn't. This is a point in Python's favor most of the time, although it makes very short routines verbose. (Although Python has lambda, which lets you write those very short routines in the Perl style.)

    Perl has nested lexical scopes, which means that occasionally your variables disappear when you don't want them to, but more often, they aren't around to cause trouble when you don't want them to. They also make it really easy to write functions that return functions as Scheme-style lexical closures. In Python, writing functions that return functions is painful; you must explicitly list all of the values you want to close the inner function over, and if you want to keep callers from accidentally blowing your closure data by passing too many arguments or keyword arguments with the wrong names, you need to write a class with __init__ and __call__. Also, in Python, if you want statements in your closure, you can't write it inline --- you have to write def foo() and then refer to foo later.
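A minimal sketch of the `__init__`/`__call__` workaround described above (the class name `Adder` is illustrative, not from the original). Note that since statically nested scopes arrived in Python 2.1/2.2 (PEP 227), nested functions do close over enclosing scopes, so this dance is mostly historical:

```python
class Adder:
    """Callable class standing in for a closure: the closed-over
    value is captured explicitly in __init__."""
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return x + self.n

add5 = Adder(5)
print(add5(3))  # 8
```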

    Also, Python 2.x has list comprehensions, which reduce the need for really simple inline functions (for map and filter), and also are a very nice language feature in their own right. (And they give you nested scoping, although it's kind of gross.)
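For instance, a single list comprehension replaces a map/filter pair:

```python
# squares of the even numbers in 0..4, without map() or filter()
squares = [x * x for x in range(5) if x % 2 == 0]
print(squares)  # [0, 4, 16]
```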

    The Perl parser gives better error messages for syntax errors.

    Perl optimizes better.

    Python has an event loop in the standard library. Perl has POE, which isn't in the standard library.

    Perl has while (<>). Python doesn't, although it has fileinput.input(), which seems to be broken for interactive use. (It doesn't hand the lines to the loop until it's read 8K, and it requires you to hit ^D twice to convince it to stop reading and once more to end the loop.)

    Strings in Python are immutable and pass-by-reference, which means that passing large strings around is fast, but appending to them is slow, and it's possible to intern so that string compares are blazingly fast. Strings in Perl are mutable and pass-by-value, which means that passing large strings around is slow, but appending to them is fast, and comparing them is slow.
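The append-cost asymmetry leads to the usual Python idiom of collecting pieces and joining once; a sketch:

```python
# Each '+=' on a string copies the whole string (strings are immutable),
# so incremental appending is quadratic; collect pieces and join once.
pieces = []
for i in range(4):
    pieces.append(str(i))
s = "".join(pieces)
print(s)  # 0123
```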

    Python lists don't auto-extend when you try to assign to indices off the end. Perl lists do. This is generally a point in Python's favor.

    Perl autovivifies things, so you can say things like $x->{$y}->[$z]++, which will make a hash for $x if there isn't one already, an array for $x->{$y} if there isn't one already, and an element for $x->{$y}->[$z] if there isn't one already, before incrementing it from its default value of zero. Doing this in Python is painful. However, Python allows tuples as hash/dict keys, which lessens the need for this; you can write

        if not x.has_key((y, z)):
            x[y, z] = 0
        x[y, z] = x[y, z] + 1
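Since tuples work as dict keys, the same counting idiom can also be written with `dict.get`, and `collections.defaultdict` gives a rough analogue of Perl's autovivification; a sketch (data hypothetical):

```python
from collections import defaultdict

# tuple keys plus dict.get() replace the has_key() dance
counts = {}
for key in [("a", 1), ("a", 1), ("b", 2)]:
    counts[key] = counts.get(key, 0) + 1
print(counts[("a", 1)])  # 2

# defaultdict "autovivifies" nested levels much like Perl does
tree = defaultdict(lambda: defaultdict(int))
tree["x"]["y"] += 1
print(tree["x"]["y"])  # 1
```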
    
    Python treats strings as sequences, so most of the list and tuple methods work on them, which makes some code much terser. You have to use substr() in Perl.

    Python requires you to use triple-quoted strings to have multiline strings. Perl has here-docs, but also lets ordinary strings cross line endings.

    Perl indicates the ends of ranges in two ways: the index one past the end of the range, and the index of the last element in the range. .. uses the second; @foo in scalar context uses the first; etc. Python consistently uses the index one past the end of the range, which is confusing for new users.

    Python has this icky (x,) syntax to create a tuple of one item. Perl doesn't have this problem.

    On the other hand, Perl has cryptocontext bugs: expressions evaluate to different, and possibly unrelated, things in scalar and list context. This rarely bites me any more, but it used to.

    Perl's context-dependency in function evaluation leads to brittleness problems, in ways that are difficult to explain; it leads to difficulties in wrapping functions, a la Emacs advice.

    Python has reasonable exception handling; you can catch just the exceptions you expect to have happen. Perl has 'die', and if you want, you can eval {} and then regex-match $@ to see if it was the exception you wanted, and if not, die $@. The usual upshot is that Perl programs that catch some exceptions usually end up catching all exceptions and continuing in the face of exceptions that should be fatal.
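The selective-catch style described above looks like this in Python (the function name is illustrative):

```python
def parse_or_zero(text):
    try:
        return int(text)
    except ValueError:
        # catch only the exception we expect; anything else propagates
        return 0

print(parse_or_zero("42"))            # 42
print(parse_or_zero("not a number"))  # 0
```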

    On the other hand, it's still too easy to write a program that does that in Python, too, so people do.

    Python gives you backtraces when there are exceptions, from which you are more likely to be able to find the error than from Perl die messages, because they have more information; but Perl die messages are likely to tell you where the error is more quickly, for the same reason. Perl has 'croak', which lets you decide which level of the call stack to accuse of causing the error, and can give you backtraces if you want them.

    The Python syntax for referring to things in another module is terse enough that people actually use it. Perl's syntax for the same thing is uglier ($math::pi instead of math.pi) and Perl module names are longer, and so people tend to import things from the other modules into their own namespace. This makes Perl programs harder to understand.

    However, in both Perl and Python, you can specify which names can be imported from your module into someone else's namespace, but you can't specify which names can be referred to from another module (e.g. as math.pi, $math::pi). The consequence is that, in Perl programs, you can usually tell which names are internal to the module and which ones are used from other modules, and in Python programs, you usually can't. (Without looking at the other modules, that is.)

    Perl lets you trivially build dicts out of lists, which is good, because lists are easy to compute. Python doesn't, although you can write imperative loops to do the same thing.

    Perl lets you easily splice lists into other lists (functionally, in list literals); Python requires you to do that imperatively.

    You can't slice dicts in Python, although you can use 2.x listcomps to get almost the same effect: Perl's @thing{qw(foo bar baz)} becomes [thing[k] for k in 'foo bar baz'.split()]. I'm not sure whether this is better or worse; I think they're both pretty unreadable.
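The hash-slice translation mentioned above, spelled out (data hypothetical):

```python
thing = {"foo": 1, "bar": 2, "baz": 3, "quux": 4}
# Python's listcomp stand-in for Perl's @thing{qw(foo bar baz)}
sliced = [thing[k] for k in "foo bar baz".split()]
print(sliced)  # [1, 2, 3]
```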

    Perl has 'last LABEL' and 'next LABEL'. Python doesn't. This is stupid of Python, although I don't need multi-level break often. I can get two-level break by moving the nested loops into a new function and using 'return', and two-level continue by one-level break.
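The function-plus-return trick for a two-level break can be sketched like this (names illustrative):

```python
def find_pair(matrix, target):
    # 'return' doubles as a two-level break out of the nested loops
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            if value == target:
                return (i, j)
    return None

print(find_pair([[1, 2], [3, 4]], 3))  # (1, 0)
```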

    When Perl converts aggregate data types into strings (e.g. for printing), it turns any references into ugly strings. When Python does, it recursively prints what is pointed to (which fails if the structure is cyclic).

    In both Perl and Python, the rules for what counts as true and what counts as false in conditional expressions are needlessly complicated.

    In Python, loop conditions that have side effects end up needing to be hidden in functions, or you have to write an N-and-a-half-times loop, which there's no syntactic construct for, so you have to kludge it with 'while 1:' and 'break'.

    In Python, you can iterate over multiple sets of items at once:

    for number, name in [(0, 'zero'), (1, 'one'), (2, 'two')]:
    	pass
    
    You can't do that in Perl, although you can do the equivalent if you have an iterator function which returns the tuples one at a time: while (($number, $name) = next_num_name) { }

    Python also has the zip/map(None,...) function to make this easier, and map() can take multiple sequences which it iterates over in parallel.
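A small sketch of the parallel iteration, using `zip` (which replaces the older `map(None, ...)` spelling):

```python
numbers = [0, 1, 2]
names = ["zero", "one", "two"]
pairs = list(zip(numbers, names))  # iterate two sequences in parallel
for number, name in pairs:
    print(number, name)
```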

    Python has built-in arbitrary-precision arithmetic. Perl has it in a nonstandard library.

    Python has built-in named, default, and variadic parameters; Perl lets you do all those things yourself, which means that every Perl library that uses named parameters does it differently, and none of them have syntax as nice as Python's f(x=3, y=5) syntax.
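All four parameter kinds in one illustrative function (`f` is hypothetical, not from the original):

```python
def f(x, y=5, *args, **kwargs):
    # named arg, default arg, variadic positionals, variadic keywords
    return x + y + sum(args) + kwargs.get("z", 0)

print(f(x=3))           # 8   (default y=5 applies)
print(f(1, 2, 3, z=4))  # 10
```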

    Perl has class methods. Python doesn't, although, unfortunately, it is adding them in 2.2.

    Perl unifies classes with modules; Python doesn't. So in Python, you can't import a class directly, the way you can in Perl; you can import it from the module it lives in, or you can import its module and get it from there. In Perl, the module is the class. On the other hand, in Python, modules are unified with files, and in Perl, they aren't; this usually results in more verbosity in Perl.

    Python lets you create classes at run-time with the same ease, or lack thereof, that you can create functions. Perl doesn't. This is arguably excessive flexibility that leads to excessive cleverness and unmaintainable code.

    Perl will destruct any objects left around at program exit, possibly resulting in destructing objects that hold pointers to already-destructed objects; Python doesn't destruct them at all. Both of these approaches suck.

    Python's sort() and reverse() are in-place only, which means they don't work on immutable sequences, and often makes your code more complicated; this is moronic. Perl did the right thing here.

    Perl's split uses a regex. Python's standard split doesn't. Perl is better here. However, Python's standard split defaults to the right separator (whitespace) when you just specify a string, and Perl's split, by default, discards trailing empty fields, which Python's doesn't.
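The split differences can be checked directly; a sketch:

```python
import re

# default split: any run of whitespace, no empty fields
print("a  b\tc".split())            # ['a', 'b', 'c']
# regex-based split, the Perl-style behaviour, lives in the re module
print(re.split(r",\s*", "a, b,c"))  # ['a', 'b', 'c']
```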

    Python's built-in comparison routines do recursive lexical comparison of similar data structures, so you can sort a list of lists or tuples straightforwardly; and you can sort records by some computed key by forming tuples of the key and the record, then sorting the tuples. If you try to sort complex data types in Perl, it will sort them by memory address.
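The key-tuple sort described above is the classic decorate-sort-undecorate pattern; a sketch sorting strings by length:

```python
records = ["banana", "fig", "cherry"]
# decorate: pair each record with its computed key
decorated = [(len(r), r) for r in records]
decorated.sort()  # tuples compare lexicographically, key first
result = [r for (_, r) in decorated]
print(result)  # ['fig', 'banana', 'cherry']
```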

    Python's regular expression library is easier to understand than Perl's, and uses mostly compatible syntax (although Perl keeps adding features). Perl's regular expressions return subexpressions by mutating global variables $1, $2, etc., which have their state saved and restored in hard-to-understand ways, and they don't mutate those variables if they don't match. Python's regular expression match operator returns a 'match object', which is null if the regex failed to match, or has a method to fetch numbered subexpressions of the match if it didn't fail to match.
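The match-object protocol in miniature:

```python
import re

m = re.match(r"(\w+)-(\d+)", "item-42")
if m:  # the match object is None when the regex fails to match
    print(m.group(1), m.group(2))  # item 42
```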

    See also http://mail.python.org/pipermail/python-list/1999-August/009693.html http://www.amk.ca/python/writing/warts.html


    FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of environmental, political, human rights, economic, democracy, scientific, and social justice issues, etc. We believe this constitutes a 'fair use' of any such copyrighted material as provided for in section 107 of the US Copyright Law. In accordance with Title 17 U.S.C. Section 107, the material on this site is distributed without profit exclusively for research and educational purposes. If you wish to use copyrighted material from this site for purposes of your own that go beyond 'fair use', you must obtain permission from the copyright owner.



    Copyright © 1996-2016 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License.


    Original materials' copyright belongs to the respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.




    Last modified: January, 24, 2014