Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and bastardization of classic Unix

Scripting Bulletin, 2005

NEWS CONTENTS

Old News ;-)

[Jun 30, 2005] Gosling: A closer look at Java

ZDNet

Through projects such as Groovy, Sun is talking about moving the worlds of Java and scripting languages closer together. But I confess I'm not sure how exactly programming languages are different from scripting languages such as PHP, Perl or Python.

Gosling: Your confusion is well founded. There's an awful lot of loose language. The terms tend to mean different things to different people.

When people talk about scripting languages, they often talk about things that are more toward having a developer be able to slap something together really quickly and get a demo out the door in minutes. How fast the thing runs or how well the thing scales or how large a system you can build tend to be secondary considerations.

In the design of Java, we didn't care so much about how quickly you could get the demo out the door, we cared about how quickly we could get a large, scalable system out the door. We ended up making difficult decisions. In general, scripting languages are a lot easier to design than the real programming languages.

The Java design is at two levels: the Java virtual machine and the Java language. All the hard stuff is at the JVM and below. If you can build a scripting language that targets the JVM, you get a certain amount of both properties.

So you're executing script in a JVM?
Gosling: Yeah. All the Java libraries are available to things written in Groovy. And Java applications can use Groovy. They can incorporate Groovy scriptlets.

[Jun 30, 2005] James Gosling on Java

Slashdot

Page 2 and scripting languages (Score:5, Interesting)
by MarkEst1973 (769601) on Thursday June 30, @09:59PM (#12956728)

The entire second page of the article talks about scripting languages, specifically Javascript (in browsers) and Groovy.

1. Kudos to the Groovy [codehaus.org] authors. They've even garnered James Gosling's attention. If you write Java code and consider yourself even a little bit of a forward thinker, look up Groovy. It's a very important JSR (JSR-241 specifically).

2. He talks about Javascript solely from the point of view of the browser. Yes, I agree that Javascript is predominantly implemented in a browser, but its reach can be felt everywhere. Javascript == ActionScript (Flash scripting language). Javascript == CFScript (ColdFusion scripting language). Javascript object notation == Python object notation.
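The notation overlap claimed here is easy to see in miniature: many JavaScript object literals are simultaneously valid JSON and valid Python. A rough sketch (values such as true, false and null do differ between the two languages):

```python
import ast
import json

# One literal, readable both as JSON (JavaScript notation) and as Python.
text = '{"name": "Rhino", "version": 1.6, "tags": ["java", "js"]}'

as_json = json.loads(text)          # the JavaScript/JSON reading
as_python = ast.literal_eval(text)  # the Python-literal reading

print(as_json == as_python)         # same nested structure either way
```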

But what about Javascript and Rhino's [mozilla.org] inclusion in Java 6 [sun.com]? I've been using Rhino as a server side language for a while now because Struts is way too verbose for my taste. I just want a thin glue layer between the web interface and my java components. I'm sick and tired of endless xml configuration (that means you, too, EJB!). A Rhino script on the server (with embedded Request, Response, Application, and Session objects) is the perfect glue that does not need xml configuration. (See also Groovy's Groovlets for a thin glue layer).

3. Javascript has been called Lisp in C's clothing. Javascript (via Rhino) will be included in Java 6. I also read that Java 6 will allow access to the parse trees created by the javac compiler (same link as Java 6 above).

Java is now Lisp? Paul Graham writes about 9 features [paulgraham.com] that made Lisp unique when it debuted in the 50s. Access to the parse trees is one of the most advanced features of Lisp. He argues that when a language has all 9 features (and Java today is at about #5), you've not created a new language but a dialect of Lisp.

I am a Very Big Fan of dynamic languages that can flex like a pretzel to fit my problem domain. Is Java evolving to be that pretzel?

Scripting language talk... (Score:3, Insightful)
by MrDomino (799876) <mrdomino.gmail@com> on Thursday June 30, @10:19PM (#12956857)

When people talk about scripting languages, they often talk about things that are more toward having a developer be able to slap something together really quickly and get a demo out the door in minutes. How fast the thing runs or how well the thing scales or how large a system you can build tend to be secondary considerations. ...

This is nit-picking, I know, but I was under the impression that scripting languages were actually defined by the presence of an actively running interpreter during execution, making it possible to, e.g., construct and execute statements at runtime with things like PHP's eval() or Lua [lua.org]'s do(file|string) functions (see: http://www.lua.org/pil/8.html [lua.org] for discussion on dofile and Lua's status as a scripting language). I wasn't aware that capability for rapid prototyping or language speed had anything to do with it.
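That defining property, an interpreter still present at runtime that can execute newly constructed source, can be sketched in a few lines of Python (used here purely as a stand-in; Lua's dostring and PHP's eval play the same role):

```python
# Build source code as a string at runtime, then execute it in the
# still-running interpreter: the hallmark of a scripting language.
source = "def double(x):\n    return x * 2\n"

namespace = {}
exec(source, namespace)          # compile and run the generated code
double = namespace["double"]

print(double(21))                          # the runtime-built function works: 42
print(eval("double(4) + 1", namespace))    # expressions too: 9
```

A compiled language can approximate this only by shipping a compiler (or, as with Java 6's script engines, by embedding an interpreter).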

Taking that into consideration, then, would Java with JIT [wikipedia.org] qualify as an interpreted or compiled language? I'm not sure, myself---any thoughts?

That aside, a solid interview. Java looks to be pretty interesting; though in its current form it does bug the hell out of me (System.out.println()? Yeah, yeah, OO, but come on, three nested levels of scope just to get to a command line?), its progress has been impressive, and it's an innovative idea.

Java - unfulfilled promises (Score:2)
by guacamole (24270) on Thursday June 30, @10:50PM (#12957047)

I think the biggest selling point of Java was cross-platform compatibility. However, 10 years later, I think it is clear that this promise was largely a fraud. Java was perhaps a good new platform for writing enterprise applications and applications for certain consumer niches, for developers who didn't want to deal with unsafe languages like C or C++ or who were fooled by Sun into believing that Java is the best thing since whatever, but cross-platform compatibility for large applications still remains problematic. For example, most vendors of fairly complex Java applications I have seen not only require you to use a certain version of the OS and web browser (if it's an applet) but also demand a certain version of the Java virtual machine, with certain patches on some operating systems. And if you don't meet their requirements, you often run into problems. I bet a Python or a Perl script would have fared much better in many of those settings as far as portability is concerned.

IBM backs open-source Web software

CNET News.com

IBM is putting its corporate heft behind a popular open-source Web development technology called PHP, in a move meant to reach out to a broader set of developers.

On Friday, the tech giant announced a partnership with Zend Technologies to create a bundle called Zend Core, which includes IBM's Cloudscape-embedded database and Zend's PHP development tools. Zend sells tools built on the open-source edition of PHP and offers related services.

The two companies intend to devote programmers to make PHP work better with corporate databases and Web services protocols. IBM also plans to establish an area dedicated to PHP on its developer Web site, which will include technical resources such as white papers. Zend Core will be available as a free download in the second half of the year.

PHP, originally known as Personal Home Page, is a widely used scripting language for generating Web pages. Scripting languages like PHP are easier to learn than compiled languages such as Java or C. They are generally used for simpler tasks, rather than for complex number-crunching jobs.

Big Blue's public commitment to PHP is significant because the company has the technical and marketing resources to accelerate usage of the open-source product. IBM's investments in Linux and Java, for example, were crucial to mainstream corporate adoption of those technologies.

"We've got ideas for improving things," said Rod Smith, IBM's vice president of emerging technology. "We worked on specifications in the Java community that weren't language-specific and are applicable to the PHP world."

Staying committed to Java

PHP has a vibrant open-source community around it and is often used to build applications that run on Linux. The latest enhancements to the language are designed to help Web developers better handle XML-formatted data and Web services protocols.

IBM has made significant investments in Java software and will continue to invest in Java industry standards and its WebSphere-branded line of server software and tools, Smith said. Its commitment to PHP is a way to reach out to more developers, particularly in small and medium-size businesses, he said.

Smith said that the simplicity of PHP is one of its greatest assets. Business analysts without a background in computer science, for example, could create Web pages that pull data from back-end sources, he said.

"We're in a better position to look at a language that fits people's background and what they want to accomplish, rather than saying everything has to be written in one particular language that scales from easy-to-use to ..."

IBM Backs PHP for Web Development

Slashdot

Re:Power? Performance? Ease of Use? (Score:5, Insightful)
by RaisinBread (315323) on Friday February 25, @01:44PM (#11779920)

After moving away from Java, I couldn't be more pleased with the flexibility in PHP for web development and even shell script replacement.

It's concise (none of this System.out.println.pretty.please() funny business), the documentation is stellar, it plays nice with many different technologies, and I don't have to objectify and type-cast anything I don't want to. PHP 5 has all the object love and forced typing I need - and the great part is that it's there if I need it. PHP also has an extension repository, PEAR [php.net], and a slick templating engine, Smarty. [php.net]

Sure, it 'lends itself to coding flaws', but it also lends itself to flexible web development and very quick development cycles.

Just because you put your code monkeys in front of Visual Studio or Eclipse *does not make the code any cleaner.* You can't force people to write clean code (which IMHO is an art). More 'structured' languages might even cause dummies to write even more workaround code. And while OOP is really great, I've seen folks who objectify projects into oblivion.

Don't buy in on broad generalizations like the parent's - check PHP out. PHP is on the up, and IBM (along with many others) is noticing.

ASP vs. PHP (Score:5, Insightful)
by Spinlock_1977 (777598) <Spinlock_1977.yahoo@com> on Friday February 25, @03:27PM (#11781023)
(Last Journal: Friday February 25, @03:27PM)

I've written applications in both - and here's a difference no one talks about. When you open up MS's ASP environment, all that great GUI stuff is there and it's pretty easy to get going. Then as often happens in a development environment, you need a quick script to munge a long list of field names. Is ASP your first choice? It wasn't mine, because I couldn't find a way to get input into/out of it from the command line. So I whipped up a temporary web page with a text box to do it. More overhead than I wanted to spend for what should be a 2 minute job given an editor with macro key abilities.

Then a couple of years later I built my first app in PHP. The first thing I noticed was how easy it was to script from the command line. Since I'm not a perl junkie, it was real useful for small scripting jobs. I'd use a shell language for this, but frankly, I'd rather poke a fork in my eyes.

The next thing I noticed in PHP was I needed a modern editor (the free download doesn't come with an IDE), so I bought one from zend.com for a couple of hundred bucks. It's getting better, but like ASP, it too has no macro key ability (maybe I'm wrong and someone will tell me?), and other nits I'd pick given the chance.

But the big discovery in PHP was that all my ASP data-type problems magically went away. Hours and freaking hours I spent debugging situations where an int was returned from a DLL and ASP string'ed it, or vice versa. There were byref/byval issues I recall as well. We had to build local test harnesses for all our middle-tier ASP components because these problems rendered ASP too lame for a debugging platform.

But my original point is really that PHP is useful along a continuum of the problem space. Need a quick script? Need a nightly job that cleans up your app? Need web pages? PHP works well for all. ASP, from my experience, hits one for three.

Best quote from article (Score:4, Interesting)
by mgkimsal2 (200677) on Friday February 25, @04:18PM (#11781686)

One industry executive who requested not to be named said that IBM's push into PHP and scripting reflects IBM's disillusionment with the Java standardization process and the industry's inability to make Java very easy to use.

"IBM's been so fed up with Java that they've been looking for alternatives for years," the executive said. "They want people to build applications quickly that tap into IBM back-ends...and with Java, it just isn't happening."

Re:Power? Performance? Ease of Use? (Score:1)
by corecaptain (135407) on Friday February 25, @03:48PM (#11781207)

For the last several years I have been developing web apps with Java. I have experience with everything from ATG Dynamo, Tomcat, Struts, Spring Framework, etc.

It is a long story, but I coded and host a few apache/java websites for friends/family from my house. Recently, I wanted to move these to commercially hosted sites. While I could find a number of Tomcat hosts, the cost seemed to be roughly twice that of PHP hosting. Furthermore, I just didn't feel good turning over Java apps that I knew would be more complex/expensive to maintain.

So I decided that since the apps were relatively simple I would just re-implement them in PHP.

What I have discovered is that for these simple sites (5-6 applications, 10-20 pages) PHP is much more productive than Java. Take one example - try to build a web application that allows for image uploading, storage, and manipulation in Java - with PHP it was done in less than 30 lines of code and 1 hour.

I know all the arguments against using PHP vs. Java - but the bottom line is that the online documentation at php.net gets right to the point with plenty of code snippets (when's the last time a javadoc page actually had any samples of how to use the damn class), PHP works, you can find thousands of cheap hosting solutions, and if you need it, there is probably a high schooler down the street who you could hire to keep your site up.

As extreme programming teaches - do the simplest thing possible that works - I think IBM is realizing that maybe for some significant % of web apps/sites PHP might just be the simplest thing.....

Slashdot IBM to Open Projects at SourceForge.net

On Friday, IBM said it is contributing some 30 open-source projects to SourceForge.net. IBM also said it is expanding its own developerWorks Web site with more resources, including training in PHP and other popular technologies. This probably dovetails with IBM's new full-on support of the PHP language.

List of projects (Score:2, Informative)
by shutdown -p now (807394) on Friday February 25, @06:36PM (#11783232)
AIX Toolbox - http://sf.net/projects/aixtoolbox/

Bluetooth ad-hoc network simulator - http://sf.net/projects/bluehoc/

Dynamic Probe Class Library - http://sf.net/projects/dpcl/

Journaled File System - http://sf.net/projects/jfs/

IBM Jikes Compiler for the Java Language - http://sf.net/projects/jikes/

Jikes RVM - http://sf.net/projects/jikesrvm/

Java POS Config Loader - http://sf.net/projects/jposloader/

Toolbox for Java/JTOpen - http://sf.net/projects/jt400/

openCryptoki - http://sf.net/projects/opencryptoki/

LTC Linux Kernel Performance Project - http://sf.net/projects/linuxperf/

LSID (Life Science Identifier) - http://sf.net/projects/lsid/

Memory Expansion Technology - http://sf.net/projects/mxt/

OpenSSH on AIX - http://sf.net/projects/openssh-aix/

Standards Based Linux Instrumentation - http://sf.net/projects/sblim/

UDDI4J Java Class Library - http://sf.net/projects/uddi4j/

Web Services Description Language for Java -
http://sf.net/projects/wsdl4j/

ACP Modem (Mwave) Driver for Linux - http://sf.net/projects/acpmodem/

International Components for Unicode - http://sf.net/projects/icu/

Dynamic Probes - http://sf.net/projects/dprobes/

TCL extension library for IBM Speech Manager Applications Programming
Interface (SMAPI) - http://sf.net/projects/tclsmapi/

TCK for JWSDL ( JWSDLTCK ) - http://sf.net/projects/jwsdltck/

(from the SourceForge post on that @ http://sourceforge.net/forum/forum.php?forum_id=449291)

Monad

At the Professional Developers Conference (PDC) 2003, Microsoft announced a new scripting solution code-named Monad. Monad will provide a new hybrid command shell and scripting language known currently as Microsoft Shell (MSH). As with most code-named solutions, MSH will probably be replaced by another name when the shell and language are released. Although Microsoft isn't anywhere near shipping MSH, the new tool looks like it should have a significant impact on administrators in the next few years.

Even at first glance, the early MSH beta is strikingly superior to cmd.exe as a tool. MSH is a stream-oriented shell that draws its syntax and concepts from sources as diverse as the Korn shell (ksh) and the VMS Digital Command Language (DCL). A provider framework extends the standard navigation commands for a file system to complex structures such as Windows Management Instrumentation (WMI), Microsoft Active Directory Service Interfaces (ADSI), and the registry. Currently, MSH runs in a console shell, but don't let that mislead you into thinking that MSH is just another text-processing shell. MSH uses a console right now because complete support for Telnet access is a design goal; what you see on screen is a representation of full-featured Windows .NET Framework objects passed through a pipeline.
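The provider idea described above, reusing filesystem-style navigation over structures like the registry, can be mimicked in any language. A minimal Python sketch over a nested dictionary (the key names are invented for illustration; this only imitates the MSH mechanism, it is not MSH):

```python
# A toy "provider": expose a nested structure through ls-style calls,
# the way MSH exposes the registry as a navigable drive.
registry = {
    "HKLM": {
        "Software": {"Vendor": {"InstallDir": r"C:\Tools"}},
        "System": {},
    }
}

def ls(tree, path):
    """List child names at a slash-separated path, like a directory;
    return the value itself when the path names a leaf."""
    node = tree
    for part in filter(None, path.split("/")):
        node = node[part]
    return sorted(node) if isinstance(node, dict) else node

print(ls(registry, "HKLM"))                             # ['Software', 'System']
print(ls(registry, "HKLM/Software/Vendor/InstallDir"))  # C:\Tools
```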

Slashdot Microsoft's new CLI

Re:Very Nice (Score:5, Insightful)
by Kingpin (40003) on Friday October 31, @09:45AM (#7357707)
(http://slashdot.org/)
You get rated 'Insightful' for stating what OpenSource zealots hope. What if this shell actually knocks the socks off *sh?

What if Longhorn does indeed provide more security, not only in default settings, but more inherently in the OS?

Do you think the average developer/manager at MS is dumber than your average OS participant? (This is not a trick.. Damn, I'm falling in myself..)

But really - if "we" are to compete, we will have to steal the ideas that "work" from MS camp, just as they're "stealing" "our" ideas that WORK.

Linux is narrowing the gap to MS on the desktop (albeit slowly), and MS is narrowing the gap to Unix on eg. CLI, stability and security. Their software matures too, you know..

And then there's Apple. They make fun stuff. They are not afraid to invent, and they have the money to launch stuff that the OpenSource movement cannot. I don't quite know where to place them compared to OpenSource and MS.

The difference: (Score:5, Informative)
by moogla (118134) on Friday October 31, @09:51AM (#7357777)
(http://nervalhi.net:8080/ | Last Journal: Thursday June 26, @03:16PM)
msh exploits the transparency and "reflection" abilities of the object oriented features of the OS.

Read down the article for details on how they can now do things like mount the registry as a drive and walk it like a filesystem. Yegads!

bash (or some sh-variant) would have to be adapted to know specific things about linux to compete at that feature level, but it would become non-portable.

This is what the new sysfs interface is supposed to help with. Still, bash isn't object oriented (yet). The closest thing would be like perlsh.

I think people don't give MS enough credit for where they stand even today, frankly.

Re:The difference: (Score:2, Interesting)
by Waffle Iron (339739) on Friday October 31, @10:18AM (#7358039)
Read down the article for details on how they can now do things like mount the registry as a drive and walk it like a filesystem. Yegads!

...

I think people don't give MS enough credit for where they stand even today, frankly.

Way ahead of them there. Five years ago when I got sick of the insanely overengineered registry API, I wrote my own C++ wrapper that did essentially the same thing. I'm sure thousands of others have done this as well. Glad to see that they're catching up with the rest of us.

The thing that I can't fathom is WTF didn't they make the registry look like a file system in the first place? Now when they finally get around to doing it, people think that it's some kind of rocket science. It only seems high tech because the original API was so unusable.

Re:The difference: (Score:2)
by Gumshoe (191490) on Friday October 31, @10:22AM (#7358104)

Read down the article for details on how they can now do things like mount the registry as a drive and walk it like a filesystem. Yegads!

bash (or some sh-variant) would have to be adapted to know specific things about linux to compete at that feature level, but it would become non-portable.

This wouldn't be a feature of the shell on Linux so bash wouldn't need to be "adapted". I'm thinking /proc is a pretty close comparison to what you're talking about and that works just fine with vanilla shells.

Re:The difference: (Score:2)
by cryptochrome (303529) on Friday October 31, @12:37PM (#7359969)
(http://slashdot.org/~cryptochrome/journal/ | Last Journal: Saturday December 18, @10:17AM)
What MSH is proposing really isn't such a bad idea. It's something I've thought about before, actually. I think it wouldn't be that hard to apply principles of object orientation (or alternatively, functional programming) to the existing programs, while laying the foundation for future improvements.

In unix files are just a sequence of bytes - they have no structure. Generally all of them can be treated as text. Likewise every piece of returned data can be treated as text. This is simple - but it can also be a problem, because files are NOT the same. If there was a standardized means for knowing data structure and nature in unix (as well as ignoring it) it could allow more intelligent behavior. A program could return a standardized database table object to the shell, and instead of just outputting it as a plain data stream it could format it for easy reading. Or if that object were piped to another program, the program would know to handle it as a database table, including knowing the headers and relevant information. Even better, because pipes are actually based on lazy evaluation, they would take only the data they needed.

The other difficulty is pipes themselves - they're best suited for stringing together programs that take or return 0-1 chunks of data (which may include arrays). They aren't so good for programs that take multiple arguments.

Basically what we want to do is have programs return objects we can use instead of raw data. The solution is for all apps to return "raw" type objects by default, which still have certain basic attributes. Basic utility functions to convert the data to a particular type and integrate old style commands could ...

I think OS X is quite well suited for this. The existing cocoa frameworks provide a high degree of functionality and a huge set of object types ready to go. ObjC is dynamic, and its simple, powerful nested syntax is very nice. The only problem is it's quite wordy to write.
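The object-pipeline idea sketched in this comment is easy to prototype. A minimal Python sketch (the Table class, only_large stage, and sample data are all invented for illustration): a typed value flows through a pipeline stage structurally and is only rendered to text at the very end.

```python
# A tiny typed object flowing through a "pipeline" of functions: the
# consumer can either render it for humans or keep using it as data.
class Table:
    def __init__(self, headers, rows):
        self.headers, self.rows = headers, rows

    def render(self):
        """Format for display: what the shell would do at the end of a pipe."""
        width = [max(len(str(c)) for c in col)
                 for col in zip(self.headers, *self.rows)]

        def line(cells):
            return "  ".join(str(c).ljust(w) for c, w in zip(cells, width))

        return "\n".join([line(self.headers)] + [line(r) for r in self.rows])

def only_large(table):
    """A pipeline stage that filters structurally, not by re-parsing text."""
    return Table(table.headers, [r for r in table.rows if r[1] > 100])

procs = Table(["name", "mem"], [("init", 4), ("httpd", 512), ("cron", 8)])
print(only_large(procs).render())   # structured all the way, text only here
```

The stage never sees columns as character offsets; it filters on the second field of each row directly, which is exactly the behavior the comment asks of an object-aware shell.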

Re:The difference: (Score:1)
by phyy-nx (544808) <[email protected]> on Friday October 31, @01:17PM (#7360500)
(http://aaron.brewsters.net/)
I think people don't give MS enough credit for where they stand even today, frankly.
Agreed. One must remember, M$ has hundreds of the best and brightest operating system developers newly graduated from schools like MIT. They have been working tirelessly on these operating systems for YEARS as highly paid professionals in ideal development environments. This is not the M$ of 10-15 years ago.
Re:The difference: (Score:1)
by danlyke (149938) on Friday October 31, @03:55PM (#7362292)
(http://www.flutterby.com/danlyke/)
PerlFS [sourceforge.net] has allowed you to do pretty much arbitrary things on a filesystem for years. It'd be an afternoon's hacking to make your favorite "rc" file accessible this way.

I think the reason it hasn't been done yet is that everyone has acknowledged that the registry is a bad idea carried out to perfection.

Re:The difference: (Score:3, Insightful)
by evbergen (31483) on Sunday November 02, @03:46PM (#7372764)
(http://www.e-advies.nl/)
Read down the article for details on how they can now do things like mount the registry as a drive and walk it like a filesystem. Yegads!

Finally, they're starting to appreciate the unified namespace that unix has been offering since the seventies.

They're only doing it at way too high a level.

Everything should look like a file and be accessible through the same API. read(), write(), ioctl() and select() are all you fundamentally need to do with anything. Inband I/O, out of band I/O, and wait for event. What more could you want to do with any object whatsoever?

The unix model is so beautiful, too bad it isn't taken far enough, even in unix.

Microsoft has an especially long way to go if they're trying to unify all the different system objects on such a high level (the shell).
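The handful of calls named above really do cover a lot of ground. A small Python demonstration of that uniform interface, using an ordinary pipe as the "object" (assumes a POSIX system, where select() works on pipe descriptors):

```python
import os
import select

# One API for very different kinds of objects: write() for in-band I/O,
# select() to wait for an event, read() to consume the data.
r, w = os.pipe()                   # could as well be a file, socket, or device
os.write(w, b"hello")              # in-band I/O

ready, _, _ = select.select([r], [], [], 1.0)   # wait-for-event
data = os.read(r, 1024)            # same read() as for any other descriptor

print(data.decode())
os.close(r)
os.close(w)
```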

Re:This made me laugh .... (Score:4, Interesting)
by stratjakt (596332) on Friday October 31, @09:47AM (#7357733)
(Last Journal: Sunday September 29, @01:10PM)
You can do WinFS filtering through the "|" symbol. MONAD can also export natively to: HTML, XML, Excel, or plain command text in either a Table or List format.

Sure beats the hell out of using obscure grep commands to parse a blob of ascii.

And....the commandlets are developer friendly. You can make a commandlet by inheriting from the commandlet base class, and adding attribute tags to the public properties to make them parameters to the commandlet. .NET handles whether the user types "-?" or "/?", so you don't have to care anymore!

Sounds pretty easy for a developer to extend.

This is a good thing for MS to do. The slashbots are always whining about how MS takes standards and breaks them for its own gain. Rather than taint your precious bash or perl, they started from scratch.
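The attribute-driven parameter binding described for commandlets has analogues in most languages. A hedged Python sketch using argparse (the GetProcess class, its parameters, and the invoke helper are all invented here; this only mimics the commandlet mechanism, not MSH itself):

```python
import argparse

# Commandlet-style: declare parameters once on the class; a small framework
# helper turns the declared properties into command-line options.
class GetProcess:
    params = {"name": str, "top": int}   # "public properties" become flags

    def run(self, name, top):
        return f"listing {top} processes matching {name}"

def invoke(cmdlet, argv):
    """Build a parser from the declared parameters and dispatch to run()."""
    parser = argparse.ArgumentParser()
    for attr, typ in cmdlet.params.items():
        parser.add_argument(f"--{attr}", type=typ, required=True)
    args = parser.parse_args(argv)
    return cmdlet.run(**vars(args))

print(invoke(GetProcess(), ["--name", "httpd", "--top", "5"]))
```

As with commandlets, the author writes only the declaration and the body; flag parsing, type conversion, and help text come from the framework.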

Nothing new except overkill (Score:5, Interesting)
by Verteiron (224042) * on Friday October 31, @09:42AM (#7357665)
(http://slashdot.org/)
From the article:

One last thing: anything can be mapped to a drive, and drives don't just have to be letters. (Ok, I lied - that was 2) The example I was shown was that the registry was mapped to a drive, and you could navigate it like any other drive, with the results being returned from the commandlet as .NET objects!

The user has been able to map a filesystem to a folder rather than a drive letter since at least Windows 2000, and I think it was possible even under NT4. Nothing new there.

The registry (along with many other things) can be mapped as part of the filesystem fairly easily, as demonstrated by this 264kB DLL file [regxplor.com].

And as for returning search results as .NET objects? This seems rather like using a baseball bat to swat a fly...

Re:Microsoft has come a long way (Score:5, Informative)
by merlin_jim (302773) <`James.McCracken' `at' `stratapult.com'> on Friday October 31, @10:50AM (#7358490)
Recently, Microsoft has actually begun to produce command line tools for system operations, controlling your services, networks, policies, and registry from the command prompt [...] and still don't provide the full set of features.

One of Microsoft's design requirements for Windows Server 2003 was that EVERYTHING can be done from the commandline, that the GUI interfaces would have NO functionality that the commandline interface does not.

The Windows .NET Server bootcamp covers the GUI and commandline versions of all the tools, plus provides a take-away reference to each utility.

But they still have a long way to go, these features are poorly documented

Here's a list of the command line utilities in Windows Server 2003:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/xpehelp/html/_server_command.asp

Searching on individual names, or typing the name with a "/?" on the command line will yield more documentation.

Here's a link to the root reference for the WMIC utilities which are a little more powerful and easily scripted than the command line utilities:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/wmisdk/wmi/using_the_wmi_command_line_utilities.asp

think for a moment (Score:5, Interesting)
by penguin7of9 (697383) on Friday October 31, @12:26PM (#7359825)
I don't know why more people don't actively pursue a modern language for the shell interface. sh script syntax is torturous. So much easier and maintainable to write perl scripts. So why not use perl from the command line??

Yes, you don't know. But think for a moment: people have had Perl-like languages since the 1960's. Do you really think you or Microsoft are the first to think that using an object-oriented scripting language is a good idea?

The reason why people use sh syntax is because it is enormously effective. Try expressing something like:

find . -type f | xargs grep -il foo

in Perl or some other scripting language.

Of course, many people who complain about sh syntax really just don't know how to use it.

For interactive use by skilled users and many scripting tasks, bash/ksh is unbeatable. And for the kinds of scripts where Perl makes sense--you can simply use Perl.
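For comparison, a rough Python equivalent of the find | xargs pipeline quoted above, which rather supports the point that the shell version is hard to beat for terseness:

```python
import os

def grep_il(root, needle):
    """Paths of regular files under root whose contents contain needle,
    case-insensitively: the effect of  find . -type f | xargs grep -il needle"""
    pattern = needle.encode().lower()
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    if pattern in f.read().lower():
                        hits.append(path)
            except OSError:
                pass          # unreadable file: skip, like grep -s would
    return hits

# grep_il(".", "foo")  -> list of matching paths under the current directory
```

Fifteen-odd lines against one pipeline; the shell wins this round, exactly as the comment argues.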

This would basically trump anything msh could muster and also provide the entire universe of CPAN to the shell.

Yes, psh is a better version of what msh is trying to achieve. But, you know, even that's nowhere near good enough to dethrone bash/ksh.

Re:think for a moment (Score:2)
by Ars-Fartsica (166957) on Friday October 31, @01:01PM (#7360323)
The reason why people use sh syntax is because it is enormously effective. Try expressing something like:

find . -type f | xargs grep -il foo

in Perl or some other scripting language.

in psh:

psh% find . -type f | xargs grep -il foo

works just fine. EXCEPT I can now put this result in a perl var and do waaaay more with it than you could conceive, like apply anything in CPAN to it without losing context. It is 'conception' that is basically the problem - you don't know what it is like to use a real scripting language from the command line so its strengths are not apparent.

Re:think for a moment (Score:3, Insightful)
by penguin7of9 (697383) on Friday October 31, @11:57PM (#7365270)
in psh [...] works just fine.

People who propose systems like MSH want to iterate over files and all that good stuff using object oriented scripting features. I'm saying: that turns out not to be very useful in practice. The fact that psh happens to be able to emulate sh behavior is completely beside the point.

It is 'conception' that is basically the problem - you don't know what it is like to use a real scripting language from the command line so its strengths are not apparent.

Sure I do: I have used psh at times, and I have used Smalltalk and Lisp, which are much better at this kind of integration and scripting than even Perl or MSH.

But that kind of design of interactive shells, something that integrates full programming, has always lost out in the real world.

In fact, UNIX has a much better answer: rather than trying to force everything into the same address space, it provides facilities (environment variables, etc.) that let software written in multiple different "little languages" co-exist as if they were all part of a single, unified scripting environment.

If you want to use Perl in a script or pipe, just say "... | perl -e '...' | ...". And if you want to invoke other things from Perl, you can, obviously, use "open", "system", "`...`", and all those other facilities.

The UNIX approach is great; you should give it a try sometime. MSH and psh, on the other hand, are Microsoft-thinking: obvious, gimmicky, and not all that good in the end.

How blind are you people? Or is it arrogance? (Score:2)
by 3seas (184403) on Friday October 31, @10:47AM (#7358458)
(http://threeseas.net/ | Last Journal: Friday January 18, @01:44PM)
- a new MS Shell..... integrated with .NET and the (whatever you call it) standard-to-be GUI package
so you take the sum of programming concepts and datatypes and boil them down into a non-conflicting package (programming issues). In this you also create a GUI system to provide standard GUI functionality (2nd primary UI). Interface these to a Command Line Interface (1st primary UI), and considering that the 3rd UI is IPC (inter-process communication abilities, somewhat inherent in .NET) ------ you have the three primary UIs together, and with them you can create an autocoding environment. Add to that voice-to-text translation .....

From what I have seen of the comments being made on /. regarding the article..... there is an enormous depth of blind ignorance regarding what MS is up to.

Or maybe it's just arrogance?

Autocoding project proposal [debian.org]

Many are not going to realize they cannot see any further than following MS.... until it becomes too late.

Integration of the shell and the OS (Score:2)
by alispguru (72689) <[email protected] minus berry> on Friday October 31, @11:46AM (#7359281)
(Last Journal: Thursday November 13, @03:44PM)
You can see where they're headed with this - a "shell" which can do anything that can be done in a .NET program. The Un*x equivalent would be a C interpreter as a shell, but without the C low-level orientation. This shell is essentially a .NET interpreter.

The description reminded me of Lisp machines, which have this level of command-line support because they run Lisp all the way down to the bare metal, so the command-line expression language is exactly the same as the OS implementation language.

Re:Integration of the shell and the OS (Score:1)
by multi io (640409) on Friday October 31, @02:46PM (#7361555)
You can see where they're headed with this - a "shell" which can do anything that can be done in a .NET program. The Un*x equivalent would be a C interpreter as a shell, but without the C low-level orientation. This shell is essentially a .NET interpreter.

If you have a look at this [gotdotnet.com], you'll see that this is specifically *not* a generic .NET interpreter. You can't just script any arbitrary .NET classes/objects. Instead, you have to write special classes ("cmdlets") which you may then use in your scripts. One reason for this design seems to be that these commands can support special typed streams called "pipelines", so you can combine several commands using the "|" operator on the command line in a *nix shell-like manner.

Rob Pike Responds

Slashdot

What's really interesting is how you think about accessing your data. File systems and databases provide different ways of organizing data to help find structure and meaning in what you've stored, but they're not the only approaches possible. Moreover, the structure they provide is really for one purpose: to simplify accessing it. Once you realize it's the access, not the structure, that matters, the whole debate changes character.

One of the big insights in the last few years, through work by the internet search engines but also tools like Udi Manber's glimpse, is that data with no meaningful structure can still be very powerful if the tools to help you search the data are good. In fact, structure can be bad if the structure you have doesn't fit the problem you're trying to solve today, regardless of how well it fit the problem you were solving yesterday. So I don't much care any more how my data is stored; what matters is how to retrieve the relevant pieces when I need them.

Grep was the definitive Unix tool early on; now we have tools that could be characterized as `grep my machine' and `grep the Internet'. GMail, Google's mail product, takes that idea and applies it to mail: don't bother organizing your mail messages; just put them away for searching later. It's quite liberating if you can let go your old file-and-folder-oriented mentality. Expect more liberation as searching replaces structure as the way to handle data.

... ... ...

One odd detail that I think was vital to how the group functioned was a result of the first Unix being run on a clunky minicomputer with terminals in the machine room. People working on the system congregated in the room - to use the computer, you pretty much had to be there. (This idea didn't seem odd back then; it was a natural evolution of the old hour-at-a-time way of booking machines like the IBM 7090.) The folks liked working that way, so when the machine was moved to a different room from the terminals, even when it was possible to connect from your private office, there was still a `Unix room' with a bunch of terminals where people would congregate, code, design, and just hang out. (The coffee machine was there too.) The Unix room still exists, and it may be the greatest cultural reason for the success of Unix as a technology. More groups could profit from its lesson, but it's really hard to add a Unix-room-like space to an existing organization. You need the culture to encourage people not to hide in their offices, you need a way of using systems that makes a public machine a viable place to work - typically by storing the data somewhere other than the 'desktop' - and you need people like Ken and Dennis (and Brian Kernighan and Doug McIlroy and Mike Lesk and Stu Feldman and Greg Chesson and ...) hanging out in the room, but if you can make it work, it's magical.

When I first started at the Labs, I spent most of my time in the Unix room. The buzz was palpable; the education unparalleled.

(And speaking of Doug, he's the unsung hero of Unix. He was manager of the group that produced it and a huge creative force in the group, but he's almost unknown in the Unix community. He invented a couple of things you might have heard of: pipes and - get this - macros. Well, someone had to do it and that someone was Doug. As Ken once said when we were talking one day in the Unix room, "There's no one smarter than Doug.")

... ... ...

The future does indeed seem to have an OO hue. It may have bearing on Unix, but I doubt it; Unix in all its variants has become so important as the operating system of the internet that whatever the Java applications and desktop dances may lead to, Unix will still be pushing the packets around for quite a while.

On a related topic, let me say that I'm not much of a fan of object-oriented design. I've seen some beautiful stuff done with OO, and I've even done some OO stuff myself, but it's just one way to approach a problem. For some problems, it's an ideal way; for others, it's not such a good fit.

Here's an analogy. If you want to make some physical artifact, you might decide to build it purely in wood because you like the way the grain of the wood adds to the beauty of the object. In fact many of the most beautiful things in the world are made of wood. But wood is not ideal for everything. No amount of beauty of the grain can make wood conduct electricity, or support a skyscraper, or absorb huge amounts of energy without breaking. Sometimes you need metal or plastic or synthetic materials; more often you need a wide range of materials to build something of lasting value. Don't let the fact that you love wood blind you to the problems wood has as a material, or to the possibilities offered by other materials.

The promoters of object-oriented design sometimes sound like master woodworkers waiting for the beauty of the physical block of wood to reveal itself before they begin to work. "Oh, look; if I turn the wood this way, the grain flows along the angle of the seat at just the right angle, see?" Great, nice chair. But will you notice the grain when you're sitting on it? And what about next time? Sometimes the thing that needs to be made is not hiding in any block of wood.

OO is great for problems where an interface applies naturally to a wide range of types, not so good for managing polymorphism (the machinations to get collections into OO languages are astounding to watch and can be hellish to work with), and remarkably ill-suited for network computing. That's why I reserve the right to match the language to the problem, and even - often - to coordinate software written in several languages towards solving a single problem.

It's that last point - different languages for different subproblems - that sometimes seems lost to the OO crowd. In a typical working day I probably use a half dozen languages - C, C++, Java, Python, Awk, Shell - and many more little languages you don't usually even think of as languages - regular expressions, Makefiles, shell wildcards, arithmetic, logic, statistics, calculus - the list goes on.

Does object-oriented design have much to say to Unix? Sure, but no more than functions or concurrency or databases or pattern matching or little languages or....

Regardless of what I think, though, OO design is the way people are taught to think about computing these days. I guess that's OK - the work does seem to get done, after all - but I wish the view was a little broader.

... ... ...

One tool for one job? - by sczimme
Given the nature of current operating systems and applications, do you think the idea of "one tool doing one job well" has been abandoned? If so, do you think a return to this model would help bring some innovation back to software development?

(It's easier to toss a small, single-purpose app and start over than it is to toss a large, feature-laden app and start over.)

Pike:
Those days are dead and gone and the eulogy was delivered by Perl.

... ... ...

Ken Thompson and I started Plan 9 as an answer to that question. The major things we saw wrong with Unix when we started talking about what would become Plan 9, back around 1985, all stemmed from the appearance of a network. As a stand-alone system, Unix was pretty good. But when you networked Unix machines together, you got a network of stand-alone systems instead of a seamless, integrated networked system. Instead of one big file system, one user community, one secure setup uniting your network of machines, you had a hodgepodge of workarounds to Unix's fundamental design decision that each machine is self-sufficient.

Nothing's really changed today. The workarounds have become smoother and some of the things we can do with networks of Unix machines are pretty impressive, but when ssh is the foundation of your security architecture, you know things aren't working as they should.

Looking at things from a lower altitude:

I didn't use Unix at all, really, from about 1990 until 2002, when I joined Google. (I worked entirely on Plan 9, which I still believe does a pretty good job of solving those fundamental problems.) I was surprised when I came back to Unix how many of even the little things that were annoying in 1990 continue to annoy today. In 1975, when the argument vector had to live in a 512-byte block, the 6th Edition system would often complain, 'arg list too long'. But today, when machines have gigabytes of memory, I still see that silly message far too often. The argument list is now limited somewhere north of 100K on the Linux machines I use at work, but come on people, dynamic memory allocation is a done deal!
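The limit Pike is complaining about is ARG_MAX, the kernel's cap on the combined byte size of argv and the environment; a glob that expands past it fails with E2BIG. The classic workaround streams file names through a pipe instead of expanding them into one argument list. A small sketch (the /tmp/argdemo directory is invented, and -n 2 forces absurdly tiny batches purely for illustration):

```shell
# 'arg list too long' (E2BIG) happens when a glob expands past ARG_MAX.
getconf ARG_MAX   # print the current limit, in bytes

rm -rf /tmp/argdemo && mkdir -p /tmp/argdemo
for i in 1 2 3 4 5; do : > "/tmp/argdemo/f$i"; done

# xargs batches the names so no single exec() call can exceed the limit;
# find -exec ... + achieves the same thing.
count=$(find /tmp/argdemo -type f | xargs -n 2 ls | wc -l)
echo "$count"
```

The irony, of course, is Pike's point: the workaround is routine precisely because the underlying limit was never removed.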

I started keeping a list of these annoyances but it got too long and depressing so I just learned to live with them again. We really are using a 1970s-era operating system well past its sell-by date. We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot, while lots of different, great ideas about computing and networks have been developed in the last 30 years. Using Unix is the computing equivalent of listening only to music by David Cassidy.

... ... ...

At the risk of contradicting my last answer a little, let me ask you back: Does the kernel matter any more? I don't think it does. They're all the same at some level. I don't care nearly as much as I used to about what the kernel does; it's so easy to emulate your way back to a familiar state.

Applications - web browsers, MP3 players, games, all that jazz - and networks are where the action is today, and aside from irritating little incompatibilities, the kernel has become a commodity. Almost all the programs I care about can run above Windows, Unix, Plan 9, and on PCs, Macs, palmtops and more. And that, of course, is why these all have a POSIX interface: so they can support those applications.

And then there are the standard network protocols to glue things together. It's all a uniform sea of interoperability (and bugs).

I think the future lies in new hardware as much as in new software. A generation from now machines will be so much more portable than they are now, so much more powerful, so much more interactive that we haven't begun to think about the changes they will bring. This may be the biggest threat to Microsoft: the PC, the desktop, the laptop, will all go the way of the slide rule. As one example, when flexible organic semiconductor displays roll out in a few years, the transformation in how and where people use computers and other devices will be amazing. It's going to be a wild ride.

not the Rob I (don't) know (Score:5, Funny)
by DrSkwid (118965) on Monday October 18, @12:26PM (#10556931)
(http://maht.dotgeek.org/ | Last Journal: Monday September 15, @01:30PM)

object-oriented design is the roman numerals of computing.

-- Rob Pike

and seeing as he mentioned perl :

> To me perl is the triumph of utilitarianism.

So are cockroaches. So is `sendmail'.

-- jwz [http://groups.google.com/groups?selm=33F4D777.7BF84EA3%40netscape.com]

Re:Emacs or vi (Score:5, Funny)
by harlows_monkeys (106428) on Monday October 18, @02:36PM (#10558073)
(http://www.tzs.net/)
Great answer to that question: Neither, he wrote his own (twice!), and wrote papers about the products

Once upon a time, at Caltech High Energy Physics, where Rob Pike had worked before going to Bell, two programmers (me and Karl Heuer) were bitching about existing editors, each claiming he could do better. That night, it got to the "oh yeah...prove it!" stage, and both sat down to write editors. By morning, they were each using their respective editors on themselves. Norman Wilson at cithep had kept in contact with Pike, and told him of this spate of editor hacking. Note that this was well before Pike did his jim and sam stuff.

Pike wrote back something like this: "writing a screen editor is fun and easy and makes them feel important. Tell them to work on something useful".

We were quite amused when we found out that Rob went on to write editors, instead of sticking to "something useful"!

Re:The Unix Room (Score:4, Insightful)
by nthomas (10354) <[email protected]> on Monday October 18, @01:12PM (#10557351)
(http://www.cise.ufl.edu/~nthomas)
I was particularly struck by the story of the Unix Room where all the Unix people hung out.

Fascinating.

I was at Columbia University last week for a meeting sponsored by the local ACM chapter [columbia.edu] and LXNY [lxny.org], the speaker was Stephen Bourne [eldoradoventures.com] (he who is sh [wikipedia.org]).

At some point during his excellent talk on the history of Unix and his place in it, someone asked what he thought was the reason for the success of the operating system. Without hesitating, he talked about the room where all the terminals were located (he never specifically referred to it as the "Unix room", though) and how, when you released software, it was used immediately by those in the room; if something broke, you were called "idiot" (and probably worse) by your peers. It was in your best interest to make sure you didn't put out junk, as you really didn't have the dilution of responsibility that engineers have in a large corporation, where the design team is in one wing of the building, the coders are in another, the testers are in yet another location, etc.

It was a great speech; anyone who hasn't seen Dr. Bourne speak should do so, as he is an excellent source of insight into the early years of Unix and software engineering in general. He is now working for a venture capital firm, and roughly a third of his talk was spent on that; it's a testament to his great speaking skills that most of the people in the room didn't lose interest when he switched topics like that (I'm convinced that most hackers suffer from ADD).

Thomas

He didn't say why... (Score:4, Interesting)
by argent (18001) <[email protected]> on Monday October 18, @03:05PM (#10558306)
(http://www.scarydevil.com/~peter/)
Perl hardly refutes it, when in the previous question he gave a laundry list of tools and Perl wasn't on it... and Awk was.

I really think he was evading the answer.

The real answer is that you need a framework that lets you connect the tools together easily before you can use a software tools approach. For the command line era, that framework was the UNIX shell. For the GUI era, there really hasn't been a popular framework that's also portable. ARexx, Plan 9, AppleScript, these seem to be the best frameworks I've seen so far, and they're all isolated to ghettoes... we're still waiting for the GUI equivalent of the UNIX shell.

One tool for one job? (Score:5, Funny)
by Samrobb (12731) on Monday October 18, @01:19PM (#10557415)
(http://www.pghgeeks.org/ | Last Journal: Friday October 22, @04:12PM)

Those days are dead and gone and the eulogy was delivered by Perl.

Hey! Perl still adheres to the "one tool for one job" metaphor.

It's just that Perl's "one job" seems to be defined as "replace all the other tools"...

Plan 9, Unix may not have it, but another OS does (Score:3, Interesting)
by Chris_Keene (87914) on Monday October 18, @01:37PM (#10557591)
(http://www.nostuff.org/)
"Instead of one big file system, one user community, one secure setup uniting your network of machines, you had a hodgepodge of workarounds to Unix's fundamental design decision that each machine is self-sufficient."

I hate to say this, but doesn't Windows 2000/2003 Server, Active Directory (and Novell NDS, etc.) do a lot of this? One set of users, a network of machines (without being reliant on one master machine*), and one security model. Maybe not quite there on 'one big file system', though that can basically be achieved with a bit of setting up.

(* I haven't managed a Windows domain for a few years; I seem to remember 2k had a PDC-like machine as such, but also with backup servers - ready to take over.)

Yes, you are nearly right. (Score:2, Informative)
by Medievalist (16032) on Monday October 18, @02:00PM (#10558251)
Novell pioneered the functional and useful PC implementation of this idea with NDS, which was created by hacking the bejabbers out of a genealogical database created by the Mormons (AKA the Church of Jesus Christ of Latter-day Saints).

NDS inherited limitations from the Mormon theology (for example: the concept of multiple roots is anathema in a genealogical database designed to relate all the descendants of Adam. Thus NDS could not handle multiple roots and separate trees had to be merged in order to inhabit the same database).

The pioneering theoretical work was the X.500 project, but the programmers on that project were such an arrogant bunch of egotists that they pissed off nearly everyone they came into contact with (for example, with their annoying insistence that only X.500 could be called "the Directory" (capital D) and that all the world's existing documentation must be revised to remove references to directories (small d) where the word "folder" could be used instead).

Although the X.500 project did produce a software implementation, nearly everyone hated it (although mostly because so many people had been thoroughly antagonized by the attitudes of the X.500 gurus) and thought it was too bloated. LDAP was born, in reaction to this situation, as a means of communicating directory information from any arbitrary database - it let you do the important parts of X.500 without having to run their code.

Active Directory, Microsoft's entry, is a poor stepchild of NDS that uses LDAP (with purposely arcane schema) and incorporates MIT's Kerberos (a good thing) and embodies Microsoft's policy of "embrace and extend" (a bad thing). It is in many ways an inefficient and poorly structured system, but it functions reasonably well in an all-MS environment.

So, yes, you can do this rather nicely and efficiently under Novell (in which case you have to fight the battle of Microsoft compatibility since Novell is permanently locked in a death struggle with MS's server group) or very sloppily and inefficiently under Microsoft (see Andrew Tridgell's many commentaries on the shortcomings of the CIFS and SMB pseudo-standards).

You can use OpenLDAP and Samba to get pretty much what you are talking about without using either MS or Novell in the server room. MS still rules the desktop, though, because they have the luser mindshare (it's cheaper to hire people who already know MS than to train people to use something else).

this MUST go into fortune (Score:1)
by nazsco (695026) on Monday October 18, @03:58PM (#10558836)
(Last Journal: Thursday February 12, @03:06PM)
"We get a lot done, and we have fun, but let's face it, the fundamental design of Unix is older than many of the readers of Slashdot"
-- Rob Pike

"Using Unix is the computing equivalent of listening only to music by David Cassidy."
-- Rob Pike

Re: Object Oriented Programming (Score:1)
by Slipped_Disk (532132) on Monday October 18, @05:25PM (#10559509)
(http://www.bsd-box.net/ | Last Journal: Tuesday August 19, @08:02PM)
Quote:
OO is great for problems where an interface applies naturally to a wide range of types, not so good for managing polymorphism (the machinations to get collections into OO languages are astounding to watch and can be hellish to work with), and remarkably ill-suited for network computing.
---

Perl solves half of this problem (collections are relatively easy to throw together); unfortunately, the other half (making it network-computing friendly) is still a royal pain in the posterior.

While I'm not a huge fan of OOP, perhaps future research will conquer this gap and make it somewhat more palatable.

[Dec 29, 2004] Windows admin made easy By Jon Udell

October 29, 2004 | InfoWorld

Jeffrey Snover thought so, too. He's the architect of Monad, aka MSH (Microsoft Shell), the radical new Windows command shell first shown at the Professional Developers Conference last fall.

A meeting with Snover piqued my interest, as did his video demonstration on MSDN's Channel 9. This week, I finally got a chance to give this new tool a try. You don't need Longhorn, by the way. I'm running MSH on Windows XP with the 2.0 version of the .Net Framework that comes with the Visual Studio 2005 beta. My first reaction? MSH rocks. System administration on Windows, and ultimately everywhere, will be forever changed for the better.

At its core, MSH is an object pipeline. Unix, of course, invented the pipelining concept. But in Unix-like systems -- including Linux and OS X -- the data that's passed from one command to the next is weakly structured ASCII text. When you've got smart, self-describing objects flowing through that pipeline, it's a whole new ball game.

Recently, for example, I wrote a typical management script for a Windows 2003 Server. Its job was to filter a subset of running processes by name and then kill any of those exceeding a memory threshold. Using Python in conjunction with Mark Russinovich's PsTools suite, I did this in the time-honored way: List the processes, split the output into columns, associate the first and third columns with process name and memory use, and then finally identify and kill memory hogs.

Here's the drill with MSH. You start by typing the command get-process. As does Unix's ps command, get-process prints a bunch of process statistics on the console. But under the covers, it emits a sequence of System.Diagnostics.Process objects. You can look up that class in the .Net Framework documentation, but it's easier to just ask an instance of the object to display its methods and properties. If there are too many to view conveniently on the console, pipe the information to an HTML file, a .Net grid control, or an Excel worksheet.

When you've identified the properties of interest, it's trivial to isolate them, filter them, and pipe the filtered subset to the stop-process command. With a pipeline full of .Net objects, it's all oddly familiar yet dramatically easier.
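For comparison, the "time-honored" text-pipeline drill Udell describes boils down to column surgery with awk. This is a hedged sketch that uses canned ps-style output so it runs deterministically; the process name "myapp" and the 100000 KB threshold are invented, and the matching pids are printed rather than killed:

```shell
# Simulated `ps -eo pid=,comm=,rss=` output: pid, name, resident memory (KB).
ps_out='101 myapp 250000
102 myapp 50000
103 other 999999'

# The drill Udell describes: split into columns, match on name ($2),
# compare memory ($3). In real use the result would feed `xargs kill`.
victims=$(printf '%s\n' "$ps_out" | awk '$2 == "myapp" && $3 > 100000 { print $1 }')
echo "$victims"
```

The fragility is the point of the contrast: this script silently breaks if ps column order changes, whereas the MSH version asks each Process object for its properties by name.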

MSH is quirky, complex, delightful, and utterly addictive. You can, for example, convert objects to and from XML so that programs that don't natively speak .Net can have a crack at them. There's SQL-like sorting and grouping. You write ad hoc extensions in a built-in scripting language that feels vaguely Perlish. For more permanent extensions, called cmdlets, you use .Net languages.

With MSH, Windows system administration manages to be both fun and productive. And the story will only improve as the .Net Framework continues to enfold Windows' management APIs. Competitors take note: Windows is about to convert one of its great weaknesses into a strength.

Codename MONAD

In one of the most overlooked cool things at the PDC (in my opinion, anyway), the new Command Shell that will be in Longhorn blew me away when I saw it. I walked up to the booth asking if unix-like file aliases would be in the new shell, and was given a demo by the team that had my mind racing.

First off, file aliases are possible. WinFS type queries are possible through new commands called "commandlets" that you can write. Similar to the unix pipe, you can do this with MSH (Microsoft shell / codename MONAD) as well. Query results are actually .NET objects, so you can do things like (Don't quote me on the syntax; I'm working from memory here):

$p = get-process
$p[5].ToString()
foreach ($proc in $p) { $proc.ToString() }

A rather simple example, but consider that you can do this from the command line!

You can do WinFS filtering through the "|" symbol. MONAD can also export natively to: HTML, XML, Excel, or plain command text in either a Table or List format.

And....the commandlets are developer friendly. You can make a commandlet by inheriting from the commandlet base class, and adding attribute tags to the public properties to make them parameters to the commandlet. .NET handles whether the user types "-?" or "/?", so you don't have to care anymore!

I was all set to post that I had attended DOS's funeral after the keynote on Monday, but I wasn't prepared for what was being created to replace it.

One last thing: anything can be mapped to a drive, and drives don't just have to be letters. (Ok, I lied - that was 2) The example I was shown was that the registry was mapped to a drive, and you could navigate it like any other drive, with the results being returned from the commandlet as .NET objects!

# re: Codename "MONAD" 10/31/2003 10:17 AM TomH

I've been doing the equivalent on Windows and Solaris with TkCon/tclsh for years.

# re: Codename "MONAD" 10/31/2003 10:19 AM Batgar

"Good artists copy, great artists steal"
-- Steve Jobs, paraphrasing Picasso

MSFT is not blind. They see OS X 10.2 (and OS X 10.3), Mandrake 9.x, RedHat 9.x, and they know where the bar is set.

MSFT has something that even Apple doesn't have: $40 billion cash.

MSFT can afford to create ANYTHING. The Longhorn command line is only the beginning. What little we have seen and heard of Longhorn is only the beginning (ex. WinFS). Remember: MSFT has been planning the major features in Longhorn since 1993 (ex. Cairo). MSFT is going to release Longhorn in direct response to the threats posed by Apple and the Open Source community. MSFT doesn't respond to perceived threats with a "fly swatter attack"; they pull out the 80 megaton nuke and say "Come Get Some!".

Never underestimate Microsoft. The .NET technology suite should remind you that, when threatened, MSFT brings out the heavy stuff against that threat.

Coming Soon to Windows The Microsoft Shell (MSH)

With the advent of a new Command Line Interface (CLI) called MSH, Microsoft has greatly simplified server administration. Systems administrators can now learn about .NET classes, while developers can easily extend the functionality by adding "MONAD" commandlets and providers; this is known as the "Glide Path". This article will show the syntax usage for the CLI as well as how to create basic commandlets.

See also Navigating Objects within the Microsoft Shell (MSH)



Etc

Society

Groupthink : Two Party System as Polyarchy : Corruption of Regulators : Bureaucracies : Understanding Micromanagers and Control Freaks : Toxic Managers :   Harvard Mafia : Diplomatic Communication : Surviving a Bad Performance Review : Insufficient Retirement Funds as Immanent Problem of Neoliberal Regime : PseudoScience : Who Rules America : Neoliberalism  : The Iron Law of Oligarchy : Libertarian Philosophy

Quotes

War and Peace : Skeptical Finance : John Kenneth Galbraith :Talleyrand : Oscar Wilde : Otto Von Bismarck : Keynes : George Carlin : Skeptics : Propaganda  : SE quotes : Language Design and Programming Quotes : Random IT-related quotesSomerset Maugham : Marcus Aurelius : Kurt Vonnegut : Eric Hoffer : Winston Churchill : Napoleon Bonaparte : Ambrose BierceBernard Shaw : Mark Twain Quotes

Bulletin:

Vol 25, No.12 (December, 2013) Rational Fools vs. Efficient Crooks The efficient markets hypothesis : Political Skeptic Bulletin, 2013 : Unemployment Bulletin, 2010 :  Vol 23, No.10 (October, 2011) An observation about corporate security departments : Slightly Skeptical Euromaydan Chronicles, June 2014 : Greenspan legacy bulletin, 2008 : Vol 25, No.10 (October, 2013) Cryptolocker Trojan (Win32/Crilock.A) : Vol 25, No.08 (August, 2013) Cloud providers as intelligence collection hubs : Financial Humor Bulletin, 2010 : Inequality Bulletin, 2009 : Financial Humor Bulletin, 2008 : Copyleft Problems Bulletin, 2004 : Financial Humor Bulletin, 2011 : Energy Bulletin, 2010 : Malware Protection Bulletin, 2010 : Vol 26, No.1 (January, 2013) Object-Oriented Cult : Political Skeptic Bulletin, 2011 : Vol 23, No.11 (November, 2011) Softpanorama classification of sysadmin horror stories : Vol 25, No.05 (May, 2013) Corporate bullshit as a communication method  : Vol 25, No.06 (June, 2013) A Note on the Relationship of Brooks Law and Conway Law

History:

Fifty glorious years (1950-2000): the triumph of the US computer engineering : Donald Knuth : TAoCP and its Influence of Computer Science : Richard Stallman : Linus Torvalds  : Larry Wall  : John K. Ousterhout : CTSS : Multix OS Unix History : Unix shell history : VI editor : History of pipes concept : Solaris : MS DOSProgramming Languages History : PL/1 : Simula 67 : C : History of GCC developmentScripting Languages : Perl history   : OS History : Mail : DNS : SSH : CPU Instruction Sets : SPARC systems 1987-2006 : Norton Commander : Norton Utilities : Norton Ghost : Frontpage history : Malware Defense History : GNU Screen : OSS early history

Classic books:

The Peter Principle : Parkinson Law : 1984 : The Mythical Man-MonthHow to Solve It by George Polya : The Art of Computer Programming : The Elements of Programming Style : The Unix Hater’s Handbook : The Jargon file : The True Believer : Programming Pearls : The Good Soldier Svejk : The Power Elite

Most popular humor pages:

Manifest of the Softpanorama IT Slacker Society : Ten Commandments of the IT Slackers Society : Computer Humor Collection : BSD Logo Story : The Cuckoo's Egg : IT Slang : C++ Humor : ARE YOU A BBS ADDICT? : The Perl Purity Test : Object oriented programmers of all nations : Financial Humor : Financial Humor Bulletin, 2008 : Financial Humor Bulletin, 2010 : The Most Comprehensive Collection of Editor-related Humor : Programming Language Humor : Goldman Sachs related humor : Greenspan humor : C Humor : Scripting Humor : Real Programmers Humor : Web Humor : GPL-related Humor : OFM Humor : Politically Incorrect Humor : IDS Humor : "Linux Sucks" Humor : Russian Musical Humor : Best Russian Programmer Humor : Microsoft plans to buy Catholic Church : Richard Stallman Related Humor : Admin Humor : Perl-related Humor : Linus Torvalds Related humor : PseudoScience Related Humor : Networking Humor : Shell Humor : Financial Humor Bulletin, 2011 : Financial Humor Bulletin, 2012 : Financial Humor Bulletin, 2013 : Java Humor : Software Engineering Humor : Sun Solaris Related Humor : Education Humor : IBM Humor : Assembler-related Humor : VIM Humor : Computer Viruses Humor : Bright tomorrow is rescheduled to a day after tomorrow : Classic Computer Humor

The Last but not Least. Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand ~ Archibald Putt, Ph.D.


Copyright © 1996-2021 by Softpanorama Society. www.softpanorama.org was initially created as a service to the (now defunct) UN Sustainable Development Networking Programme (SDNP) without any remuneration. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. Copyright of original materials belongs to their respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine.

FAIR USE NOTICE This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available to advance understanding of computer science, IT technology, economic, scientific, and social issues. We believe this constitutes a 'fair use' of any such copyrighted material as provided by section 107 of the US Copyright Law according to which such material can be distributed without profit exclusively for research and educational purposes.

This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links, as it develops like a living tree...

You can use PayPal to buy a cup of coffee for the authors of this site

Disclaimer:

The statements, views and opinions presented on this web page are those of the author (or referenced source) and are not endorsed by, nor do they necessarily reflect, the opinions of the Softpanorama society. We do not warrant the correctness of the information provided or its fitness for any purpose. The site uses AdSense, so you need to be aware of the Google privacy policy. If you do not want to be tracked by Google, please disable Javascript for this site. This site is perfectly usable without Javascript.

Last modified: March 12, 2019