Softpanorama

May the source be with you, but remember the KISS principle ;-)
(slightly skeptical) Educational society promoting "Back to basics" movement against IT overcomplexity and  bastardization of classic Unix

Python Debugging


Introduction

The module pdb (source code: Lib/pdb.py) defines an interactive debugger for Python programs. It supports setting (conditional) breakpoints and single stepping at the source line level, inspection of stack frames, source code listing, and evaluation of arbitrary Python code in the context of any stack frame. It also supports post-mortem debugging and can be called under program control.

The debugger is extensible — it is actually defined as the class Pdb. This is currently undocumented but easily understood by reading the source. The extension interface uses the modules bdb and cmd.

The debugger’s prompt is (Pdb). Typical usage to run a program under control of the debugger is:

>>> import pdb
>>> import mymodule
>>> pdb.run('mymodule.test()')
> <string>(0)?()
(Pdb) continue
> <string>(1)?()
(Pdb) continue
NameError: 'spam'
> <string>(1)?()
(Pdb)

pdb.py can also be invoked as a script to debug other scripts, using the -m option. For example:

python -m pdb myscript.py

When invoked as a script, pdb will automatically enter post-mortem debugging if the program being debugged exits abnormally. After post-mortem debugging (or after normal exit of the program), pdb will restart the program. Automatic restarting preserves pdb’s state (such as breakpoints) and in most cases is more useful than quitting the debugger upon program’s exit.

Dynamic invocation of the debugger from a running program

The typical usage to break into the debugger from a running program is to insert

import pdb; pdb.set_trace()

at the location you want to break into the debugger. You can then step through the code following this statement, and continue running without the debugger using the c command.
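
As a minimal sketch (the script and variable names here are invented for illustration), a hard-coded breakpoint might look like this:

# report.py -- hypothetical script, used only to illustrate pdb.set_trace()
import pdb

def compute_total(prices, tax_rate):
    subtotal = sum(prices)
    pdb.set_trace()        # execution stops here; inspect subtotal, tax_rate, etc.
    return subtotal * (1 + tax_rate)

print(compute_total([9.99, 24.50], 0.07))

At the (Pdb) prompt you can print subtotal with p, step with n or s, or type c to let the program run to completion.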

The typical usage to inspect a crashed program is:

>>> import pdb
>>> import mymodule
>>> mymodule.test()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "./mymodule.py", line 4, in test
    test2()
  File "./mymodule.py", line 3, in test2
    print spam
NameError: spam
>>> pdb.pm()
> ./mymodule.py(3)test2()
-> print spam
(Pdb)

The module defines the following functions; each enters the debugger in a slightly different way:

pdb.run(statement[, globals[, locals]])
Execute the statement (given as a string) under debugger control. The debugger prompt appears before any code is executed; you can set breakpoints and type continue, or you can step through the statement using step or next (all these commands are explained below). The optional globals and locals arguments specify the environment in which the code is executed; by default the dictionary of the module __main__ is used. (See the explanation of the exec statement or the eval() built-in function.)
pdb.runeval(expression[, globals[, locals]])
Evaluate the expression (given as a string) under debugger control. When runeval() returns, it returns the value of the expression. Otherwise this function is similar to run().
pdb.runcall(function[, argument, ...])
Call the function (a function or method object, not a string) with the given arguments. When runcall() returns, it returns whatever the function call returned. The debugger prompt appears as soon as the function is entered.
pdb.set_trace()
Enter the debugger at the calling stack frame. This is useful to hard-code a breakpoint at a given point in a program, even if the code is not otherwise being debugged (e.g. when an assertion fails).
pdb.post_mortem([traceback])
Enter post-mortem debugging of the given traceback object. If no traceback is given, it uses the one of the exception that is currently being handled (an exception must be being handled if the default is to be used).
pdb.pm()
Enter post-mortem debugging of the traceback found in sys.last_traceback.
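
For instance, a small sketch of runcall() (the function and arguments are made up for the example):

import pdb

def divide(numerator, denominator):
    return float(numerator) / denominator

# The debugger prompt appears on the first line of divide(); step with s or n,
# inspect the arguments with p, then type c to let the call finish.
result = pdb.runcall(divide, 10, 4)
print(result)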

The run* functions and set_trace() are aliases for instantiating the Pdb class and calling the method of the same name. If you want to access further features, you have to do this yourself:

class pdb.Pdb(completekey='tab', stdin=None, stdout=None, skip=None)
Pdb is the debugger class.

The completekey, stdin and stdout arguments are passed to the underlying cmd.Cmd class; see the description there.

The skip argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. [1]

Example call to enable tracing with skip:

import pdb; pdb.Pdb(skip=['django.*']).set_trace()
New in version 2.7: The skip argument.
run(statement[, globals[, locals]])
runeval(expression[, globals[, locals]])
runcall(function[, argument, ...])
set_trace()
See the documentation for the functions explained above.

Debugger Commands

NOTE:

The debugger recognizes the following commands. Most commands can be abbreviated to one or two letters; e.g. h(elp) means that either h or help can be used to enter the help command (but not he or hel, nor H or Help or HELP). Arguments to commands must be separated by whitespace (spaces or tabs). Optional arguments are enclosed in square brackets ([]) in the command syntax; the square brackets must not be typed. Alternatives in the command syntax are separated by a vertical bar (|).

Entering a blank line repeats the last command entered. Exception: if the last command was a list command, the next 11 lines are listed.

Commands that the debugger doesn’t recognize are assumed to be Python statements and are executed in the context of the program being debugged.

Python statements can also be prefixed with an exclamation point (!). This is a powerful way to inspect the program being debugged; it is even possible to change a variable or call a function. When an exception occurs in such a statement, the exception name is printed but the debugger’s state is not changed.

Multiple commands may be entered on a single line, separated by ;;. (A single ; is not used as it is the separator for multiple commands in a line that is passed to the Python parser.) No intelligence is applied to separating the commands; the input is split at the first ;; pair, even if it is in the middle of a quoted string.

The debugger supports aliases. Aliases can have parameters which allows one a certain level of adaptability to the context under examination.

If a file .pdbrc exists in the user’s home directory or in the current directory, it is read in and executed as if it had been typed at the debugger prompt. This is particularly useful for aliases. If both files exist, the one in the home directory is read first and aliases defined there can be overridden by the local file.

h(elp) [command]
Without argument, print the list of available commands. With a command as argument, print help about that command. help pdb displays the full documentation file; if the environment variable PAGER is defined, the file is piped through that command instead. Since the command argument must be an identifier, help exec must be entered to get help on the ! command.
w(here)
Print a stack trace, with the most recent frame at the bottom. An arrow indicates the current frame, which determines the context of most commands.
d(own)
Move the current frame one level down in the stack trace (to a newer frame).
u(p)
Move the current frame one level up in the stack trace (to an older frame).
b(reak) [[filename:]lineno | function[, condition]]

With a lineno argument, set a break there in the current file. With a function argument, set a break at the first executable statement within that function. The line number may be prefixed with a filename and a colon, to specify a breakpoint in another file (probably one that hasn’t been loaded yet). The file is searched on sys.path. Note that each breakpoint is assigned a number to which all the other breakpoint commands refer.

If a second argument is present, it is an expression which must evaluate to true before the breakpoint is honored.

Without argument, list all breaks, including for each breakpoint, the number of times that breakpoint has been hit, the current ignore count, and the associated condition if any.
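
A short illustrative session (the file name, line numbers, function and condition are all hypothetical, and the exact confirmation messages vary slightly between Python versions):

(Pdb) break myscript.py:42
Breakpoint 1 at ./myscript.py:42
(Pdb) break process_order, total > 1000
Breakpoint 2 at ./myscript.py:17
(Pdb) break
Num Type         Disp Enab   Where
1   breakpoint   keep yes   at ./myscript.py:42
2   breakpoint   keep yes   at ./myscript.py:17
        stop only if total > 1000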

tbreak [[filename:]lineno | function[, condition]]
Temporary breakpoint, which is removed automatically when it is first hit. The arguments are the same as break.
cl(ear) [filename:lineno | bpnumber [bpnumber …]]
With a filename:lineno argument, clear all the breakpoints at this line. With a space separated list of breakpoint numbers, clear those breakpoints. Without argument, clear all breaks (but first ask confirmation).
disable [bpnumber [bpnumber …]]
Disables the breakpoints given as a space separated list of breakpoint numbers. Disabling a breakpoint means it cannot cause the program to stop execution, but unlike clearing a breakpoint, it remains in the list of breakpoints and can be (re-)enabled.
enable [bpnumber [bpnumber …]]
Enables the breakpoints specified.
ignore bpnumber [count]
Sets the ignore count for the given breakpoint number. If count is omitted, the ignore count is set to 0. A breakpoint becomes active when the ignore count is zero. When non-zero, the count is decremented each time the breakpoint is reached and the breakpoint is not disabled and any associated condition evaluates to true.
condition bpnumber [condition]
Condition is an expression which must evaluate to true before the breakpoint is honored. If condition is absent, any existing condition is removed; i.e., the breakpoint is made unconditional.
commands [bpnumber]

Specify a list of commands for breakpoint number bpnumber. The commands themselves appear on the following lines. Type a line containing just ‘end’ to terminate the commands. An example:

(Pdb) commands 1
(com) print some_variable
(com) end
(Pdb)

To remove all commands from a breakpoint, type commands and follow it immediately with end; that is, give no commands.

With no bpnumber argument, commands refers to the last breakpoint set.

You can use breakpoint commands to start your program up again. Simply use the continue command, or step, or any other command that resumes execution.

Specifying any command resuming execution (currently continue, step, next, return, jump, quit and their abbreviations) terminates the command list (as if that command was immediately followed by end). This is because any time you resume execution (even with a simple next or step), you may encounter another breakpoint—which could have its own command list, leading to ambiguities about which list to execute.

If you use the ‘silent’ command in the command list, the usual message about stopping at a breakpoint is not printed. This may be desirable for breakpoints that are to print a specific message and then continue. If none of the other commands print anything, you see no sign that the breakpoint was reached.
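
For instance, to log a value at a breakpoint and keep running without the usual stop message (breakpoint number 2 and the variable total are hypothetical):

(Pdb) commands 2
(com) silent
(com) print "reached checkpoint, total =", total
(com) continue
(com) end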

s(tep)
Execute the current line, stop at the first possible occasion (either in a function that is called or on the next line in the current function).
n(ext)
Continue execution until the next line in the current function is reached or it returns. (The difference between next and step is that step stops inside a called function, while next executes called functions at (nearly) full speed, only stopping at the next line in the current function.)
unt(il)

Continue execution until the line with the line number greater than the current one is reached or when returning from current frame.

r(eturn)
Continue execution until the current function returns.
c(ont(inue))
Continue execution, only stop when a breakpoint is encountered.
j(ump) lineno

Set the next line that will be executed. Only available in the bottom-most frame. This lets you jump back and execute code again, or jump forward to skip code that you don’t want to run.

It should be noted that not all jumps are allowed — for instance it is not possible to jump into the middle of a for loop or out of a finally clause.

l(ist) [first[, last]]
List source code for the current file. Without arguments, list 11 lines around the current line or continue the previous listing. With one argument, list 11 lines around at that line. With two arguments, list the given range; if the second argument is less than the first, it is interpreted as a count.
a(rgs)
Print the argument list of the current function.
p  expression

Evaluate the expression in the current context and print its value.

Note

print can also be used, but is not a debugger command — this executes the Python print statement.

pp  expression
Like the p command, except the value of the expression is pretty-printed using the pprint module.
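
For example, with a nested dictionary (the variable config is hypothetical):

(Pdb) p config
{'retries': 3, 'hosts': ['alpha.example.com', 'beta.example.com'], 'timeouts': {'connect': 5, 'read': 30}}
(Pdb) pp config
{'hosts': ['alpha.example.com', 'beta.example.com'],
 'retries': 3,
 'timeouts': {'connect': 5, 'read': 30}}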
alias [name [command]]

Creates an alias called name that executes command. The command must not be enclosed in quotes. Replaceable parameters can be indicated by %1, %2, and so on, while %* is replaced by all the parameters. If no command is given, the current alias for name is shown. If no arguments are given, all aliases are listed.

Aliases may be nested and can contain anything that can be legally typed at the pdb prompt. Note that internal pdb commands can be overridden by aliases. Such a command is then hidden until the alias is removed. Aliasing is recursively applied to the first word of the command line; all other words in the line are left alone.

As an example, here are two useful aliases (especially when placed in the .pdbrc file):

#Print instance variables (usage "pi classInst")
alias pi for k in %1.__dict__.keys(): print "%1.",k,"=",%1.__dict__[k]
#Print instance variables in self
alias ps pi self
unalias name
Deletes the specified alias.
[!]statement

Execute the (one-line) statement in the context of the current stack frame. The exclamation point can be omitted unless the first word of the statement resembles a debugger command. To set a global variable, you can prefix the assignment command with a global command on the same line, e.g.:

(Pdb) global list_options; list_options = ['-l']
(Pdb)
run  [args …]

Restart the debugged Python program. If an argument is supplied, it is split with “shlex” and the result is used as the new sys.argv. History, breakpoints, actions and debugger options are preserved. “restart” is an alias for “run”.

New in version 2.6.
q(uit)
Quit from the debugger. The program being executed is aborted.

NOTES:

Whether a frame is considered to originate in a certain module is determined by the __name__ in the frame globals.


Old News ;-)

[Oct 05, 2020] Version 0.8 uploaded

Changes since version 0.7

More correct translation of array assignments; some non-obvious bugs in the translation were fixed. You now need to set the PERL5LIB variable to point to the directory with the modules in order to run the program. Global variables are now initialized to undef after the main sub in order to create a global namespace; previously this was done incorrectly. A simple installer for Python programmers who do not know much Perl was added: the program proved to be useful as an aid for Python programmers trying to understand Perl scripts.

Changes in pre_pythonizer.pl

Unlike previous versions, the current version by default does not create a main subroutine out of the statements found at nesting level zero, as doing so introduced some errors. You need to specify the -m option to create it.

NOTE: All Python statements at nesting level zero should start at the beginning of the line, which is ugly, but you can enclose them in a dummy if statement

if True: 

to create an artificial nesting level 1.
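
A minimal sketch of that trick (the statements themselves are invented for illustration):

if True:                 # dummy if to give the translated statements nesting level 1
    x = 1
    y = 2
    print(x + y)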

[Sep 12, 2020] Persistent syntax errors such as missing semicolon, missing closing brace and unclosed literal.

Sep 12, 2020 | www.sciencedirect.com

https://www.sciencedirect.com/science/article/abs/pii/0096055178900413

A statistical analysis of syntax errors - ScienceDirect

For example, approximately one-fourth of all original syntax errors in the Pascal sample were missing semicolons or use of a comma in place of a semicolon.

Related work on this class of errors includes:

Error log analysis in C programming language courses [PDF]
J. J. Horning, Programming Languages (book, 1979): over 14% of the faults occurring in TOPPS programs during the second half of the experiment were still semicolon faults (compared to 1% for TOPPSII), and missing semicolons were common.
S. O. Anderson, R. C. Backhouse, E. H. Bugge, "An assessment of locally least-cost error recovery", The Computer Journal (1983): in one recovery scheme the parser anticipates the possibility of a missing semicolon but not of a missing comma.
J. Segal, K. Ahmad, M. Rogers, "The role of systematic errors in developmental studies of programming language learners", Journal of Educational ... (1992): errors were classified by their surface characteristics into single-token categories; students were expected to experience considerable difficulties with semicolons.
C. Stirling, "Follow set error recovery", Software: Practice and Experience (1985).
M. C. Jadud, "A first look at novice compilation behaviour using BlueJ", Computer Science Education (2005).
A. Repenning, "Making programming more conversational", IEEE Symposium on Visual Languages (2011): miss one semicolon in a C program and the program may no longer work at all; visual programming environments can prevent syntactic mistakes such as missing semicolons or typos.

[Aug 27, 2020] The chapter on debugging from Think Python book

Aug 27, 2020 | www.reddit.com


I'd also highly recommend this chapter on debugging from Think Python book: https://greenteapress.com/thinkpython2/html/thinkpython2021.html

and this article: https://jvns.ca/blog/2019/06/23/a-few-debugging-resources/


[Aug 27, 2020] Rubber Duck Debugging

Aug 27, 2020 | rubberduckdebugging.com

The rubber duck debugging method is as follows:

  1. Beg, borrow, steal, buy, fabricate or otherwise obtain a rubber duck (bathtub variety).
  2. Place rubber duck on desk and inform it you are just going to go over some code with it, if that's all right.
  3. Explain to the duck what your code is supposed to do, and then go into detail and explain your code line by line.
  4. At some point you will tell the duck what you are doing next and then realise that that is not in fact what you are actually doing. The duck will sit there serenely, happy in the knowledge that it has helped you on your way.

Note: In a pinch a coworker might be able to substitute for the duck; however, it is often preferred to confide mistakes to the duck instead of your coworker.

Original Credit : ~Andy from lists.ethernal.org


[Aug 15, 2020] Facebook open-sources a static analyzer for Python code - Help Net Security

Aug 15, 2020 | www.helpnetsecurity.com

Facebook has open-sourced Pysa (Python Static Analyzer), a tool that looks at how data flows through the code and helps developers prevent data flowing into places it shouldn't.

... "Pysa performs iterative rounds of analysis to build summaries to determine which functions return data from a source and which functions have parameters that eventually reach a sink. If Pysa finds that a source eventually connects to a sink, it reports an issue."

You can get Pysa from here and you can use a number of already developed definitions to help it find security issues.
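
To make the source/sink idea concrete, here is a hypothetical (and deliberately unsafe) fragment showing the kind of flow such a taint analyzer is meant to flag; the function names are invented, and this is neither Pysa configuration nor Pysa output:

import subprocess

def get_filename(request):
    # "source": data controlled by the user
    return request.GET["filename"]

def show_file(request):
    filename = get_filename(request)
    # "sink": user-controlled data reaches a shell command (command injection risk)
    return subprocess.check_output("cat " + filename, shell=True)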

[Aug 11, 2020] Python debugger: Stepping into a function that you have called interactively

Jan 01, 2008 | stackoverflow.com



2008-10-23 05:35:32

Python is quite cool, but unfortunately, its debugger is not as good as perl -d.

One thing that I do very commonly when experimenting with code is to call a function from within the debugger, and step into that function, like so:

# NOTE THAT THIS PROGRAM EXITS IMMEDIATELY WITHOUT CALLING FOO()
~> cat -n /tmp/show_perl.pl
1  #!/usr/local/bin/perl
2
3  sub foo {
4      print "hi\n";
5      print "bye\n";
6  }
7
8  exit 0;

~> perl -d /tmp/show_perl.pl

Loading DB routines from perl5db.pl version 1.28
Editor support available.

Enter h or `h h' for help, or `man perldebug' for more help.

main::(/tmp/show_perl.pl:8):    exit 0;

# MAGIC HAPPENS HERE -- I AM STEPPING INTO A FUNCTION THAT I AM CALLING INTERACTIVELY
  DB<1> s foo() 
main::((eval 6)[/usr/local/lib/perl5/5.8.6/perl5db.pl:628]:3):
3:      foo();


  DB<<2>> s
main::foo(/tmp/show_perl.pl:4):     print "hi\n";


  DB<<2>> n
hi
main::foo(/tmp/show_perl.pl:5):     print "bye\n";


  DB<<2>> n
bye


  DB<2> n
Debugged program terminated.  Use q to quit or R to restart,
  use O inhibit_exit to avoid stopping after program termination,
  h q, h R or h O to get additional info.  

  DB<2> q

This is incredibly useful when trying to step through a function's handling of various different inputs to figure out why it fails. However, it does not seem to work in either pdb or pydb (I'd show an equivalent python example to the one above but it results in a large exception stack dump).

So my question is twofold:

  1. Am I missing something?
  2. Is there a python debugger that would indeed let me do this?

Obviously I could put the calls in the code myself, but I love working interactively, eg. not having to start from scratch when I want to try calling with a slightly different set of arguments.


2008-10-23 05:42:00

And I've answered my own question! It's the "debug" command in pydb:

~> cat -n /tmp/test_python.py
     1  #!/usr/local/bin/python
     2
     3  def foo():
     4      print "hi"
     5      print "bye"
     6
     7  exit(0)
     8

~> pydb /tmp/test_python.py
(/tmp/test_python.py:7):  <module>
7 exit(0)


(Pydb) debug foo()
ENTERING RECURSIVE DEBUGGER
------------------------Call level 11
(/tmp/test_python.py:3):  foo
3 def foo():

((Pydb)) s
(/tmp/test_python.py:4):  foo
4     print "hi"

((Pydb)) s
hi
(/tmp/test_python.py:5):  foo
5     print "bye"


((Pydb)) s
bye
------------------------Return from level 11 (<type 'NoneType'>)
----------------------Return from level 10 (<type 'NoneType'>)
LEAVING RECURSIVE DEBUGGER
(/tmp/test_python.py:7):  <module>



Simon ,

You can interactively debug a function with pdb as well, provided the script you want to debug does not exit() at the end:

$ cat test.py
#!/usr/bin/python

def foo(f, g):
        h = f+g
        print h
        return 2*f

To debug, start an interactive python session and import pdb:

$ python
Python 2.5.1 (r251:54869, Apr 18 2007, 22:08:04) 
[GCC 4.0.1 (Apple Computer, Inc. build 5367)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pdb
>>> import test
>>> pdb.runcall(test.foo, 1, 2)
> /Users/simon/Desktop/test.py(4)foo()
-> h = f+g
(Pdb) n
> /Users/simon/Desktop/test.py(5)foo()
-> print h
(Pdb)

The pdb module comes with python and is documented in the modules docs at http://docs.python.org/modindex.html


Jerub , 2008-10-23 05:46:25

There is a python debugger that is part of the core distribution of python called 'pdb'. I rarely use it myself, but find it useful sometimes.

Given this program:

def foo():
    a = 0
    print "hi"

    a += 1

    print "bye"

foo()

Here is a session debugging it:

$ python /usr/lib/python2.5/pdb.py /var/tmp/pdbtest.py         ~
> /var/tmp/pdbtest.py(2)<module>()
-> def foo():
(Pdb) s
> /var/tmp/pdbtest.py(10)<module>()
-> foo()
(Pdb) s
--Call--
> /var/tmp/pdbtest.py(2)foo()
-> def foo():
(Pdb) s
> /var/tmp/pdbtest.py(3)foo()
-> a = 0
(Pdb) s
> /var/tmp/pdbtest.py(4)foo()
-> print "hi"
(Pdb) print a
0
(Pdb) s
hi
> /var/tmp/pdbtest.py(6)foo()
-> a += 1
(Pdb) s
> /var/tmp/pdbtest.py(8)foo()
-> print "bye"
(Pdb) print a
1
(Pdb) s
bye
--Return--
> /var/tmp/pdbtest.py(8)foo()->None
-> print "bye"
(Pdb) s
--Return--
> /var/tmp/pdbtest.py(10)<module>()->None
-> foo()
(Pdb) s



Alex Coventry , 2008-10-23 10:55:53

For interactive work on code I'm developing, I usually find it more efficient to set a programmatic "break point" in the code itself with pdb.set_trace. This makes it easier to break on the program's state deep in a loop, too: if <state>: pdb.set_trace()
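
A concrete (made-up) example of that pattern, breaking only on the suspicious case:

import pdb

def process(values):
    total = 0
    for v in values:
        if v < 0:          # drop into the debugger only when the odd case shows up
            pdb.set_trace()
        total += v
    return total

print(process([3, 7, -1, 5]))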

See also stackoverflow.com/questions/150375/ – Daryl Spitzer Oct 23 '08 at 16:23

Jeremy Cantrell , 2008-10-23 17:07:50

If you're more familiar with a GUI debugger, there's winpdb ('win' in this case does not refer to Windows). I actually use it on Linux.

On debian/ubuntu:

sudo aptitude install winpdb

Then just put this in your code where you want it to break:

import rpdb2; rpdb2.start_embedded_debugger_interactive_password()

Then start winpdb and attach to your running script.


[Nov 08, 2019] Pylint: Making your Python code consistent

Oct 21, 2019 | opensource.com

Pylint is your friend when you want to avoid arguing about code complexity.

Moshe Zadka (Community Moderator)


flake8 and black will take care of "local" style: where the newlines occur, how comments are formatted, and they will find issues like commented-out code or bad practices in log formatting.

Pylint is extremely aggressive by default. It will offer strong opinions on everything from checking if declared interfaces are actually implemented to opportunities to refactor duplicate code, which can be a lot to a new user. One way of introducing it gently to a project, or a team, is to start by turning all checkers off, and then enabling checkers one by one. This is especially useful if you already use flake8, black, and mypy : Pylint has quite a few checkers that overlap in functionality.


However, one of the things unique to Pylint is the ability to enforce higher-level issues: for example, number of lines in a function, or number of methods in a class.

These numbers might be different from project to project and can depend on the development team's preferences. However, once the team comes to an agreement about the parameters, it is useful to enforce those parameters using an automated tool. This is where Pylint shines.

Configuring Pylint

In order to start with an empty configuration, start your .pylintrc with

[MESSAGES CONTROL]

disable=all

This disables all Pylint messages. Since many of them are redundant, this makes sense. In Pylint, a message is a specific kind of warning.

You can check that all messages have been turned off by running pylint :

$ pylint <my package>

In general, it is not a great idea to add parameters to the pylint command-line: the best place to configure your pylint is the .pylintrc . In order to have it do something useful, we need to enable some messages.

In order to enable messages, add them to your .pylintrc under the [MESSAGES CONTROL] section:

enable = <message>,

...

listing the messages that look useful. Some of my favorites include too-many-lines, too-many-arguments, and too-many-branches. All of those limit the complexity of modules or functions, and serve as an objective check for code complexity, without a human nitpicker needed.
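
Putting that together, a .pylintrc along these lines enables just those checks (the exact list is of course a matter of taste):

[MESSAGES CONTROL]
disable=all
enable=too-many-lines,too-many-arguments,too-many-branches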

A checker is a source of messages: every message belongs to exactly one checker. Many of the most useful messages are under the design checker. The default numbers are usually good, but tweaking the maximums is straightforward: we can add a section called DESIGN in the .pylintrc.

[ DESIGN ]

max-args=7

max-locals=15

Another good source of useful messages is the refactoring checker. Some of my favorite messages to enable there are consider-using-dict-comprehension, stop-iteration-return (which looks for generators that use raise StopIteration when return is the correct way to stop the iteration), and chained-comparison, which will suggest using syntax like 1 <= x < 5 rather than the less obvious 1 <= x and x < 5.

Finally, an expensive checker, in terms of performance, but highly useful, is similarities . It is designed to enforce "Don't Repeat Yourself" (the DRY principle) by explicitly looking for copy-paste between different parts of the code. It only has one message to enable: duplicate-code . The default "minimum similarity lines" is set to 4 . It is possible to set it to a different value using the .pylintrc .

[ SIMILARITIES ]

min-similarity-lines=3

Pylint makes code reviews easy

If you are sick of code reviews where you point out that a class is too complicated, or that two different functions are basically the same, add Pylint to your Continuous Integration configuration, and only have the arguments about complexity guidelines for your project once .

[Nov 07, 2017] pdb the Python Debugger

Sep 03, 2017 | docs.python.org
The pdb module documentation from docs.python.org, quoted in full in the Introduction and Debugger Commands sections above.

[Sep 03, 2017] Python Conquers The Universe

Notable quotes:
"... If you press ENTER without entering anything, pdb will re-execute the last command that you gave it. ..."
"... When you use "s" to step into subroutines, you will often find yourself trapped in a subroutine. You have examined the code that you're interested in, but now you have to step through a lot of uninteresting code in the subroutine. ..."
"... In this situation, what you'd like to be able to do is just to skip ahead to the end of the subroutine. That is, you want to do something like the "c" ("continue") command does, but you want just to continue to the end of the subroutine, and then resume your stepping through the code. ..."
"... You can do it. The command to do it is "r" (for "return" or, better, "continue until return"). If you are in a subroutine and you enter the "r" command at the (Pdb) prompt, pdb will continue executing until the end of the subroutine. At that point ! the point when it is ready to return to the calling routine ! it will stop and show the (Pdb) prompt again, and you can resume stepping through your code. ..."
Debugging in Python

Posted on 2009/09/10 by Steve Ferg

As a programmer, one of the first things that you need for serious program development is a debugger.

Python has a debugger, which is available as a module called pdb (for "Python DeBugger", naturally!). Unfortunately, most discussions of pdb are not very useful to a Python newbie - most are very terse and simply rehash the description of pdb in the Python library reference manual. The discussion that I have found most accessible is in the first four pages of Chapter 27 of the Python 2.1 Bible.

So here is my own personal gentle introduction to using pdb. It assumes that you are not using any IDE - that you're coding Python with a text editor and running your Python programs from the command line.

Getting started: pdb.set_trace()

To start, I'll show you the very simplest way to use the Python debugger.

1. Let's start with a simple program, epdb1.py.

# epdb1.py -- experiment with the Python debugger, pdb
a = "aaa"
b = "bbb"
c = "ccc"
final = a + b + c
print final

2. Insert the following statement at the beginning of your Python program. This statement imports the Python debugger module, pdb.

import pdb

3. Now find a spot where you would like tracing to begin, and insert the following code:

pdb.set_trace()

So now your program looks like this.

# epdb1.py -- experiment with the Python debugger, pdb
import pdb
a = "aaa"
pdb.set_trace()
b = "bbb"
c = "ccc"
final = a + b + c
print final

4. Now run your program from the command line as you usually do, which will probably look something like this:

PROMPT> python epdb1.py

When your program encounters the line with pdb.set_trace() it will start tracing. That is, it will (1) stop, (2) display the "current statement" (that is, the line that will execute next) and (3) wait for your input. You will see the pdb prompt, which looks like this:

(Pdb)
Execute the next statement with "n" (next)

At the (Pdb) prompt, press the lower-case letter "n" (for "next") on your keyboard, and then press the ENTER key. This will tell pdb to execute the current statement. Keep doing this - pressing "n", then ENTER.

Eventually you will come to the end of your program, and it will terminate and return you to the normal command prompt.

Congratulations! You've just done your first debugging run!

Repeating the last debugging command with ENTER

This time, do the same thing as you did before. Start your program running. At the (Pdb) prompt, press the lower-case letter "n" (for "next") on your keyboard, and then press the ENTER key.

But this time, after the first time that you press "n" and then ENTER, don't do it any more. Instead, when you see the (Pdb) prompt, just press ENTER. You will notice that pdb continues, just as if you had pressed "n". So this is Handy Tip #1:

If you press ENTER without entering anything, pdb will re-execute the last command that you gave it.

In this case, the command was "n", so you could just keep stepping through the program by pressing ENTER.

Notice that as you passed the last line (the line with the "print" statement), it was executed and you saw the output of the print statement ("aaabbbccc") displayed on your screen.

Quitting it all with "q" (quit)

The debugger can do all sorts of things, some of which you may find totally mystifying. So the most important thing to learn now - before you learn anything else - is how to quit debugging!

It is easy. When you see the (Pdb) prompt, just press "q" (for "quit") and the ENTER key. Pdb will quit and you will be back at your command prompt. Try it, and see how it works.

Printing the value of variables with "p" (print)

The most useful thing you can do at the (Pdb) prompt is to print the value of a variable. Here's how to do it.

When you see the (Pdb) prompt, enter "p" (for "print") followed by the name of the variable you want to print. And of course, you end by pressing the ENTER key.

Note that you can print multiple variables, by separating their names with commas (just as in a regular Python "print" statement). For example, you can print the value of the variables a, b, and c this way:

p a, b, c
When does pdb display a line?

Suppose you have progressed through the program until you see the line

final = a + b + c

and you give pdb the command

p final

You will get a NameError exception. This is because, although you are seeing the line, it has not yet executed. So the final variable has not yet been created.

Now press "n" and ENTER to continue and execute the line. Then try the "p final" command again. This time, when you give the command "p final", pdb will print the value of final , which is "aaabbbccc".

Turning off the (Pdb) prompt with "c" (continue)

You probably noticed that the "q" command got you out of pdb in a very crude way - basically, by crashing the program.

If you wish simply to stop debugging, but to let the program continue running, then you want to use the "c" (for "continue") command at the (Pdb) prompt. This will cause your program to continue running normally, without pausing for debugging. It may run to completion. Or, if the pdb.set_trace() statement was inside a loop, you may encounter it again, and the (Pdb) debugging prompt will appear once more.

Seeing where you are with "l" (list)

As you are debugging, there is a lot of stuff being written to the screen, and it gets really hard to get a feeling for where you are in your program. That's where the "l" (for "list") command comes in. (Note that it is a lower-case "L", not the numeral "one" or the capital letter "I".)

"l" shows you, on the screen, the general area of your program's souce code that you are executing. By default, it lists 11 (eleven) lines of code. The line of code that you are about to execute (the "current line") is right in the middle, and there is a little arrow "–>" that points to it.

So a typical interaction with pdb might go something like this:
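
Here is a rough sketch of what such a session with epdb1.py might look like (paths, line numbers, and exact formatting will vary on your system):

> /tmp/epdb1.py(5)<module>()
-> b = "bbb"
(Pdb) l
  1     # epdb1.py -- experiment with the Python debugger, pdb
  2     import pdb
  3     a = "aaa"
  4     pdb.set_trace()
  5  -> b = "bbb"
  6     c = "ccc"
  7     final = a + b + c
  8     print final
[EOF]
(Pdb) p a
'aaa'
(Pdb) n
> /tmp/epdb1.py(6)<module>()
-> c = "ccc"
(Pdb)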

Stepping into subroutines with "s" (step into)

Eventually, you will need to debug larger programs - programs that use subroutines. And sometimes, the problem that you're trying to find will lie buried in a subroutine. Consider the following program.

# epdb2.py -- experiment with the Python debugger, pdb
import pdb

def combine(s1,s2):      # define subroutine combine, which...
    s3 = s1 + s2 + s1    # sandwiches s2 between copies of s1, ...
    s3 = '"' + s3 +'"'   # encloses it in double quotes,...
    return s3            # and returns it.

a = "aaa"
pdb.set_trace()
b = "bbb"
c = "ccc"
final = combine(a,b)
print final

As you move through your programs by using the "n" command at the (Pdb) prompt, you will find that when you encounter a statement that invokes a subroutine - the final = combine(a,b) statement, for example - pdb treats it no differently than any other statement. That is, the statement is executed and you move on to the next statement - in this case, to print final.

But suppose you suspect that there is a problem in a subroutine. In our case, suppose you suspect that there is a problem in the combine subroutine. What you want - when you encounter the final = combine(a,b) statement - is some way to step into the combine subroutine, and to continue your debugging inside it.

Well, you can do that too. Do it with the "s" (for "step into") command.

When you execute statements that do not involve function calls, "n" and "s" do the same thing - move on to the next statement. But when you execute statements that invoke functions, "s", unlike "n", will step into the subroutine. In our case, if you executed the

final = combine(a,b)

statement using "s", then the next statement that pdb would show you would be the first statement in the combine subroutine:

def combine(s1,s2):

and you will continue debugging from there.

Continuing but just to the end of the current subroutine with "r" (return)

When you use "s" to step into subroutines, you will often find yourself trapped in a subroutine. You have examined the code that you're interested in, but now you have to step through a lot of uninteresting code in the subroutine.

In this situation, what you'd like to be able to do is just to skip ahead to the end of the subroutine. That is, you want to do something like the "c" ("continue") command does, but you want just to continue to the end of the subroutine, and then resume your stepping through the code.

You can do it. The command to do it is "r" (for "return" or, better, "continue until return"). If you are in a subroutine and you enter the "r" command at the (Pdb) prompt, pdb will continue executing until the end of the subroutine. At that point - the point when it is ready to return to the calling routine - it will stop and show the (Pdb) prompt again, and you can resume stepping through your code.

You can do anything at all at the (Pdb) prompt

Sometimes you will be in the following situation: you think you've discovered the problem. The statement that was assigning a value of, say, "aaa" to variable var1 was wrong, and was causing your program to blow up. It should have been assigning the value "bbb" to var1.

at least, you're pretty sure that was the problem

What you'd really like to be able to do, now that you've located the problem, is to assign "bbb" to var1, and see if your program now runs to completion without bombing.

It can be done!

One of the nice things about the (Pdb) prompt is that you can do anything at it - you can enter any command that you like at the (Pdb) prompt. So you can, for instance, enter this command at the (Pdb) prompt.

(Pdb) var1 = "bbb"

You can then continue to step through the program. Or you could be adventurous - use "c" to turn off debugging, and see if your program will end without bombing!

but be a little careful!

[Thanks to Dick Morris for the information in this section.]

Since you can do anything at all at the (Pdb) prompt, you might decide to try setting the variable b to a new value, say "BBB", this way:

(Pdb) b = "BBB"

If you do, pdb produces a strange error message about being unable to find an object named '= "BBB" '. Why???

What happens is that pdb attempts to execute the pdb command for setting and listing breakpoints (a command that we haven't discussed). It interprets the rest of the line as an argument to the command, and can't find the object that (it thinks) is being referred to. So it produces an error message.

So how can we assign a new value to b? The trick is to start the command with an exclamation point (!).

(Pdb) !b = "BBB"

An exclamation point tells pdb that what follows is a Python statement, not a pdb command.
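
To tie these commands together, here is a small, hypothetical script in the spirit of the examples above (it is not the tutorial's original listing; the names combine, a, b and final simply mirror the ones used in the text), with the pdb commands you might type shown as comments:

# debug_demo.py -- hypothetical example; run it with:  python -m pdb debug_demo.py

def combine(s1, s2):
    combined = s1 + s2          # "s" from the caller lands here, at the def line
    return combined

def main():
    a = "aaa"
    b = "BBB"
    final = combine(a, b)       # "n" executes the whole call; "s" steps into combine()
    print(final)

if __name__ == "__main__":
    main()

# Typical commands at the (Pdb) prompt:
#   n             execute the current statement and stop at the next one
#   s             at the final = combine(a,b) line, step into combine()
#   r             run until combine() is about to return, then stop
#   !b = "BBB"    a Python assignment; the leading ! keeps pdb from reading
#                 "b" as its breakpoint command
#   c             continue running without stepping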

The End

Well, that's all for now. There are a number of topics that I haven't mentioned, such as help, aliases, and breakpoints. For information about them, try the online reference for pdb commands on the Python documentation web site. In addition, I recommend Jeremy Jones' article Interactive Debugging in Python in O'Reilly's Python DevCenter.

I hope that this introduction to pdb has been enough to get you up and running fairly quickly and painlessly. Good luck!

- Steve Ferg

Sep 03, 2017 | pythonconquerstheuniverse.wordpress.com

[Dec 26, 2016] Python 3.6 Released

Dec 26, 2016 | developers.slashdot.org
(python.org)

Posted by EditorDavid on Saturday December 24, 2016 @10:34AM from the batteries-included dept.

On Friday, more than a year after Python 3.5, core developers Elvis Pranskevichus and Yury Selivanov announced the release of version 3.6.

An anonymous reader writes:

InfoWorld describes the changes as async in more places, speed and memory usage improvements, and pluggable support for JITs, tracers, and debuggers. "Python 3.6 also provides support for DTrace and SystemTap, brings a secrets module to the standard library [to generate authentication tokens], introduces new string and number formats, and adds type annotations for variables. It also gives us easier methods to customize the creation of subclasses."

You can read Slashdot's interview with Python creator Guido van Rossum from 2013.

I also remember an interview this July where Perl creator Larry Wall called Python "a pretty okay first language, with a tendency towards style enforcement, monoculture, and group-think... more interested in giving you one adequate way to do something than it is in giving you a workshop that you, the programmer, get to choose the best tool from."

Anyone want to share their thoughts today about the future of Python?

[Dec 12, 2015] 11 New Open Source Development Tools

Datamation

YAPF

Short for "Yet Another Python Formatter," YAPF reformats Python code so that it conforms to the style guide and looks good. It's a Google-owned project. Operating System: OS Independent

Python Execute Unix - Linux Command Examples

os.system has many problems, and subprocess is a much better way to execute Unix commands. The syntax is:
import subprocess
subprocess.call("command1")
subprocess.call(["command1", "arg1", "arg2"]) 

In this example, execute the date command:

import subprocess
subprocess.call("date") 

Sample outputs:

Sat Nov 10 00:59:42 IST 2012
0

You can pass arguments using the following syntax, i.e., run the ls -l /etc/resolv.conf command:

import subprocess
subprocess.call(["ls", "-l", "/etc/resolv.conf"])
 

Sample outputs:

-rw-r--r-- 1 root root 157 Nov  7 15:06 /etc/resolv.conf
0
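
If you also want to capture the command's output as a string rather than letting it print to the terminal, the same module can do that; a minimal sketch using subprocess.Popen (available since Python 2.4):

import subprocess

# Run the command without a shell and capture stdout/stderr.
p = subprocess.Popen(["ls", "-l", "/etc/resolv.conf"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print("exit status: %d" % p.returncode)
print("output: %s" % out.decode())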

[Feb 09, 2014] build - How to package Python as RPM for install into -opt

Stack Overflow

Q: How to create a binary RPM package out of Python 2.7.2 sources for installation into a non-standard prefix such as /opt/python27?

Assume the following builds correctly.

wget http://python.org/ftp/python/2.7.2/Python-2.7.2.tgz
tar zxvf Python-2.7.2.tgz
cd Python-2.7.2
./configure --prefix=/opt/python27 --enable-shared
make
make test
sudo make install

Instead of the last command I'd like to build a binary RPM.

A:

RPMs are built using rpmbuild from a .spec file. As an example, look at python.spec from Fedora.

If you don't need to build from sources then try rpm's --relocate switch on a pre-built RPM for your distribution:

rpm -i --relocate /usr=/opt/python27 python-2.7.rpm

Python 2.7 RPMs - Gitorious

Python 2.7 RPMs

Port of the Fedora 15 Python 2.7 RPM and some of the related stack to build on RHEL 5 & 6 (and derivatives such as CentOS). Can be installed in parallel to the system Python packages.

[Feb 09, 2014] Building and Installing Python 2.7 RPMs on CentOS 5.7 by Nathan Milford

blog.milford.io
I was asked today to install Python 2.7 on a CentOS-based node, and I thought I'd take this opportunity to add a companion article to my Python 2.6 article.

We're all well aware that CentOS is pretty backwards when it comes to having the latest and greatest software packages, and it is particularly finicky when it comes to Python since so much of RHEL depends on it.

As a rule, I refuse to rush in and install anything in production that isn't in a manageable package format such as RPM. I need to be able to predictably reproduce software installs across a large number of nodes.

The following steps will not clobber your default Python 2.4 install and will keep both CentOS and your developers happy.

So, here we go.

Install the dependencies.

sudo yum -y install rpmdevtools tk-devel tcl-devel expat-devel db4-devel \
                    gdbm-devel sqlite-devel bzip2-devel openssl-devel \
                    ncurses-devel readline-devel

Set up your RPM build environment.

rpmdev-setuptree

Grab my spec file.

wget https://raw.github.com/nmilford/specfiles/master/python-2.7/python27-2.7.2.spec \
     -O ~/rpmbuild/SPECS/python27-2.7.2.spec 
wget http://www.python.org/ftp/python/2.7.2/Python-2.7.2.tar.bz2 \
     -O ~/rpmbuild/SOURCES/Python-2.7.2.tar.bz2

Build the RPM. (FYI, the QA_RPATHS variable tells rpmbuild to skip some file path errors.)

QA_RPATHS=$[ 0x0001|0x0010 ] rpmbuild -bb ~/rpmbuild/SPECS/python27-2.7.2.spec

Install the RPMs.

sudo rpm -Uvh ~/rpmbuild/RPMS/x86_64/python27*.rpm

Now on to the setuptools.

Grab my spec file.

wget https://raw.github.com/nmilford/specfiles/master/python-2.7/python27-setuptools-0.6c11.spec \
     -O ~/rpmbuild/SPECS/python27-setuptools-0.6c11.spec 

Grab the source.

wget http://pypi.python.org/packages/source/s/setuptools/setuptools-0.6c11.tar.gz \
     -O ~/rpmbuild/SOURCES/setuptools-0.6c11.tar.gz

Build the RPMs.

rpmbuild -bb ~/rpmbuild/SPECS/python27-setuptools-0.6c11.spec

Install the RPMs.

sudo rpm -Uvh ~/rpmbuild/RPMS/noarch/python27-setuptools-0.6c11-milford.noarch.rpm

Now, we'll install MySQL-python as an example.

Grab the mysql-dev package

yum -y install mysql-devel

Grab, build and install the MySQL-python package.

curl http://superb-sea2.dl.sourceforge.net/project/mysql-python/mysql-python/1.2.3/MySQL-python-1.2.3.tar.gz | tar zxv
cd MySQL-python-1.2.3
python2.7 setup.py build
python2.7 setup.py install

As with the previous Python 2.6 article, note how I called the script explicitly using the python2.7 binary (/usr/bin/python2.7).

Now we're good to give it the old test thus:

python2.7 -c "import MySQLdb"

If it doesn't puke out some error message, you're all set.

Happy pythoning.

[Oct 21, 2012] Last File Manager freshmeat.net

Written in Python. The last version is LFM 2.3 dated May 2011. Codebase is below 10K lines.
21 May 2011

Lfm is a curses-based file manager for the Unix console written in Python

Python 2.5 or later is required now. PowerCLI was added, an advanced command line interface with completion, persistent history, variable substitution, and many other useful features.

Persistent history in all forms was added. Lots of improvements were made and bugs were fixed.

[Apr 04, 2011] Scripting the Linux desktop, Part 1: Basics by Paul Ferrill

Jan 18, 2011 | developerWorks

Developing applications for the Linux desktop typically requires some type of graphical user interface (GUI) framework to build on. Options include GTK+ for the GNOME desktop and Qt for the K Desktop Environment (KDE). Both platforms offer everything a developer needs to build a GUI application, including libraries and layout tools to create the windows users see. This article shows you how to build desktop productivity applications based on the screenlets widget toolkit (see Resources for a link).

A number of existing applications would fit in the desktop productivity category, including GNOME Do and Tomboy. These applications typically allow users to interact with them directly from the desktop through either a special key combination or by dragging and dropping from another application such as Mozilla Firefox. Tomboy functions as a desktop note-taking tool that supports dropping text from other windows.

Getting started with screenlets

You need to install a few things to get started developing screenlets. First, install the screenlets package using either the Ubuntu Software Center or the command line. In the Ubuntu Software Center, type screenlets in the Search box. You should see two options for the main package and a separate installation for the documentation.

Python and Ubuntu

You program screenlets using Python. The basic installation of Ubuntu 10.04 has Python version 2.6 installed, as many utilities depend on it. You may need additional libraries depending on your application's requirements. For the purpose of this article, I installed and tested everything on Ubuntu version 10.04.

Next, download the test screenlet's source from the screenlets.org site. The test screenlet resides in the src/share/screenlets/Test folder and uses Cairo and GTK, which you also need to install. The entire source code for the test program is in the TestScreenlet.py file. Open this file in your favorite editor to see the basic structure of a screenlet.

Python is highly object oriented and as such uses the class keyword to define an object. In this example, the class is named TestScreenlet and has a number of methods defined. In TestScreenlet.py, note the following code at line 42:

def __init__(self, **keyword_args):

Python uses the leading and trailing double underscore (__) notation to identify system functions with predefined behaviors. In this case, the __init__ function is for all intents and purposes the constructor for the class and contains any number of initialization steps to be executed on the creation of a new instance of the object. By convention, the first argument of every class method is a reference to the current instance of the class and is named self. This behavior makes it easy to use self to reference methods and properties of the instance it is in:

self.theme_name = "default"

The screenlets framework defines several naming conventions and standards, as outlined on screenlets.org's developer's page (see Resources for a link). There's a link to the source code for the screenlets package along with the application programming interface (API) documentation. Looking at the code also gives you insight into what each function does with the calling arguments and what it returns.

Writing a simple screenlet

The basic components of a screenlet include an icon file, the source code file, and a themes folder. The themes folder contains additional folders for different themes. You'll find a sample template at screenlets.org with the required files and folders to help you get started.

For this first example, use the template provided to create a basic "Hello World" application. The code for this basic application is shown in Listing 1.

Listing 1. Python code for the Hello World screenlet
	
#!/usr/bin/env python

import screenlets

class HelloWorldScreenlet(screenlets.Screenlet):
    __name__ = 'HelloWorld'
    __version__ = '0.1'
    __author__ = 'John Doe'
    __desc__ = 'Simple Hello World Screenlet'
    
    def __init__(self, **kwargs):
        # Customize the width and height.
        screenlets.Screenlet.__init__(self, width=180, height=50, **kwargs)
    
    def on_draw(self, ctx):
        # Change the color to white and fill the screenlet.
        ctx.set_source_rgb(255, 255, 255)
        self.draw_rectangle(ctx, 0, 0, self.width, self.height)

        # Change the color to black and write the message.
        ctx.set_source_rgb(0, 0, 0)
        text = 'Hello World!'
        self.draw_text(ctx, text, 10, 10, "Sans 9" , 20, self.width)


if __name__ == "__main__":
    import screenlets.session
    screenlets.session.create_session(HelloWorldScreenlet)

Each application must import the screenlets framework and create a new session. There are a few other minimal requirements, including any initialization steps along with a basic draw function to present the widget on screen. The TestScreenlet.py example has an __init__ method that initializes the object. In this case, you see a single line with a call to the screenlet's __init__ method, which sets the initial width and height of the window to be created for this application.

The only other function you need for this application is the on_draw method. This routine sets the background color of the box to white and draws a rectangle with the dimensions defined earlier. It sets the text color to black and the source text to "Hello World!" and then draws the text. ...

Reusing code in a more complex screenlet

One nice thing about writing screenlets is the ability to reuse code from other applications. Code reuse opens a world of possibilities with the wide range of open source projects based on the Python language. Every screenlet has the same basic structure but with more methods defined to handle different behaviors. Listing 2 shows a sample application named TimeTrackerScreenlet.

Listing 2. Python code for the Time Tracker screenlet

	
#!/usr/bin/env python

import screenlets
import cairo
import datetime

class TimeTrackerScreenlet(screenlets.Screenlet):
	__name__ = 'TimeTrackerScreenlet'
	__version__ = '0.1'
	__author__ = 'John Doe'
	__desc__ = 'A basic time tracker screenlet.'
	
	theme_dir = 'themes/default'
	image = 'start.png'

	def __init__(self, **keyword_args):
		screenlets.Screenlet.__init__(self, width=250, height=50, **keyword_args)
		self.add_default_menuitems()
		self.y = 25
		self.theme_name = 'default'
		self.on = False
		self.started = None

	def on_draw(self, ctx):
		self.draw_scaled_image(ctx, 0, 0, self.theme_dir + '/' + 
		self.image, self.width, self.height)
		
	def on_mouse_down(self, event):
		if self.on:
			self.started = datetime.datetime.now()
			self.image = 'stop.png'
			self.on = False
		else:
			if self.started:
				length = datetime.datetime.now() - self.started
				screenlets.show_message(None, '%s seconds' % 
				length.seconds, 'Time')
				self.started = None
			self.image = 'start.png'
			self.on = True

	def on_draw_shape(self, ctx):
		self.on_draw(ctx)
		ctx.rectangle(0, 0, self.width, self.height)
		ctx.fill()
	

if __name__ == "__main__":
	import screenlets.session
	screenlets.session.create_session(TimeTrackerScreenlet)

This example introduces a few more concepts that you need to understand before you start building anything useful. All screenlet applications have the ability to respond to specific user actions or events such as mouse clicks or drag-and-drop operations. In this example, the mouse down event is used as a trigger to change the state of your icon. When the screenlet runs, the start.png image is displayed. Clicking the image changes it to stop.png and records the time started in self.started. Clicking the stop image changes the image back to start.png and displays the amount of time elapsed since the first start image was clicked.

Responding to events is another key capability that makes it possible to build any number of different applications. Although this example only uses the mouse_down event, you can use the same approach for other events generated either by the screenlets framework or by a system event such as a timer. The second concept introduced here is persistent state. Because your application is running continuously, waiting for an event to trigger some action, it is able to keep track of items in memory, such as the time the start image was clicked. You could also save information to disk for later retrieval, if necessary.

Automating tasks with screenlets

Now that you have the general idea behind developing screenlets, let's put it all together. Most users these days use a Really Simple Syndication (RSS) reader to read blogs and news feeds. For this last example, you're going to build a configurable screenlet that monitors specific feeds for keywords and displays any hits in a text box. The results will be clickable links to open the post in your default Web browser. Listing 3 shows the source code for the RSS Search screenlet.

Listing 3. Python code for the RSS Search screenlet
	
#!/usr/bin/env python

from screenlets.options import StringOption, IntOption, ListOption
import xml.dom.minidom
import webbrowser
import screenlets
import urllib2
import gobject
import pango
import cairo

class RSSSearchScreenlet(screenlets.Screenlet):
    __name__ = 'RSSSearch'
    __version__ = '0.1'
    __author__ = 'John Doe'
    __desc__ = 'An RSS search screenlet.'
    
    topic = 'Windows Phone 7'
    feeds = ['http://www.engadget.com/rss.xml',
             'http://feeds.gawker.com/gizmodo/full']
    interval = 10
    
    __items = []
    __mousesel = 0
    __selected = None
    
    def __init__(self, **kwargs):
        # Customize the width and height.
        screenlets.Screenlet.__init__(self, width=250, height=300, **kwargs)
        self.y = 25
        
    def on_init(self):
        # Add options.
        self.add_options_group('Search Options',
                               'RSS feeds to search and topic to search for.')
        self.add_option(StringOption('Search Options',
            'topic',
            self.topic,
            'Topic',
            'Topic to search feeds for.'))
        self.add_option(ListOption('Search Options',
                                   'feeds',
                                   self.feeds,
                                   'RSS Feeds',
                                   'A list of feeds to search for a topic.'))
        self.add_option(IntOption('Search Options',
                                  'interval',
                                  self.interval,
                                  'Update Interval',
                                  'How frequently to update (in seconds)'))

        self.update()
        
    def update(self):
        """Search selected feeds and update results."""
        
        self.__items = []

        # Go through each feed.
        for feed_url in self.feeds:
            
            # Load the raw feed and find all item elements.
            raw = urllib2.urlopen(feed_url).read()
            dom = xml.dom.minidom.parseString(raw)
            items = dom.getElementsByTagName('item')
            
            for item in items:
                
                # Find the title and make sure it matches the topic.
                title = item.getElementsByTagName('title')[0].firstChild.data
                if self.topic.lower() not in title.lower(): continue
                
                # Shorten the title to 30 characters.
                if len(title) > 30: title = title[:27]+'...'
                
                # Find the link and save the item.
                link = item.getElementsByTagName('link')[0].firstChild.data
                self.__items.append((title, link))

        self.redraw_canvas()

        # Set to update again after self.interval.
        self.__timeout = gobject.timeout_add(self.interval * 1000, self.update)
        
    def on_draw(self, ctx):
        """Called every time the screenlet is drawn to the screen."""
        
        # Draw the background (a gradient).
        gradient = cairo.LinearGradient(0, self.height * 2, 0, 0)
        gradient.add_color_stop_rgba(1, 1, 1, 1, 1)
        gradient.add_color_stop_rgba(0.7, 1, 1, 1, 0.75)
        ctx.set_source(gradient)
        self.draw_rectangle_advanced (ctx, 0, 0, self.width - 20,
                                      self.height - 20,
                                      rounded_angles=(5, 5, 5, 5),
                                      fill=True, border_size=1,
                                      border_color=(0, 0, 0, 0.25),
                                      shadow_size=10,
                                      shadow_color=(0, 0, 0, 0.25))
        
        # Make sure we have a pango layout initialized and updated.
        if self.p_layout == None :
            self.p_layout = ctx.create_layout()
        else:
            ctx.update_layout(self.p_layout)
            
        # Configure fonts.
        p_fdesc = pango.FontDescription()
        p_fdesc.set_family("Free Sans")
        p_fdesc.set_size(10 * pango.SCALE)
        self.p_layout.set_font_description(p_fdesc)

        # Display our text.
        pos = [20, 20]
        ctx.set_source_rgb(0, 0, 0)
        x = 0
        self.__selected = None
        for item in self.__items:
            ctx.save()
            ctx.translate(*pos)
            
            # Find if the current item is under the mouse.
            if self.__mousesel == x and self.mouse_is_over:
                ctx.set_source_rgb(0, 0, 0.5)
                self.__selected = item[1]
            else:
                ctx.set_source_rgb(0, 0, 0)
            
            self.p_layout.set_markup('%s' % item[0])
            ctx.show_layout(self.p_layout)
            pos[1] += 20
            ctx.restore()
            x += 1

    def on_draw_shape(self, ctx):
        ctx.rectangle(0, 0, self.width, self.height)
        ctx.fill()
        
    def on_mouse_move(self, event):
        """Called whenever the mouse moves over the screenlet."""
        
        x = event.x / self.scale
        y = event.y / self.scale
        self.__mousesel = int((y -10 )/ (20)) -1
        self.redraw_canvas()
    
    def on_mouse_down(self, event):
        """Called when the mouse is clicked."""
        
        if self.__selected and self.mouse_is_over:
            webbrowser.open_new(self.__selected)


if __name__ == "__main__":
    import screenlets.session
    screenlets.session.create_session(RSSSearchScreenlet)

Building on the concepts of the first two examples, this screenlet uses a number of new concepts, including the config page. In the on_init routine, three options are added for the user to specify: a list of RSS feeds to track, a topic of interest to search for, and an update interval. The update routine then uses all of these when it runs.

Python is a great language for this type of task. The standard library includes everything you need to load the Extensible Markup Language (XML) from an RSS feed into a searchable list. In Python, this takes just three lines of code:

raw = urllib2.urlopen(feed_url).read()
dom = xml.dom.minidom.parseString(raw)
items = dom.getElementsByTagName('item')

The libraries used in these three lines include urllib2 and xml. In the first line, the entire contents found at the feed_url address are read into the string raw. Next, because you know that this string contains XML, you use the Python XML library dom.minidom.parseString method to create a document object made up of node objects.

Finally, you create a list of element objects corresponding to the individual XML elements named item. You can then iterate over this list to search for your target topic. Python has a very elegant way of iterating over a list of items using the for keyword, as in this code snippet:

for item in items:
    # Find the title and make sure it matches the topic.
    title = item.getElementsByTagName('title')[0].firstChild.data
    if self.topic.lower() not in title.lower(): continue

Each item matching your criteria is added to the currently displayed list, which is associated with this instance of the screenlet. Using this approach makes it possible to have multiple instances of the same screenlet running, each configured to search for different topics. The final part of the update function redraws the text with the updated list and fires off a new update timer based on the interval on the config page. By default, the timer fires every 10 seconds, although you could change that to anything you want. The timer mechanism comes from the gobject library, which is a part of the GTK framework.
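
For reference, here is a minimal sketch of that gobject timer idiom on its own, outside the screenlets framework (it assumes the PyGTK-era gobject module is installed; a screenlet session normally supplies its own main loop):

import gobject

def tick():
    print("updating...")   # in the screenlet, update() re-runs here
    return False           # False = run once; True would reschedule the timer

# Schedule tick() to fire in 10 seconds, just as update() does with
# gobject.timeout_add(self.interval * 1000, self.update).
gobject.timeout_add(10 * 1000, tick)

loop = gobject.MainLoop()
gobject.timeout_add(11 * 1000, loop.quit)   # stop the demo shortly afterwards
loop.run()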

This application expands the on_draw method quite heavily to accommodate your new functionality. Both the Cairo and Pango libraries make it possible to create some of the effects used in the text window. Using a gradient gives the background of the widget a nice look along with rounded angles and semi-transparency. Using Pango for layout adds a number of functions for saving and restoring the current context easily. It also provides a way to generate scalable fonts based on the current size of the screenlet.

The trickiest part in the on_draw method is handling when a user hovers over an item in the list. Using the for keyword, you iterate over the items in the screenlet to see whether the user is hovering over that particular item. If so, you set the selected property and change the color to provide visual feedback. You also use a bit of markup to set the link property to bold, which is probably not the most elegant or efficient way to deal with the problem, but it works. When a user clicks one of the links in the box, a Web browser is launched with the target URL. You can see this functionality in the on_mouse_down function. Python and its libraries make it possible to launch the default web browser to display the desired page with a single line of code. Figure 2 shows an example of this screenlet.

[Dec 25, 2010] Linux Developers choose Python as Best Programming Language and ...

Such polls mainly reflect what industry is using, not so much the quality of the language. In another poll, the best Linux distribution is Ubuntu, which is probably the most primitive among the major distributions available.
According to Linux Journal readers, Python is both the best programming language and the best scripting language out there.

This year, more than 12,000 developers weighed in on what tools are helping them work and play as part of the Linux Journal's 2010 Readers' Choice Award - and it came as no surprise to those of us at ActiveState that Python came out on top as both the Best Scripting Language (beating out PHP, bash, Perl and Ruby) - and for the second straight year, Python also won as the Best Programming Language, once again edging out C++, Java, C and Perl for the honors.

At ActiveState, we continue to see a steady stream of ActivePython Community Edition downloads, more enterprise deployments of ActivePython Business Edition, and a steady increase in the number of enterprise-ready Python packages in our PyPM Index that are being used by our customers over a wide range of verticals including high-tech, financial services, healthcare, and aerospace companies. Python has matured into an enterprise-class programming language that continues to nurture its scripting-world roots. We're happy to see Python get the recognition that it so justly deserves!

[Dec 25, 2010] Russ's Notes on Python

"I'm not entirely sure why Python has never caught on with me as a language to use on a regular basis. Certainly, one of the things that always bugs me is the lack of good integrated documentation support like POD (although apparently reStructured Text is slowly becoming that), but that's not the whole story. I suspect a lot is just that I'm very familiar with Perl and with its standard and supporting library, and it takes me longer to do anything in Python. But the language just feels slightly more awkward, and I never have gotten comfortable with the way that it uses exceptions for all error reporting. "

On the subject of C program indentation: In My Egotistical Opinion, most people's C programs should be indented six feet downward and covered with dirt.

- Blair P. Houghton

Introduction

Around the beginning of April, 2001, I finally decided to do something about the feeling I'd had for some time that I'd like to learn a few new programming languages. I started by looking at Python. These are my notes on the process.

Non-religious comments are welcome. Please don't send me advocacy.

I chose Python as a language to try (over a few other choices like Objective Caml or Common Lisp) mostly because it's less of a departure from the languages that I'm already comfortable with. In particular, it's really quite a bit like Perl. I picked this time to start since I had an idea for an initial program to try writing in Python, a program that I probably would normally write in Perl. I needed a program to help me manage releases of the various software packages that I maintain, something to put a new version on an ftp site, update a series of web pages, generate a change log in a nice form for the web, and a few other similar things.

I started by reading the Python tutorial off www.python.org, pretty much straight through. I did keep an interactive Python process running while I did, but I didn't type in many of the examples; the results were explained quite well in the tutorial, and I generally don't need to do things myself to understand them. The tutorial is exceptionally well-written; after finishing reading it straight through (which took me an evening) and skimming the library reference, I felt I had a pretty good grasp on the language.

Things that immediately jumped out at me that I liked a lot:

There were a few things that I immediately didn't like, after having just read the tutorial:

There were also a couple of things that I immediately missed from other languages:

2001-04-08

Over the next few days, I started reading the language manual straight through, as well as poking around more parts of the language reference and writing some code. I started with a function to find the RCS keywords and version string in a file and from that extract the version and the last modified date (things that would need to be modified on the web page for that program). I really had a lot of fun with this.

The Python standard documentation is excellent. I mean truly superb. I can't really compare it to Perl (the other language that has truly excellent standard documentation), since I know Perl so well that I can't evaluate its documentation from the perspective of the beginner, but Python's tutorial eased me into the language beautifully and the language manual is well-written, understandable, and enjoyable to read. The library reference is well-organized and internally consistent, and I never had much trouble finding things. And they're available in info format as well as web pages, which is a major advantage for me; info is easier for me to read straight through, and web pages are easier for me to browse.

The language proved rather fun to write. Regex handling is a bit clunky since it's not a language built-in, but I was expecting that and I don't really mind it. The syntax is fun, and XEmacs python-mode does an excellent job handling highlighting and indentation. I was able to put together that little function and wrap a test around it fairly quickly (in a couple of hours while on the train, taking a lot of breaks to absorb the language reference manual or poke around in the library reference for the best way of doing something).

That's where I am at the moment. More as I find time to do more....

2001-05-04

I've finished my first Python program, after having gotten distracted by a variety of other things. It wasn't the program I originally started writing, since the problem of releasing a new version of a software package ended up being more complicated than I expected. (In particular, generating the documentation looks like it's going to be tricky.) I did get the code to extract version numbers and dates written, though, and then for another project (automatically generating man pages from scripts with embedded POD when installing them into our site-wide software installation) I needed that same code. So I wrote that program in Python and tested it and it works fine.

The lack of a way to safely execute a program without going through the shell is really bothering me. It was also the source of one of the three bugs in the first pass at my first program; I passed a multiword string to pod2man and forgot to protect it from the shell. What I'm currently doing is still fragile in the presence of single quotes in the string, which is another reason why I much prefer Perl's safe system() function. I feel like I must be missing something; something that fundamental couldn't possibly fail to be present in a scripting language.

A second bug in that program highlights another significant difference from Perl that I'm finding a little strange to deal with, namely the lack of equivalence between numbers and strings. My program had a dictionary of section titles, keyed by the section numbers, and I was using the plain number as the dictionary key. When I tried to look up a title in the dictionary, however, I used as the key a string taken from the end of the output filename, and 1 didn't match "1". It took me a while to track that down. (Admittedly, the problem was really laziness on my part; given the existence of such section numbers as "1m" and "3f", I should have used strings as the dictionary keys in the first place.)

The third bug, for the record, was attempting to use a Perl-like construct to read a file (while line = file.readline():). I see that Python 2.1 has the solution I really want in the form of xreadlines, but in the meantime that was easy enough to recode into a test and a break in the middle of the loop.
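
A small, hypothetical illustration of both pitfalls (not the original program):

# The dictionary-key mismatch: 1 and "1" are different keys.
titles = {1: "User Commands", 3: "Library Functions"}
section = "1"                      # taken from the end of a filename, so it is a string
print(titles.get(section))         # None -- "1" does not match the integer key 1
print(titles.get(int(section)))    # "User Commands"

# Keying by strings in the first place also copes with sections like "1m" and "3f".
titles = {"1": "User Commands", "3": "Library Functions", "3f": "Fortran Library"}
print(titles.get(section))         # "User Commands"

# The file-reading loop that replaces the Perl-ish "while line = f.readline():"
f = open(__file__)                 # any text file will do
while True:
    line = f.readline()
    if not line:
        break
    # ... process line ...
f.close()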

The lack of a standard documentation format like Perl's POD is bothering me and I'm not sure what to do about it. I want to put the documentation (preferably in POD, but I'm willing to learn something else that's reasonably simple) into the same file as the script so that it gets updated when the script does and doesn't get lost in the directory. This apparently is just an unsolved problem, unless I'm missing some great link to an embedded documentation technique (and I quite possibly am). Current best idea is to put a long triple-quoted string at the end of my script containing POD. Ugh.

I took a brief look at the standard getopt library (although I didn't end up using it), and was a little disappointed; one of the features that I really liked about Perl's Getopt::Long was its ability to just stuff either the arguments to options or boolean values into variables directly, without needing something like the long case statement that's a standard feature of main() in many C programs. Looks like Python's getopt is much closer to C's, and requires something quite a bit like that case statement.
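
For comparison, a minimal sketch of the getopt pattern being described (the option names are invented for illustration):

import getopt
import sys

opts, args = getopt.getopt(sys.argv[1:], "hv", ["help", "verbose", "output="])

verbose = False
output = None
for opt, value in opts:            # the case-statement-like dispatch mentioned above
    if opt in ("-h", "--help"):
        print("usage: prog [-h] [-v] [--output FILE]")
        sys.exit(0)
    elif opt in ("-v", "--verbose"):
        verbose = True
    elif opt == "--output":
        output = value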

Oh, and while the documentation is still excellent, I've started noticing a gap in it when it comes to the core language (not the standard library; the documentation there is great). The language reference manual is an excellent reference manual, complete with clear syntax descriptions, but is a little much if one just wants to figure out how to do something. I wasn't sure of the syntax of the while statement, and the language reference was a little heavier than was helpful. I find myself returning to the tutorial to find things like this, and it has about the right level of explanation, but the problem with that is that the tutorial is laid out as a tutorial and isn't as easy to use as a reference. (For example, the while statement isn't listed in the table of contents, because it was introduced in an earlier section with a more general title.)

I need to get the info pages installed on my desktop machine so that I can look things up in the index easily; right now, I'm still using the documentation on the web.

2001-11-13

I've unfortunately not had very much time to work on this, as one can tell from the date.

Aahz pointed out a way to execute a program without going through the shell, namely os.spawnv(). That works, although the documentation is extremely poor. (Even in Python 2.1, it refers me to the Visual C++ Runtime Library documentation for information on what spawnv does, which is of course absurd.) At least the magic constants that it needs are relatively intuitive. Unfortunately, spawnv doesn't search the user's PATH for a command, and there's nothing like spawnvp. Sigh.

There's really no excuse for this being quite this hard. Executing a command without going through the shell is an extremely basic function that should be easily available in any scripting language without jumping through these sorts of hoops.

But this at least gave me a bit of experience in writing some more Python (a function to search the PATH to find a command), and the syntax is still very nice and convenient. I'm bouncing all over the tutorial and library reference to remember how to do things, but usually my first guesses are right.
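
A rough sketch of that kind of helper, assuming os.spawnv() really is the only shell-free option at hand (today the subprocess module would be the usual answer); this is not Russ's original function:

import os

def find_command(command):
    # Search each PATH directory for an executable file with the given name.
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, command)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

path = find_command("pod2man")
if path:
    # spawnv wants the full path plus the argv list (argv[0] included); the
    # multiword --center value needs no shell quoting because no shell is involved.
    status = os.spawnv(os.P_WAIT, path,
                       ["pod2man", "--center", "My Site Tools", "script.pod"])
    print("pod2man exited with status %d" % status)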

I see that Debian doesn't have the info pages, only the HTML documentation. That's rather annoying, but workable. I now have the HTML documentation for Python 2.1 on local disk on my laptop.

2002-07-20

I've now written a couple of real Python programs (in addition to the simple little thing to generate man pages by running pod2man). You can find them (cvs2xhtml and cl2xhtml) with my web tools. They're not particularly pretty, but they work, and I now have some more experience writing simple procedural Python code. I still haven't done anything interesting with objects. Comments on the code are welcome. Don't expect too much.

There are a few other documentation methods for Python, but they seem primarily aimed at documenting modules and objects rather than documenting scripts. Pydoc in particular looks like it would be nice for API documentation but doesn't really do anything for end-user program documentation. Accordingly, I've given up for the time being on finding a more "native" approach and am just documenting my Python programs the way that I document most things, by writing embedded POD. I've yet to find a better documentation method; everything else seems to either be far too complicated and author-unfriendly to really write directly in (like DocBook) or can't generate Unix man pages, which I consider to be a requirement.

The Python documentation remains excellent, if scattered. I've sometimes spent a lot of time searching through the documentation to find the right module to do something, and questions of basic syntax are fairly hard to resolve (the tutorial is readable but not organized as a reference, and the language reference is too dense to provide a quick answer).

2004-03-03

My first major Python application is complete and working (although I'm not yet using it as much as I want to be using it). That's Tasker, a web-based to-do list manager written as a Python CGI script that calls a Python module.

I've now dealt with the Python module building tools, which are quite nice (nicer in some ways than Perl's Makefile.PL system with some more built-in functionality, although less mature in a few ways). Python's handling of the local module library is clearly less mature than Perl, and Debian's Python packages don't handle locally installed modules nearly as well as they should, but overall it was a rather positive experience. Built-in support for generating RPMs is very interesting, since eventually I'd like to provide .deb and RPM packages for all my software.

I played with some OO design for this application and ended up being fairly happy with how Python handled things. I'm not very happy with my object layout, but that's my problem, not Python's. The object system definitely feels far smoother and more comfortable to me than Perl's, although I can still write OO code faster in Perl because I'm more familiar with it. There's none of the $self hash nonsense for instance variables, though, which is quite nice.

The CGI modules for Python, and in particular the cgitb module for displaying exceptions nicely in the browser while debugging CGI applications, are absolutely excellent. I was highly impressed, and other than some confusion about the best way to retrieve POST data that was resolved after reading the documentation more closely, I found those modules very easy to work with. The cgitb module is a beautiful, beautiful thing and by itself makes me want to use Python for all future CGI programming.

I still get caught all the time by the lack of interchangability of strings and numbers and I feel like I'm casting things all the time. I appreciate some of the benefits of stronger typing, but this one seems to get in my way more often than it helps.

I'm also still really annoyed at the lack of good documentation for the parts of the language that aren't considered part of the library. If I want documentation on how print works, I have only the tutorial and the detailed language standard, the former of which is not organized for reference and the latter of which is far too hard to understand. This is a gaping hole in the documentation that I really wish someone would fix. Thankfully, it only affects a small handful of things, like control flow constructs and the print statement, so I don't hit this very often, but whenever I do it's extremely frustrating.

I've given up on documentation for scripts and am just including a large POD section at the end of the script, since this seems to be the only option that will generate good man pages and good web pages. I'm not sure what to do about documentation for the module; there seem to be a variety of different proposals but nothing that I can really just use.

Oh, and one last point on documentation: the distutils documentation needs some work. Thankfully I found some really good additional documentation on the PyPI web site that explained a lot more about how to write a setup.py script.

2010-03-25

Six years later, I still find Python an interesting language, but I never got sufficiently absorbed by it for it to be part of my standard toolkit.

I've subsequently gotten some additional experience with extending Python through incorporating an extension written by Thomas Kula into the remctl distribution. The C interface is relatively nice and more comfortable than Perl, particularly since it doesn't involve a pseudo-C that is run through a preprocessor. It's a bit more comfortable to read and write.

Python's installation facilities, on the other hand, are poor. The distutils equivalent of Perl's ExtUtils::MakeMaker is considerably worse, despite ExtUtils::MakeMaker being old and crufty and strange. (I haven't compared it with Module::Build.) The interface is vaguely similar, but I had to apply all sorts of hacks to get the Python extension to build properly inside a Debian packaging framework, and integrating it with a larger package requires doing Autoconf substitution on a ton of different files. It was somewhat easier to avoid embedding RPATH into the module, but I'd still much rather work with Perl's facilities.

Similarly, while the test suite code has some interesting features (I'm using the core unittest framework), it's clearly inferior to Perl's Test::More support library and TAP protocol. I'm, of course, a known fan of Perl's TAP testing protocol (I even wrote my own implementation in C), but that's because it's well-designed, full-featured, and very useful. The Python unittest framework, by comparison, is awkward to use, has significantly inferior reporting capabilities, makes it harder to understand what test failed and isolate the failure, and requires a lot of digging around to understand how it works. I do like the use of decorators to handle skipping tests, and there are some interesting OO ideas around test setup and teardown, but the whole thing is more awkward than it should be.

I'm not entirely sure why Python has never caught on with me as a language to use on a regular basis. Certainly, one of the things that always bugs me is the lack of good integrated documentation support like POD (although apparently reStructured Text is slowly becoming that), but that's not the whole story. I suspect a lot is just that I'm very familiar with Perl and with its standard and supporting library, and it takes me longer to do anything in Python. But the language just feels slightly more awkward, and I never have gotten comfortable with the way that it uses exceptions for all error reporting.

I may get lured back into it again at some point, though, since Python 3.0 seems to have some very interesting features and it remains popular with people who know lots of programming languages. I want to give it another serious look with a few more test projects at some point in the future.

[Aug 18, 2009] THE NEXT SNAKE BY RAINER GRIMM

What do Python 2.x programmers need to know about Python 3?

With the latest major Python release, creator Guido van Rossum saw the opportunity to tidy up his famous scripting language. What is different about Python 3.0? In this article, I offer some highlights for Python programmers who are thinking about making the switch to 3.x.

Read full article as PDF

[Jun 20, 2009] A Python Client/Server Tutorial by Phillip Watts

June 16, 2009

There can be many reasons why you might need a client/server application. For a simple example, purchasing for a small retail chain might need up-to-the-minute stock levels on a central server. The point-of-sale application in the stores would then need to post inventory transactions to the central server in real-time.

This application can easily be coded in Python with performance levels of thousands of transactions per second on a desktop PC. Simple sample programs for the server and client sides are listed below, with discussions following.
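
The article's own listings are not reproduced in this excerpt; as a rough stand-in, here is a minimal sketch of the pattern it describes using the standard socket module (the host, port, and message format are invented for illustration):

import socket

def run_server(host="127.0.0.1", port=9900):
    # Central server: accept one transaction per connection and acknowledge it.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(5)
    while True:
        conn, addr = server.accept()
        data = conn.recv(1024)          # e.g. b"SKU123,-2" -- an inventory transaction
        conn.sendall(b"OK")
        conn.close()

def post_transaction(message, host="127.0.0.1", port=9900):
    # Point-of-sale client: post one transaction and return the server's reply.
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect((host, port))
    client.sendall(message)
    reply = client.recv(1024)
    client.close()
    return reply

# Example: post_transaction(b"SKU123,-2") while run_server() is running elsewhere.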

[Jul 8, 2008] Python Call Graph 0.5.1 by Gerald Kaszuba

About: pycallgraph is a Python library that creates call graphs for Python programs.

Changes: The "pycg" command line tool was renamed to "pycallgraph" due to naming conflicts with other packages.

[Jun 23, 2008] freshmeat.net Project details for cfv

About:
cfv is a utility to both test and create .sfv (Simple File Verify), .csv, .crc, .md5(sfv style), md5sum, BSD md5, sha1sum, and .torrent checksum verification files. It also includes test-only support for .par and .par2 files. These files are commonly used to ensure the correct retrieval or storage of data.

Release focus: Major bugfixes

Changes:
Help output is printed to stdout under non-error conditions. A mmap file descriptor leak in Python 2.4.2 was worked around. The different module layout of BitTorrent 5.x is supported. A "struct integer overflow masking is deprecated" warning was fixed. The --private_torrent flag was added. A bug was worked around in 64-bit Python version 2.5 and later which causes checksums of files larger than 4GB to be incorrectly calculated when using mmap.

[Jun 20, 2008] BitRock Download Web Stacks

BitRock Web Stacks provide you with the easiest way to install and run the LAMP platform in a variety of Linux distributions. BitRock Web Stacks are free to download and use under the terms of the Apache License 2.0. To learn more about our licensing policies, click here.

You can find up-to-date WAMP, LAMP and MAMP stacks at the BitNami open source website. In addition to those, you will find freely available application stacks for popular open source software such as Joomla!, Drupal, Mediawiki and Roller. Just like BitRock Web Stacks, they include everything you need to run the software and come packaged in a fast, easy to use installer.

BitRock Web Stacks contain several open source tools and libraries. Please be sure that you read and comply with all of the applicable licenses. If you are a MySQL Network subscriber (or would like to purchase a subscription) and want to use a version of LAMPStack that contains the MySQL Certified binaries, please send an email to [email protected].

For further information, including supported platforms, component versions, documentation, and support, please visit our solutions section.

[Mar 12, 2008] Terminator - Multiple GNOME terminals in one window

Rewrite of screen in Python?

This is a project to produce an efficient way of filling a large area of screen space with terminals. This is done by splitting the window into a resizeable grid of terminals. As such, you can produce very flexible arrangements of terminals for different tasks.

Read me

Terminator 0.8.1
by Chris Jones <[email protected]>

This is a little python script to give me lots of terminals in a single window, saving me valuable laptop screen space otherwise wasted on window decorations and not quite being able to fill the screen with terminals.

Right now it will open a single window with one terminal and it will (to some degree) mirror the settings of your default gnome-terminal profile in gconf. Eventually this will be extended and improved to offer profile selection per-terminal, configuration thereof and the ability to alter the number of terminals and save meta-profiles.

You can create more terminals by right clicking on one and choosing to split it vertically or horizontally. You can get rid of a terminal by right clicking on it and choosing Close. ctrl-shift-o and ctrl-shift-e will also effect the splitting.

ctrl-shift-n and ctrl-shift-p will shift focus to the next/previous terminal respectively; ctrl-shift-w will close the current terminal and ctrl-shift-q the current window.

Ask questions at: https://answers.launchpad.net/terminator/
Please report all bugs to https://bugs.launchpad.net/terminator/+filebug

It's quite shamelessly based on code in the vte-demo.py from the vte widget package, and on the gedit terminal plugin (which was fantastically useful).

vte-demo.py is not my code and is copyright its original author. While it does not contain any specific licensing information in it, the VTE package appears to be licenced under LGPL v2.

the gedit terminal plugin is part of the gedit-plugins package, which is licenced under GPL v2 or later.

I am thus licensing Terminator as GPL v2 only.

Cristian Grada provided the icon under the same licence.

Python and the Programmer
A Conversation with Bruce Eckel, Part I
by Bill Venners
Jun 2, 2003
Summary
Bruce Eckel talks with Bill Venners about why he feels Python is "about him," how minimizing clutter improves productivity, and the relationship between backwards compatibility and programmer pain.

Bruce Eckel wrote the best-selling books Thinking in C++ and Thinking in Java, but for the past several years he's preferred to think in Python. Two years ago, Eckel gave a keynote address at the 9th International Python Conference entitled "Why I love Python." He presented ten reasons he loves programming in Python in "top ten list" style, starting with ten and ending with one.

In this interview, which is being published in weekly installments, I ask Bruce Eckel about each of these ten points. In this installment, Bruce Eckel explains why he feels Python is "about him," how minimizing clutter improves productivity, and the relationship between backwards compatibility and programmer pain.

Bill Venners: In the introduction to your "Why I Love Python" keynote, you said what you love the most is "Python is about you." How is Python about you?

Bruce Eckel: With every other language I've had to deal with, it's always felt like the designers were saying, "Yes, we're trying to make your life easier with this language, but these other things are more important." With Python, it has always felt like the designers were saying, "We're trying to make your life easier, and that's it. Making your life easier is the thing that we're not compromising on."

For example, the designers of C++ certainly attempted to make the programmer's life easier, but always made compromises for performance and backwards compatibility. If you ever had a complaint about the way C++ worked, the answer was performance and backwards compatibility.

Bill Venners: What compromises do you see in Java? James Gosling did try to make programmers more productive by eliminating memory bugs.

Bruce Eckel: Sure. I also think that Java's consistency of error handling helped programmer productivity. C++ introduced exception handling, but that was just one of many ways to handle errors in C++. At one time, I thought that Java's checked exceptions were helpful, but I've modified my view on that. (See Resources.)

It seems the compromise in Java is marketing. They had to rush Java out to market. If they had taken a little more time and implemented design by contract, or even just assertions, or any number of other features, it would have been better for the programmer. If they had done design and code reviews, they would have found all sorts of silliness. And I suppose the way Java is marketed is probably what rubs me the wrong way about it. We can say, "Oh, but we don't like this feature," and the answer is, "Yes, but, marketing dictates that it be this way."

Maybe the compromises in C++ were for marketing reasons too. Although choosing to be efficient and backwards compatible with C was done to sell C++ to techies, it was still to sell it to somebody.

I feel Python was designed for the person who is actually doing the programming, to maximize their productivity. And that just makes me feel warm and fuzzy all over. I feel nobody is going to be telling me, "Oh yeah, you have to jump through all these hoops for one reason or another." When you have the experience of really being able to be as productive as possible, then you start to get pissed off at other languages. You think, "Gee, I've been wasting my time with these other languages."

Number 10: Reduced Clutter

Bill Venners: In your keynote, you gave ten reasons you love Python. Number ten was reduced clutter. What did you mean by reduced clutter?

Bruce Eckel: They say you can hold seven plus or minus two pieces of information in your mind. I can't remember how to open files in Java. I've written chapters on it. I've done it a bunch of times, but it's too many steps. And when I actually analyze it, I realize these are just silly design decisions that they made. Even if they insisted on using the Decorator pattern in java.io, they should have had a convenience constructor for opening files simply. Because we open files all the time, but nobody can remember how. It is too much information to hold in your mind.

The other issue is the effect of an interruption. If you are really deep into doing something and you have an interruption, it's quite a number of minutes before you can get back into that deeply focused state. With programming, imagine you're flowing along. You're thinking, "I know this, and I know this, and I know this," and you are putting things together. And then all of a sudden you run into something like, "I have to open a file and read in the lines." All the clutter in the code you have to write to do that in Java can interrupt the flow of your work.

Another number that used to be bandied about is that programmers can produce an average of ten working lines of code per day. Say I open up a file and read in all the lines. In Java, I've probably already used up my ten working lines of code for that day. In Python, I can do it in one line. I can say, "for line in file('filename').readlines():," and then I'm ready to process the lines. And I can remember that one liner off the top of my head, so I can just really flow with that.

Python's minimal clutter also helps when I'm reading somebody else's code. I'm not tripping over verbose syntax and idioms. "Oh I see. Opening the file. Reading the lines." I can grok it. It's very similar to the design patterns in that you have a much denser form of communication. Also, because blocks are denoted by indentation in Python, indentation is uniform in Python programs. And indentation is meaningful to us as readers. So because we have consistent code formatting, I can read somebody else's code and I'm not constantly tripping over, "Oh, I see. They're putting their curly braces here or there." I don't have to think about that.

Number 9: Not Backwards Compatible in Exchange for Pain

Bill Venners: In your keynote, your ninth reason for loving Python was, "Not backwards compatible in exchange for pain." Could you speak a bit about that?

Bruce Eckel: That's primarily directed at C++. To some degree you could say it refers to Java because Java was derived primarily from C++. But C++ in particular was backwards compatible with C, and that justified lots of language issues. On one hand, that backwards compatibility was a great benefit, because C programmers could easily migrate to C++. It was a comfortable place for C programmers to go. But on the other hand, all the features that were compromised for backwards compatibility was the great drawback of C++.

Python isn't backwards compatible with anything, except itself. But even so, the Python designers have actually modified some fundamental things in order to fix the language in places they decided were broken. I've always heard from Sun that backwards compatibility is job one. And so even though stuff is broken in Java, they're not going to fix it, because they don't want to risk breaking code. Not breaking code always sounds good, but it also means we're going to be in pain as programmers.

One fundamental change they made in Python, for example, was "type class unification." In earlier versions, some of Python's primitive types were not first class objects with first class characteristics. Numbers, for example, were special cases like they are in Java. But that's been modified so now I can inherit from integer if I want to. Or I can inherit from the modified dictionary class. That couldn't be done before. After a while it began to be clear that it was a mistake, so they fixed it.
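
A minimal sketch of what type/class unification allows, with made-up subclasses of the built-in dict and int types (Python 2.2 or later):

class LoudDict(dict):                  # subclassing the built-in dictionary type
    def __setitem__(self, key, value):
        print "setting", key, "=", value
        dict.__setitem__(self, key, value)

class Inches(int):                     # subclassing the built-in integer type
    def as_cm(self):
        return self * 2.54

d = LoudDict()
d['spam'] = 1                          # prints: setting spam = 1
print Inches(10).as_cm()               # prints: 25.4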

Now in C++ or Java, they'd say, "Oh well, too bad." But in Python, they looked at two issues. One, they were not breaking anybody's existing world, because anyone could simply choose to not upgrade. I think that could be an attitude taken by Java as well. And two, it seemed relatively easy to fix the broken code, and the improvement seemed worth the code-fixing work. I find that attitude so refreshing, compared to the languages I'd used before where they said, "Oh, it's broken. We made a mistake, but you'll have to live with it. You'll have to live with our mistakes."


Resources

[Apr 3, 2007] Charming Python: Python elegance and warts, Part 1

Generators as not-quite-sequences

Over several versions, Python has hugely enhanced its "laziness." For several versions, we have had generators defined with the yield statement in a function body. But along the way we also got the itertools module to combine and create various types of iterators. We have the iter() built-in function to turn many sequence-like objects into iterators. With Python 2.4, we got generator expressions, and with 2.5 we will get enhanced generators that make writing coroutines easier. Moreover, more and more Python objects have become iterators or iterator-like; for example, what used to require the .xreadlines() method, or before that the xreadlines module, is now simply the default behavior when iterating over a file object returned by open().

Similarly, looping through a dict lazily used to require the .iterkeys() method; now it is just the default for key in dct behavior. Functions like xrange() are a bit "special" in being generator-like, but neither quite a real iterator (no .next() method), nor a realized list like range() returns. However, enumerate() returns a true generator, and usually does what you had earlier wanted xrange() for. And itertools.count() is another lazy call that does almost the same thing as xrange(), but as a full-fledged iterator.
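
A few of the laziness features mentioned above, side by side (Python 2.4 or later; 'somefile.txt' is just a placeholder):

import itertools

squares = (n * n for n in itertools.count())   # generator expression over a lazy counter
print squares.next(), squares.next()           # 0 1

dct = {'a': 1, 'b': 2}
for key in dct:                                # lazy by default; no .iterkeys() needed
    print key

for i, key in enumerate(dct):                  # enumerate() is a true generator
    print i, key

for line in open('somefile.txt'):              # files are their own (lazy) line iterators,
    print line,                                # replacing the old .xreadlines() idiom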

Python is strongly moving towards lazily constructing sequence-like objects; and overall this is an excellent direction. Lazy pseudo-sequences both save memory space and speed up operations (especially when dealing with very large sequence-like "things").

The problem is that Python still has a schizoaffective condition when it comes to deciding what the differences and similarities between "hard" sequences and iterators are. The troublesome part of this is that it really violates Python's idea of "duck typing": the ability to use a given object for a purpose just as long as it has the right behaviors, but not necessarily any inheritance or type restriction. The various things that are iterators or iterator-like sometimes act sequence-like, but other times do not; conversely, sequences often act iterator-like, but not always. Outside of those steeped in Python arcana, what does what is not obvious.

Divergences

The main point of similarity is that everything that is sequence- or iterator-like lets you loop over it, whether using a for loop, a list comprehension, or a generator comprehension. Past that, divergences occur. The most important of these differences is that sequences can be indexed, and directly sliced, while iterators cannot. In fact, indexing into a sequence is probably the most common thing you ever do with a sequence -- why on earth does it fall down so badly on iterators? For example:


Listing 9. Sequence-like and iterator-like things
>>> import itertools
>>> r = range(10)
>>> i = iter(r)
>>> x = xrange(10)
>>> g = itertools.takewhile(lambda n: n<10, itertools.count())
#...etc...

For all of these, you can use for n in thing. In fact, if you "concretize" any of them with list(thing), you wind up with exactly the same result. But if you wish to obtain a specific item -- or a slice of a few items -- you need to start caring about the exact type of thing. For example:


Listing 10. When indexing succeeds and fails
>>> r[4]
4
>>> i[4]
TypeError: unindexable object

With enough contortions, you can get an item for every type of sequence/iterator. One way is to loop until you get there. Another hackish combination might be something like:


Listing 11. Contortions to obtain an index
>>> thing, temp = itertools.tee(thing)
>>> zip(temp, '.'*5)[-1][0]
4

The pre-call to itertools.tee() preserves the original iterator. For a slice, you might use the itertools.islice() function, wrapped up in contortions.


Listing 12. Contortions to obtain a slice
>>> r[4:9:2]
[4, 6, 8]
>>> list(itertools.islice(r,4,9,2))  # works for iterators
[4, 6, 8]

A class wrapper

You might combine these techniques into a class wrapper for convenience, using some magic methods:


Listing 13. Making iterators indexable
>>> class Indexable(object):
...     def __init__(self, it):
...         self.it = it
...     def __getitem__(self, x):
...         self.it, temp = itertools.tee(self.it)
...         if type(x) is slice:
...             # slice the spare tee copy so self.it is left intact for later lookups
...             return list(itertools.islice(temp, x.start, x.stop, x.step))
...         else:
...             return zip(temp, range(x+1))[-1][0]
...     def __iter__(self):
...         self.it, temp = itertools.tee(self.it)
...         return temp
...
>>> integers = Indexable(itertools.count())
>>> integers[4]
4
>>> integers[4:9:2]
[4, 6, 8]

So with some effort, you can coax an object to behave like both a sequence and an iterator. But this much effort should really not be necessary; indexing and slicing should "just work" whether a concrete sequence or an iterator is involved.

Notice that the Indexable class wrapper is still not as flexible as might be desirable. The main problem is that we create a new copy of the iterator every time. A better approach would be to cache the head of the sequence when we slice it, then use that cached head for future access of elements already examined. Of course, there is a trade-off between memory used and the speed penalty of running through the iterator. Nonetheless, the best thing would be if Python itself would do all of this "behind the scenes" -- the behavior might be fine-tuned somehow by "power users," but average programmers should not have to think about any of this.
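
A rough sketch of that caching idea, in the same Python 2 idiom as the listings above; CachingIndexable is a made-up name, and it handles only non-negative indices and bounded slices:

import itertools

class CachingIndexable(object):
    """Like Indexable, but items already pulled from the iterator are cached,
    so repeated indexing does not copy the whole head again."""
    def __init__(self, it):
        self.it = iter(it)
        self.cache = []                 # everything consumed from the iterator so far
    def _fill(self, n):
        while len(self.cache) < n:      # pull items until the cache holds n of them
            self.cache.append(self.it.next())
    def __getitem__(self, x):
        if isinstance(x, slice):
            self._fill(x.stop)          # assumes a bounded, non-negative slice
            return self.cache[x]
        self._fill(x + 1)
        return self.cache[x]
    def __iter__(self):
        for item in self.cache:         # replay the cached head first...
            yield item
        for item in self.it:            # ...then keep consuming (and caching) the tail
            self.cache.append(item)
            yield item

integers = CachingIndexable(itertools.count())
print integers[4]                       # 4
print integers[4:9:2]                   # [4, 6, 8]
print integers[4]                       # still 4 -- served from the cache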

In the next installment in this series, I'll discuss accessing methods using attribute syntax.

[Oct 26, 2006] ONLamp.com -- What's New in Python 2.5

It's clear that Python is under pressure from Ruby :-)

It's hard to believe Python is more than 15 years old already. While that may seem old for a programming language, in the case of Python it means the language is mature. In spite of its age, the newest versions of Python are powerful, providing everything you would expect from a modern programming language.

This article provides a rundown of the new and important features of Python 2.5. I assume that you're familiar with Python and aren't looking for an introductory tutorial, although in some cases I do introduce some of the material, such as generators.

[Sep 30, 2006] Python 2.5 Release We are pleased to announce the release of Python 2.5 (FINAL), the final, production release of Python 2.5, on September 19th, 2006.

with open('/etc/passwd', 'r') as f:
    for line in f:
        print line
        # ... more processing code ...
Why stray from the mainstream C style is unclear to me. When developing computer language syntax, natural language imitation should not be the priority; also, being different for the sake of being different is so very early '90s:
cout << ( a==b ? "first option" : "second option" )

[Sept 20, 2006] Python 101 cheat sheet

[01 Feb 2000] Python columnist Evelyn Mitchell brings you a quick reference and learning tools for newbies who want to get to know the language. Print it, keep it close at hand, and get down to programming!

[Jul 27, 2006] eWeek/Microsoft Ships Python on .Net Darryl K. Taft

Microsoft has shipped the release candidate for IronPython 1.0 on its CodePlex community source site.

In a July 25 blog post, S. "Soma" Somasegar, corporate vice president of Microsoft's developer division, praised the team for getting to a release candidate for a dynamic language that runs on the Microsoft CLI (Common Language Infrastructure). Microsoft designed the CLI to support a variety of programming languages. Indeed, "one of the great features of the .Net framework is the Common Language Infrastructure," Somasegar said.

"IronPython is a project that implements the dynamic object-oriented Python language on top of the CLI," Somasegar said. IronPython is both well-integrated with the .Net Framework and is a true implementation of the Python language, he said.

And ".Net integration means that this rich programming framework is available to Python developers and that they can interoperate with other .Net languages and tools," Somasegar said. "All of Python's dynamic features like an interactive interpreter, dynamically modifying objects and even metaclasses are available. IronPython also leverages the CLI to achieve good performance, running up to 1.5 times faster than the standard C-based Python implementation on the standard Pystone benchmark."

Click here to read an eWEEK interview with Python creator Guido van Rossum.

Moreover, the download of the release candidate for IronPython 1.0 "includes a tutorial which gives .Net programmers a great way to get started with Python and Python programmers a great way to get started with .Net," Somasegar said.

Somasegar said he finds it "exciting to see that the Visual Studio SDK [software development kit] team has used the IronPython project as a chance to show language developers how they can build support for their language into Visual Studio. They have created a sample, with source, that shows some of the basics required for integrating into the IDE including the project system, debugger, interactive console, IntelliSense and even the Windows forms designer. "

IronPython is the creation of Jim Hugunin, a developer on the Microsoft CLR (Common Language Runtime) team. Hugunin joined Microsoft in 2004.

In a statement written in July 2004, Hugunin said: "My plan was to do a little work and then write a short pithy article called, 'Why .Net is a terrible platform for dynamic languages.' My plans changed when I found the CLR to be an excellent target for the highly dynamic Python language. Since then I've spent much of my spare time working on the development of IronPython."

However, Hugunin said he grew frustrated with the slow pace of progress he could make by working on the project only in his spare time, so he decided to join Microsoft.

IronPython is governed by Microsoft's Shared Source license.

[Feb 20, 2006] freshmeat.net Project details for Meld

Meld is a GNOME 2 visual diff and merge tool. It integrates especially well with CVS. The diff viewer lets you edit files in place (diffs update dynamically), and a middle column shows detailed changes and allows merges. The margins show location of changes for easy browsing, and it also features a tabbed interface that allows you to open many diffs at once.

Information about CWM - TimBL's Closed World Machine

CWM is a popular Semantic Web program that can do the following tasks:-

CWM was written in Python from 2000-10 onwards by Tim Berners-Lee and Dan Connolly of the W3C.

This resource is provided so that people can use CWM, find out what it does (documentation used to be sparse), and perhaps even contribute to its development.

What's new in Python 2.4

New or upgraded built-ins

Wing IDE for Python: a Python IDE that includes a source browser and editor. The editor supports folding.

Reference Manual, Wing IDE Version 1.1.4: a Python IDE that includes a source browser and editor.

What's New in Python 2.2 -- generators are a very interesting feature of Python 2.2; they are essentially co-routines.

Generators are another new feature, one that interacts with the introduction of iterators.

You're doubtless familiar with how function calls work in Python or C. When you call a function, it gets a private namespace where its local variables are created. When the function reaches a return statement, the local variables are destroyed and the resulting value is returned to the caller. A later call to the same function will get a fresh new set of local variables. But, what if the local variables weren't thrown away on exiting a function? What if you could later resume the function where it left off? This is what generators provide; they can be thought of as resumable functions.

Here's the simplest example of a generator function:

def generate_ints(N):
    for i in range(N):
        yield i

A new keyword, yield, was introduced for generators. Any function containing a yield statement is a generator function; this is detected by Python's bytecode compiler which compiles the function specially as a result. Because a new keyword was introduced, generators must be explicitly enabled in a module by including a from __future__ import generators statement near the top of the module's source code. In Python 2.3 this statement will become unnecessary.

When you call a generator function, it doesn't return a single value; instead it returns a generator object that supports the iterator protocol. On executing the yield statement, the generator outputs the value of i, similar to a return statement. The big difference between yield and a return statement is that on reaching a yield the generator's state of execution is suspended and local variables are preserved. On the next call to the generator's .next() method, the function will resume executing immediately after the yield statement. (For complicated reasons, the yield statement isn't allowed inside the try block of a try...finally statement; read PEP 255 for a full explanation of the interaction between yield and exceptions.)

Here's a sample usage of the generate_ints generator:

>>> gen = generate_ints(3)
>>> gen
<generator object at 0x8117f90>
>>> gen.next()
0
>>> gen.next()
1
>>> gen.next()
2
>>> gen.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 2, in generate_ints
StopIteration

You could equally write for i in generate_ints(5), or a,b,c = generate_ints(3).

Inside a generator function, the return statement can only be used without a value, and signals the end of the procession of values; afterwards the generator cannot return any further values. return with a value, such as return 5, is a syntax error inside a generator function. The end of the generator's results can also be indicated by raising StopIteration manually, or by just letting the flow of execution fall off the bottom of the function.
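
A small sketch of the bare-return rule (first_n is a made-up helper; the __future__ line matters only on Python 2.2):

from __future__ import generators      # only needed in Python 2.2, as noted above

def first_n(it, n):
    """Yield at most n items from an iterable, then stop with a bare return."""
    count = 0
    for item in it:
        if count == n:
            return                     # ends the generator; the caller sees StopIteration
        yield item
        count += 1

print list(first_n(xrange(100), 3))    # [0, 1, 2]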

You could achieve the effect of generators manually by writing your own class and storing all the local variables of the generator as instance variables. For example, returning a list of integers could be done by setting self.count to 0, and having the next() method increment self.count and return it. However, for a moderately complicated generator, writing a corresponding class would be much messier. Lib/test/test_generators.py contains a number of more interesting examples. The simplest one implements an in-order traversal of a tree using generators recursively.

# A recursive generator that generates Tree leaves in in-order.
def inorder(t):
    if t:
        for x in inorder(t.left):
            yield x
        yield t.label
        for x in inorder(t.right):
            yield x
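
Going back to the manual-class remark above, here is roughly what generate_ints() looks like when written by hand, with the generator's local state kept in instance variables:

class GenerateInts(object):
    """Hand-written equivalent of the generate_ints(N) generator above."""
    def __init__(self, N):
        self.count = 0
        self.N = N
    def __iter__(self):
        return self
    def next(self):                 # Python 2 iterator protocol
        if self.count >= self.N:
            raise StopIteration     # no more values
        value = self.count
        self.count += 1
        return value

print list(GenerateInts(3))         # [0, 1, 2]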

Two other examples in Lib/test/test_generators.py produce solutions for the N-Queens problem (placing queens on an NxN chess board so that no queen threatens another) and the Knight's Tour (a route that takes a knight to every square of an NxN chessboard without visiting any square twice).

The idea of generators comes from other programming languages, especially Icon (http://www.cs.arizona.edu/icon/), where the idea of generators is central. In Icon, every expression and function call behaves like a generator. One example from ``An Overview of the Icon Programming Language'' at http://www.cs.arizona.edu/icon/docs/ipd266.htm gives an idea of what this looks like:

sentence := "Store it in the neighboring harbor"
if (i := find("or", sentence)) > 5 then write(i)

In Icon the find() function returns the indexes at which the substring ``or'' is found: 3, 23, 33. In the if statement, i is first assigned a value of 3, but 3 is less than 5, so the comparison fails, and Icon retries it with the second value of 23. 23 is greater than 5, so the comparison now succeeds, and the code prints the value 23 to the screen.

Python doesn't go nearly as far as Icon in adopting generators as a central concept. Generators are considered a new part of the core Python language, but learning or using them isn't compulsory; if they don't solve any problems that you have, feel free to ignore them. One novel feature of Python's interface as compared to Icon's is that a generator's state is represented as a concrete object (the iterator) that can be passed around to other functions or stored in a data structure.
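
A rough Python rendering of the Icon example, using a small helper generator (find_all is made up here; note that Python counts from 0 where Icon counts from 1):

def find_all(sub, s):
    """Generate every index at which sub occurs in s."""
    i = s.find(sub)
    while i != -1:
        yield i
        i = s.find(sub, i + 1)

sentence = "Store it in the neighboring harbor"
for i in find_all("or", sentence):
    if i > 5:
        print i                 # prints 22 (Icon's 23, counting from 1)
        break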

See Also:

PEP 255, Simple Generators
Written by Neil Schemenauer, Tim Peters, Magnus Lie Hetland. Implemented mostly by Neil Schemenauer and Tim Peters, with other fixes from the Python Labs crew.

Dive Into Python Dive Into Python is a free Python book for experienced programmers.

You can read the book online, or download it in a variety of formats. It is also available in multiple languages.

Seventh International Python Conference Papers

Applications I: The Internet

Optimizing Python

Extending and Compiling

Applications II: Science and Simulation

Slashdot: What Makes a Powerful Programming Language

by NeuroMorphus on Monday February 11, @07:26PM (#2991161)
(User #463324 Info | http://slashdot.org/)
You also have my Python vote. Python is not only a powerful language on its own, but it's also a good glue language, pulling in C++ to speed up particular modules, or using Jython for a Java perspective (IMHO, I like programming in Jython better than Java itself :)

Python is also good for server-side scripting as well as operating system scripts. Not to mention that Python has lots of support, including Twisted Matrix [twistedmatrix.com], which is an event-based framework for internet applications that includes a web server, a telnet server, a multiplayer RPG engine, a generic client and server for remote object access, and APIs for creating new protocols and services.

If you want GUI design, you have wxWindows, wxPython for wxGTK, Tk, PyQt, PyGTK, etc.

I would tell your boss to go with Python; you'll end up using it down the road anyway ;)

Consider Python carefully (Score:3, Informative)
by pclminion on Monday February 11, @09:17PM (#2991850)
(User #145572 Info)
I can definitely speak for the elegance of Python; I've used it to build some pretty large-scale AI projects, among about a zillion other things.

If you're looking for the weirder OO features like operator overloading, you'll find them in Python, but the calisthenics you have to go through to do it might make you think twice about using it.

The only real drawback to Python is the execution speed. One of the AI projects I previously mentioned worked great (and I completed it probably 10 times faster than I would have had I been using C/C++), but it ran very very slowly. The cause of the problem is clear -- no static typing, reference-count garbage collection, a stack-based VM (I ain't knockin' it! It's just hard to optimize)...

If you are planning to do anything compute-intensive, maybe Python is not the right choice. It's always possible to break out to C/C++ with the Python API (to do your compute-intensive tasks), but there are drawbacks: over time, the Python C-API will probably drift, and you'll have to keep tweaking your native code to keep it working. Also, the inner workings of Python, especially refcounts, can be boggling and the source of bugs and memory leaks that can be fantastically hard to track down.

If you consider Python, then great. Just keep these points in mind.

Python for you and "Clean C" for Cliff (Score:1)
by Nice2Cats on Tuesday February 12, @04:56AM (#2993069)
(User #557310 Info | http://slashdot.org/)
It does sound like your boss was trying to tell you to use Python, unless it needs to be blindingly fast (as in "operating system" or "3d shooter"). I'd like to second that recommendation and the arguments already posted.

To respond to Cliff's question: The one thing that every new language should have is that "significant whitespace" look-ma-no-braces syntax used by Python - or at least the option of not having to spend your life typing semicolons.

In fact, you could probably retrofit the common C and C++ compilers to accept significant whitespace. This would mean the Linux kernel people would be reading:

static int proc_sel(struct task_struct *p, int which, int who):
    if(p->pid):
        switch (which):
            case PRIO_PROCESS:
                if (!who && p == current):
                    return 1
                return(p->pid == who)
            case PRIO_PGRP:
                if (!who):
                    who = current->pgrp
                return(p->pgrp == who)
            case PRIO_USER:
                if (!who):
                    who = current->uid
                return(p->uid == who)
    return 0

(Modified from /linux/kernel/sys.c (C) Linus Torvalds)

Of course, there is a problem of finding a new name for this sort of C without violating somebody's copyright... is "Clean C" still available?

[Dec 20, 2001] Semi-coroutines in Python 2.2

From: Steven Majewski ([email protected])
Subject: Re: (semi) stackless python
Newsgroups: comp.lang.python

Date: 2001-12-20 13:01:46 PST

 The new generators in Python2.2 implement semi-coroutines, not
full coroutines:  the limitation is that they always return to
their caller -- they can't take an arbitrary continuation as
a return target.

 So you can't do everything you can do in Stackless, or at least,
you can't do it the same way. I'm not sure what the limitations
are yet, but you might be surprised with what you CAN do.
 If all the generator objects yield back to the same 'driver'
procedure, then you can do a sort of cooperative multithreading.
In that case, you're executing the generators for their side
effects -- the value returned by yield may be unimportant except
perhaps as a status code.

[ See also Tim Peters' post on the performance advantages of
  generators -- you only parse args once for many generator
  calls, and you keep and reuse the same stack frame. So I
  believe you get some of the same benefits of Stackless
  microthreads. ]

Maybe I can find the time to post an example of what I mean.
In the mean time, some previous posts on generators might give
you some ideas.

<http://groups.google.com/groups?q=generators+group:comp.lang.python+author:Majewski&hl=en&scoring=d&as_drrb=b&as_mind=12&as_minm=1&as_miny=2001&as_maxd=20&as_maxm=12&as_maxy=2001&rnum=1&selm=mailman.1008090197.6318.python-list%40python.org>

<http://groups.google.com/groups?hl=en&threadm=mailman.996870501.28562.python-list%40python.org&rnum=6&prev=/groups%3Fas_q%3Dgenerators%26as_ugroup%3Dcomp.lang.python%26as_uauthors%3DMajewski%26as_drrb%3Db%26as_mind%3D12%26as_minm%3D1%26as_miny%3D2001%26as_maxd%3D20%26as_maxm%3D12%26as_maxy%3D2001%26num%3D100%26as_scoring%3Dd%26hl%3Den>

The ungrammatical title of that last thread above:
"Nested generators is the Python equivalent of unix pipe cmds."
might also suggest the solution I'm thinking of.


-- Steve Majewski
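
A small sketch of the "driver" idea Majewski describes: generators used as cooperative tasks, each yielding control back to a round-robin scheduler (the task bodies here are invented for illustration):

def task(name, steps):
    """A cooperative 'thread': does a bit of work, then yields back to the driver."""
    for i in range(steps):
        print "%s: step %d" % (name, i)
        yield None                     # the yielded value is unimportant; it is just a hand-off

def driver(tasks):
    """Round-robin scheduler: resume each generator in turn until all are finished."""
    tasks = list(tasks)
    while tasks:
        for gen in tasks[:]:
            try:
                gen.next()             # resume the generator where it left off
            except StopIteration:
                tasks.remove(gen)      # this task has finished

driver([task("A", 3), task("B", 2)])
# prints A: step 0, B: step 0, A: step 1, B: step 1, A: step 2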

TechNetCast Archives

Python for Lisp Programmers

This is a brief introduction to Python for Lisp programmers. Basically, Python can be seen as a dialect of Lisp with "traditional" syntax (what Lisp people call "infix" or "m-lisp" syntax). One message on comp.lang.python said "I never understood why LISP was a good idea until I started playing with python." Python supports all of Lisp's essential features except macros, and you don't miss macros all that much because it does have eval, and operator overloading, so you can create custom languages that way. (Although it wasn't my intent, Python programmers have told me this page has helped them learn Lisp.)

I looked into Python because I was considering translating the code for the Russell & Norvig AI textbook from Lisp to Java so that I could have (1) portable GUI demos, (2) portable http/ftp/html libraries, (3) free development environments on all major platforms, and (4) a syntax for students and professors who are afraid of parentheses. But writing all that Java seemed much too daunting, and I think that students who used Java would suffer by not having access to an interactive environment to try things out. Then I discovered JPython, a version of Python that is neatly integrated into Java, giving us access to the Java GUIs. Of course, Python already has web libraries, so JPython can use either those or Java's.

[Nov 02, 2001] Perl to Python Migration by Martin C. Brown

Martin C. Brown is the author of several books on Perl (including Debugging Perl) and a book on Python (Python: The Complete Reference, 2001), so he knows both languages, which makes this book pretty unique: there are no other books that cover Python from the position of a Perl programmer.
My conclusion

Python is an excellent language for my intended use. It is a good language for many of the applications that one would use Lisp as a rapid prototyping environment for. The three main drawbacks are (1) execution time is slow, (2) there is very little compile-time error analysis, even less than Lisp, and (3) Python isn't called "Java", which is a requirement in its own right for some of my audience. I need to determine if JPython is close enough for them.

Python can be seen as either a practical (better libraries) version of Scheme, or as a cleaned-up (no $@&%) version of Perl. While Perl's philosophy is TIMTOWTDI (there's more than one way to do it), Python tries to provide a minimal subset that people will tend to use in the same way. One of Python's controversial features, using indentation level rather than begin/end or {/}, was driven by this philosophy: since there are no braces, there are no style wars over where to put the braces. Interestingly, Lisp has exactly the same philosophy on this point: everyone uses emacs to indent their code. If you deleted the parens on control structure special forms, Lisp and Python programs would look quite similar.

Python has the philosophy of making sensible compromises that make the easy things very easy, and don't preclude too many hard things. In my opinion it does a very good job. The easy things are easy, the harder things are progressively harder, and you tend not to notice the inconsistencies. Lisp has the philosophy of making fewer compromises: of providing a very powerful and totally consistent core. This can make Lisp harder to learn because you operate at a higher level of abstraction right from the start and because you need to understand what you're doing, rather than just relying on what feels or looks nice. But it also means that in Lisp it is easier to add levels of abstraction and complexity; Lisp makes the very hard things not too hard.

[May 6, 2001] Zope for the Perl-CGI programmer by Michael Roberts ([email protected]) Zope is written in Python

Zope (the Z Object Publishing Environment) is an application server that is gaining in popularity. But what is it? What's an application server, anyway? How does all this compare with nice familiar paradigms like CGI? More importantly, is Zope a fad, or is it here to stay?

In a nutshell, here's what you get from Zope:

And you get all this in an easily installed, easily maintained package. But the most significantly new thing about Zope is how it encourages you to look at your Web site differently than you do now. Let's talk about that a little before we get down to brass tacks.

Python creator: Perl users are moving to Python

Link is now dead...
searchEnterpriseLinux: Are there situations in which it's better to use Perl than Python, or vice versa?
van Rossum: They're both competing for the same niche, in some cases. If all you need to do is simple text processing, you might use Perl. Python, much more than Perl, encourages clean coding habits to make it easy for other people to follow what you are doing.

I've seen a lot of people who were developing in Perl moving to Python because it's easier to use. Where Python wins is when you have users who are not very sophisticated but have to write some code.

An example would be in educational settings, where you have students who have no prior programming experience. Also, Python seems to work better when you have to write a large system, working together in a large group of developers, and when you want the lifetime of your program to be long.

searchEnterpriseLinux: In what situations would people use Java instead of Python?
van Rossum: Java and Python have quite different characteristics. Java is a sophisticated language. It takes a longer time to learn Java than Python. With Java, you have to get used to the compile-edit-run cycle of software development. Python is a lighter-weight language. There is less to learn to get started. You can hit the ground running when you have only a few days of Python training under your belt. Python and Java can be used in the same project. Python would be used for higher level control of an application, and Java would be used for implementing the lower level that needs to run relatively efficiently. Another place where Python is used is as an extension language. An example of that is Object Domain, a UML editing tool written in Java. It uses Python as an extension language.
searchEnterpriseLinux: Will use of Python in developing applications increase?
van Rossum: It's going to increase dramatically. There's a definite need for a language that's as easy to use as Python. Python is used to teach students how to program, so that is creating a base of people who know and like the language.
searchEnterpriseLinux: Are there technology developments that will benefit Linux developers?
van Rossum: Python developer tools are becoming available that work well in the Linux world. Until a year ago, the Python developer was stuck with using Emacs or VI to edit source code. Both of those editors have support for Python, but they're just editors. Now there are tools like Wing IDE, which is a development tool that is written in Python. Komodo is more ambitious and less finished at the moment, and it also supports program development with Perl. There is a Swedish company called Secret Labs which has had Windows development tools for Python, but which is now focusing on the Linux market. As more development environments become available for Python, we'll see a lot more users on the Linux platform.

Dave Warner wrote an excellent article on XML-RPC and Python here, using Meerkat as an example of a real XML-RPC server

Another article about XML-RPC

According to Jeff Walsh's InfoWorld article, Microsoft is planning to open up their operating system using XML-RPC. Such a protocol could be deployed quickly in other operating systems that support HTTP, ranging from Perl scripts running on Linux, to Quark publishing systems running on Macs, to relational databases running on mainframe systems. It could put Windows at the center of a new kind of web, one built of logic, storing objects and methods, not just pages and graphics.

The XML-RPC HOWTO is here.

People interested in XML-RPC might be interested in checking it out using KDE. Since KDE 2.0, a DCOP-XMLRPC bridge has been included, allowing easy access to a wide range of the desktop's APIs.

My experience implementing XML-RPC (Score:3, Interesting)
by Nicopa ([email protected]) on Tuesday February 20, @08:04PM EST (#88)
(User #87617 Info)
This is my experience with XML-RPC:

I work for a company (Technisys) which created, several years ago, an RPC tool called "tmgen". This tool is built as a layer on top of rpcgen, adding session cookie handling, SSL support, a stateless server, handling of enumerated values with associated long and short descriptions, and many other things. It is, in fact, an application server built on top of RPC.

This baby has been running for many years in the most important banks and credit card companies here in Argentina (yes, you know the brands, but I'm not sure I can tell you which ones =) ).

The "tmgen" tool reads a ".def" file that defines the datatypes, and ".trn" files which have the code of the different "transactions". Having read those files, it automatically generates the server source (including the rpcgen input source).

I was asked to make it possible for the clients to be programmed in the Java language. I evaluated several possibilities, one of them using a Java client for RPC. This would have required us to go for a proprietary solution; besides, being in control of both sides, it looked silly to be tied to a protocol. Another possibility would have been to modify tmgen to create an RMI server. But the best was to create an XML server (IMO). I then evaluated SOAP and XML-RPC. SOAP seemed very nice, but XML-RPC was a *direct* mapping of the semantics and types of our existing solution. The benefits of SOAP were a drawback in this case; we just wanted to have strings, structs, ints and floats.

So, now it's working. It takes a definition of the structs, the methods and which parameters they get, and it creates code (using the gnome-xml library, which I recommend). The automatically generated code works as a standalone inetd webserver which reads an XML-RPC query from the net, parses the query, loads it into the generated C structures, runs the "transaction", and creates a response from the reply structures. The final result was that all those old C, RPC-only programs started to have an XML interface.

I added the Helma XML-RPC client and voila, we had a Java client. So I must say that my experience in this legacy system with XML-RPC was great.

Talking about new systems, I think that XML-RPC does the wrong thing by defining markup for the types instead of marking the semantics of the data.

XML-RPC, SOAP, and the Rest... (Score:2, Informative)
by edashofy on Tuesday February 20, @08:17PM EST (#95)
(User #265252 Info)
I've used XML-RPC before, written my own version of it, and used SOAP extensively (and even posted a few patches to the IBM SOAP for Java implementation). My conclusion has been that while RPC is slow, XML-RPC (and its variants) are necessarily slower. The idea itself is good, and it may be useful for so-called "web services" where there is a strict client-server relationship between communicating machines and there are few RPCs bouncing around the network. However, the overhead of a full XML parser, plus a protocol implementation, plus a marshaller and demarshaller to and from XML (especially since no language really provides this) is a big problem for XML-RPC and its kin.

According to Dave Winer... "Conceptually, there's no difference between a local procedure call and a remote one, but they are implemented differently, perform differently (RPC is much slower) and therefore are used for different things." Bzzt! A local procedure call can pass references, pointers, etc. A remote procedure call is not just slower, but it also limits what kind of data you can reasonably expect to pass in the parameters. A pointer to a database driver, for instance, or a function pointer (for callbacks) are entirely different beasts!

From the slashdot header: "It's deliberately minimalist but nevertheless quite powerful, offering a way for the vast majority of RPC applications that can get by on passing around boolean/integer/float/string datatypes to do their thing in a way that is lightweight and easy to understand and monitor" Lightweight? Simple to understand and monitor? First of all, XML parsers are NEVER lightweight, especially if you want to do SOAP and use things like namespaces. Second of all, if you're doing only booleans/integers/floats/strings as the above suggests, then you're fine. More complex datatype marshalling? Ouch!

I'm going to leave XML-RPC, SOAP, and their kin behind for a while and see what happens. Until then, caveat earlius adopterus!
Why isn't XML-RPC considered bloat? (Score:3, Interesting)
by Carnage4Life on Tuesday February 20, @09:16PM EST (#116)
(User #106069 Info) http://www.25hoursaday.com
I am a big fan of distributed computing, heck I wrote an article about it on K5, and have always wondered what the XML-RPC payoff is.

From what I can tell, XML-RPC is a way to replace the binary protocols that current distributed systems use (e.g. Corba's IIOP or DCOM's ORPC) with an XML based one. So when an object needs to perform a remote method call, instead of just sending its arguments in a compact, efficient binary packet, it builds an XML string which has to be parsed on the other end. So with XML-RPC, remote method calls now need the addition of an XML parser to their current bag of tricks.

On the surface it seems that this makes it easier to perform distributed computing, since any shmuck can use an XML parser and call a few functions. But it means that an extra layer of abstraction has been added to operations that should be performed rather quickly, for the dubious benefit of compatibility across platforms (which is yet to be realized), which seems to be more jumping on the XML hype bandwagon than reality. My biggest issue is that for XML-RPC to support the things that are the biggest issues of distributed computing (e.g. keeping track of state) would add so much bloat to the XML parsing, string building, etc. involved in making a remote call as to make it unfeasible.

Anyone see any errors in this thinking?

Grabel's Law
2 is not equal to 3 - not even for very large values of 2.

Re:Why isn't XML-RPC considered bloat? (Score:4, Informative)
by Hrunting ([email protected]) on Tuesday February 20, @10:19PM EST (#134)
(User #2191 Info) http://hrunting.home.texas.net/
Anyone see any errors in this thinking?

Yes. It's true that as a consequence of its text-based nature, communication via XML is decidedly more bandwidth-intensive than any binary counterpart. The problem, though, lies in translation of that binary format to and from different machines, architectures, languages, and implementations.

I currently do quite a bit of work using SOAP, which is similar to XML-RPC, but a little less well-developed. It's a no-brainer. If I'm using Java, Perl, C, C++ or even Python, it's relatively easy to make calls between these different languages. I don't have to worry about the endianness of the data I'm working with. I don't have to worry about learning some new data encoding scheme (XML is very well-defined, and very ubiquitous; almost every language has a translator). Communicating between a language like Perl, which has no static typing, and Java, which has stricter typing, is a no-brainer, because the data structures are defined by a well-documented, human-readable schema, and when I look at the data I'm sending, I can see the raw information.

Bandwidth concerns might have been paramount two years ago, but the world in which XML-RPC and SOAP are being used has already shifted to broadband. Now, human clarity and complete interoperability, as well as the ease of use of porting XML-RPC (or SOAP) constructs to another language (since it's just XML text) make it a much more efficient model in terms of programmer time.

Yes, from a strictly bandwidth concern, CORBA or DCOM beat XML hands down, but when you remove that consideration, it's not that big a deal. Couple it with existing protocols (I've seen SOAP implementations via HTTP, FTP, and even SMTP), and the opportunity to grow via existing infrastructure and well-known technologies, and you just have an easier (and thus, I would argue, more open) model to work with.
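
For reference, a minimal sketch of what these posts are describing, using the XML-RPC support that ships in the Python 2 standard library (the function, host, and port here are just placeholders):

# server.py -- exposes one function over XML-RPC
from SimpleXMLRPCServer import SimpleXMLRPCServer

def add(a, b):
    return a + b                          # ints/floats/strings/booleans marshal straight to XML

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add, "add")
server.serve_forever()

# client.py -- run in another process; the call is marshalled to XML, sent over HTTP,
# parsed on the server, and the result comes back the same way
import xmlrpclib
proxy = xmlrpclib.ServerProxy("http://localhost:8000/")
print proxy.add(2, 3)                     # 5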

Random links

[Nov 22, 2000] LinuxPR: PythonWorks Pro v1.1 - Now Also for Linux and Solaris!

Early IDE for Python...

"PythonWorks Pro, entirely developed in Python, contains an editor, project manager, a deployment tool, an integrated browser, debugger, and a proprietary interface design tool. PythonWorks Pro allows the developer to easily create Python applications using the built-in tools."

[Nov 14, 2000] Linux Charming Python -- Inside Python's implementations See also Slashdot Interviews With The Creators of Vyper and Stackless

developerWorks

Stackless Python: An interview with creator Christian Tismer

At first brush, Stackless Python might seem like a minor fork to CPython. In terms of coding, Stackless makes just a few changes to the actual Python C code (and redefines "truth"). The concept that Christian Tismer (the creator of Stackless Python) introduces with Stackless is quite profound, however. It is the concept of "continuations" (and a way to program them in Python).

To attempt to explain it in the simplest terms, a continuation is a representation, at a particular point in a program, of everything the program is capable of doing subsequently. A continuation is a potential that depends on initial conditions. Rather than loop in a traditional way, it is possible to invoke the same continuation recursively with different initial conditions. One broad claim I have read is that continuations, in a theoretical sense, are more fundamental and underlie every other control structure. Don't worry if these ideas cause your brain to melt; that is a normal reaction.

Reading Tismer's background article in the Resources is a good start for further understanding. Pursuing his references is a good way to continue from there. But for now, let's talk with Tismer at a more general level:

Mertz: Exactly what is Stackless Python? Is there something a beginner can get his or her mind around that explains what is different about Stackless?

Tismer: Stackless Python is a Python implementation that does not save state on the C stack. It does have stacks -- as many as you want -- but these are Python stacks.

The C stack cannot be modified in a clean way from a language like C, unless you do it in the expected order. It imposes a big obligation on you: You will come back, exactly here, exactly in the reverse way as you went off.

"Normal" programmers do not see this as a restriction in the first place. They have to learn to push their minds onto stacks from the outset. There is nothing bad about stacks, and usually their imposed execution order is the way to go, but that does not mean that we have to wait for one such stack sequence to complete before we can run a different one.

Programmers realize this when they have to do non-blocking calls and callbacks. Suddenly the stack is in the way, we must use threads, or explicitly store state in objects, or build explicit, switchable stacks, and so on. The aim of Stackless is to deliver the programmer from these problems.

Mertz: The goal of Stackless is to be 100% binary compatible with CPython. Is it?

Tismer: Stackless is 100% binary compatible at the moment. That means: You install Python 1.5.2, you replace Python15.dll with mine, and everything still works, including every extension module. It is not a goal, it was a demand, since I didn't want to take care about all the extensions.

Mertz: Stackless Python has been absolutely fascinating to read about for me. Like most earthbound programmers, I have trouble getting my mind wholly around it, but that is part of what makes it so interesting.

Tismer: Well, I'm earthbound, too, and you might imagine how difficult it was to implement such a thing, without any idea what a continuation is and what it should look like in Python. Getting myself into doing something that I wasn't able to think was my big challenge. After it's done, it is easy to think, also to redesign. But of those six months of full-time work, I guess five were spent goggling into my screen and banging my head onto the keyboard.

Continuations are hard to sell. Coroutines and generators, and especially microthreads are easier. All of the above can be implemented without having explicit continuations. But when you have continuations already, you find that the step to these other structures is quite small, and continuations are the way to go. So I'm going to change my marketing strategy and not try any longer to sell the continuations, but their outcome. Continuations will still be there for those who can see the light.

Mertz: There is a joke about American engineers and French engineers. The American team brings a prototype to the French team. The French team's response is: "Well, it works fine in practice; but how will it hold up in theory?" I think the joke is probably meant to poke fun at a "French" style, but in my own mind I completely identify with the "French" reaction. Bracketing any specific national stereotypes in the joke, it is my identification in it that draws me to Stackless. CPython works in practice, but Stackless works in theory! (In other words, the abstract purity of continuations is more interesting to me personally than is the context switch speedups of microthreads, for example).

Tismer: My feeling is a bit similar. After realizing that CPython can be implemented without the C stack involved, I was sure that it must be implemented this way; everything else looks insane to me. CPython already pays for the overhead of frame objects, but it throws all their freedom away by tying them to the C stack. I felt I had to liberate Python. :-)

I started the project in May 1999. Sam Rushing was playing with a hardware coroutine implementation, and a discussion on Python-dev began. Such a stack copying hack would never make it into Python, that was clear. But a portable, clean implementation of coroutines would, possibly. Unfortunately, this is impossible. Steve Majewski gave up five years ago, after he realized that he could not solve this problem without completely rewriting Python.

That was the challenge. I had to find out. Either it is possible, and I would implement it; or it is not, and I would prove the impossibility. Not much later, after first thoughts and attempts, Sam told me about call/cc and how powerful it was. At this time, I had no idea in what way they could be more powerful than coroutines, but I believed him and implemented them; after six or seven times, always a complete rewrite, I understood more.

Ultimately I wanted to create threads at blinding speed, but my primary intent was to find out how far I can reach at all.

Mertz: On the practical side, just what performance improvements is Stackless likely to have? How great are these improvements in the current implementation? How much more is possible with tweaking? What specific sorts of applications are most likely to benefit from Stackless?

Tismer: With the current implementation, there is no large advantage for Stackless over the traditional calling scheme. Normal Python starts a recursion to a new interpreter. Stackless unwinds up to a dispatcher and starts an interpreter from there. This is nearly the same. Real improvements are there for implementations of coroutines and threads. They need to be simulated by classes, or to be real threads in Standard Python, while they can be implemented much more directly with Stackless.

Much more improvement of the core doesn't seem possible without dramatic changes to the opcode set. But a re-implementation, with more built-in support for continuations et. al., can improve the speed of these quite a lot.

Specific applications that might benefit greatly are possibly Swarm simulations, or multiuser games with very many actors performing tiny tasks. One example is the EVE game (see Resources below), which is under development, using Stackless Python.

Mertz: What do you think about incorporating Stackless into the CPython trunk? Is Stackless just as good as an available branch, or does something get better if it becomes the core version?

Tismer: There are arguments for and against it. Against: As long as I'm sitting on the Stackless implementation, it is mine, and I do not need to discuss the hows and whys. But at the same time, I'm struggling (and don't manage) to keep up with CVS. Better to have other people doing this.

Other Python users, who aren't necessarily interested in kinky stuff, won't recognize Stackless at all; just the fact that it happens to be faster, and that the maximum recursion level now is an option and not a hardware limit. And there is another promise for every user: There will be pickleable execution states. That means you can save your program while it is running, send it to a friend, and continue running it.

Finally, I'm all for it, provided that all my stuff makes it into the core; at the same time, I do not want to see a half-baked solution, as has been proposed several times.

Mertz: Any thoughts on future directions for Stackless? Anything new and different expected down the pipeline? Stackless still suffers from some recursions. Will they vanish?

Tismer: Pickling support will be partially implemented. This will be working first for microthreads since they provide the cleanest abstraction at the moment. They are living in a "clean room" where the remaining recursion problem doesn't exist. My final goal is to remove all interpreter recursion from Python. Some parts of Stackless still have recursions, especially all the predefined __xxx__ methods of objects. This is very hard to finalize since we need to change quite a few things, add new opcodes, unroll certain internal calling sequences, and so on.

Resources

LinuxMall.com - BeOpen.com Swallows Python Whole (page 1 of 3)

Python's fearless leader and self-proclaimed "Benevolent Dictator for Life" (BDFL), Guido van Rossum, recently wrote an open letter to the community, proclaiming Python's decision to move the project to new auspices in the Open Source realm.

"Python is growing rapidly. In order to take it to the next level, I've moved with my core development group to a new employer, BeOpen.com," van Rossum wrote. "BeOpen.com is a startup company with a focus on Open Source communities, and an interest in facilitating next-generation application development. It is a natural fit for Python."

BeOpen.com develops online communities for Open Source application users, developers and interested parties through its network of portal Web sites. It also offers support and custom development for Open Source applications specific to major corporations and governmental entities.

Van Rossum created Python in the early 1990s at CWI (the National Research Institute for Mathematics and Computer Science in the Netherlands) in Amsterdam. In 1995, he moved to the United States, and lives in Reston, Virginia.

In the past five years, van Rossum has worked as a researcher at the Corporation for National Research Initiatives (CNRI). He also is technical director of the Python Consortium, a CNRI-hosted international consortium that promotes Python use and development.

Van Rossum has headed up Python's development for the past decade. He said he expects to remain in the position for at least another 10 years and although he "is not worried about getting hit by a bus," he also has announced that he soon will be married and the union between Python and BeOpen.com will give him some time for a honeymoon.

At BeOpen.com, van Rossum is the director of a new development team recently dubbed PythonLabs. The team includes three of his colleagues from CNRI: Fred Drake, Jeremy Hylton, and Barry Warsaw. "Another familiar face will join us shortly: Tim Peters. We have our own Web site where you can read more about us, our plans and our activities. We've also posted a FAQ there specifically about PythonLabs, our transition to BeOpen.com, and what it means for the Python community," he said.

BeOpen Interview with Guido van Rossum [Python creator]

Linux Today

As open source projects go, Python, the "very high level language" developed by Guido Van Rossum 10 years ago, is a prime example of the "scratch your own itch" design philosophy.

Van Rossum, a 44-year-old developer who spent much of his collegiate and post-collegiate years working with instructional software languages such as ABC and Pascal, freely admits that it was his annoyance with these languages' real-world performance that drove him to create Python.

"The history of Python came out of the frustration I had with ABC when it wasn't being used for teaching but for day-to-day ad hoc programming," says Van Rossum, who in 1999 received an "Excellence in Programming" award from the Dr. Dobb's Journal for his Python work.

Nearly a decade after releasing the first version of Python, Van Rossum is currently looking at how to bridge the gap between languages that are easy for non-technical users to grasp and languages that are capable of performing industrial-strength computational tasks. Recently, he and other Python developers have joined together to form Computer Programming for Everybody, or CP4E, a project that is currently seeking funds from DARPA, the modern day descendant of the organization that helped build the early Internet. According to Van Rossum, CP4E will explore the way non-technical users engage in computing, in a way that will gain new insights into smoothing the human/machine interaction.

Fast, Free Prototyping in Python by Andrea Provaglio

Though Visual Basic has unbeatable GUI and COM capabilities, Python is an open-source tool with surprising utility for corporate development


Recommended Links



Sites

List of Python software - Wikipedia, the free encyclopedia


Recommended Papers


Tutorials

Give Python a try. I think you'll like it. The language itself is very simple (see also the discussion of the braces problem below). All the power is found in the extension modules. Learning the language basics should take only a week or so. I think ease of learning is also a powerful argument in favor of Python. I've been programming Perl off and on for several years now, but I cannot claim to be a Perl guru: the language is so huge and so baroque, a really PL/1-style language, which I actually like too ;-). IMHO nobody has any chance of mastering Perl in a week ;-).



Etc

