Local variables

Temporary Values via local()

WARNING: In general, you should be using my instead of local, because it's faster and safer. Exceptions to this include the global punctuation variables, global filehandles and formats, and direct manipulation of the Perl symbol table itself.

local is mostly used when the current value of a variable must be visible to called subroutines.

Synopsis:

  # localization of values
  local $foo;                # make $foo dynamically local
  local (@wid, %get);        # make list of variables local
  local $foo = "flurp";      # make $foo dynamic, and init it
  local @oof = @bar;         # make @oof dynamic, and init it

  local $hash{key} = "val";  # sets a local value for this hash entry
  delete local $hash{key};   # delete this entry for the current block
  local ($cond ? $v1 : $v2); # several types of lvalues support
                             # localization

  # localization of symbols
  local *FH;                 # localize $FH, @FH, %FH, &FH ...
  local *merlyn = *randal;   # now $merlyn is really $randal, plus
                             #     @merlyn is really @randal, etc
  local *merlyn = 'randal';  # SAME THING: promote 'randal' to *randal
  local *merlyn = \$randal;  # just alias $merlyn, not @merlyn etc

A local modifies its listed variables to be "local" to the enclosing block, eval, or do FILE --and to any subroutine called from within that block. A local just gives temporary values to global (meaning package) variables. It does not create a local variable. This is known as dynamic scoping. Lexical scoping is done with my, which works more like C's auto declarations.
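
As a minimal sketch of what "visible to called subroutines" means in practice (the variable and subroutine names here are invented for illustration):

  our $level = 0;                             # a package (global) variable

  sub report { print "level is $level\n" }    # sees whatever value is dynamically in effect

  sub descend {
      local $level = $level + 1;              # temporary value, restored when descend() returns
      report();                               # prints the localized value
  }

  report();    # level is 0
  descend();   # level is 1
  report();    # level is 0 again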

Some types of lvalues can be localized as well: hash and array elements and slices, conditionals (provided that their result is always localizable), and symbolic references. As for simple variables, this creates new, dynamically scoped values.

If more than one variable or expression is given to local, they must be placed in parentheses. This operator works by saving the current values of those variables in its argument list on a hidden stack and restoring them upon exiting the block, subroutine, or eval. This means that called subroutines can also reference the local variable, but not the global one. The argument list may be assigned to if desired, which allows you to initialize your local variables. (If no initializer is given for a particular variable, it is created with an undefined value.)

Because local is a run-time operator, it gets executed each time through a loop. Consequently, it's more efficient to localize your variables outside the loop.
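
For example, a sketch of the difference (the @records array is hypothetical):

  # Wasteful: local() saves and restores $" on every iteration
  for my $rec (@records) {
      local $" = " | ";
      print "@$rec\n";
  }

  # Better: localize once, outside the loop
  {
      local $" = " | ";
      for my $rec (@records) {
          print "@$rec\n";
      }
  }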

Grammatical note on local()

A local is simply a modifier on an lvalue expression. When you assign to a localized variable, the local doesn't change whether its list is viewed as a scalar or an array. So

  local($foo) = <STDIN>;
  local @FOO  = <STDIN>;

both supply a list context to the right-hand side, while

  local $foo = <STDIN>;

supplies a scalar context.

Localization of special variables

If you localize a special variable, you'll be giving a new value to it, but its magic won't go away. That means that all side-effects related to this magic still work with the localized value.

This feature allows code like this to work:

  # Read the whole contents of FILE in $slurp
  { local $/ = undef; $slurp = <FILE>; }

Note, however, that this restricts localization of some values; for example, the following statement dies, as of perl 5.10.0, with an error Modification of a read-only value attempted, because the $1 variable is magical and read-only:

  local $1 = 2;

One exception is the default scalar variable:

  • Starting with perl 5.14, local($_) will always strip all magic from $_, to make it possible to safely reuse $_ in a subroutine.

WARNING: Localization of tied arrays and hashes does not currently work as described. This will be fixed in a future release of Perl; in the meantime, avoid code that relies on any particular behavior of localising tied arrays or hashes (localising individual elements is still okay). See "Localising Tied Arrays and Hashes Is Broken" in perl58delta for more details.

    Localization of globs

    The construct

        local *name;

    creates a whole new symbol table entry for the glob name in the current package. That means that all variables in its glob slot ($name, @name, %name, &name, and the name filehandle) are dynamically reset.

    This implies, among other things, that any magic eventually carried by those variables is locally lost. In other words, saying local */ will not have any effect on the internal value of the input record separator.
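
    A short sketch of the reset behavior (the count variables and show() are invented for illustration): after local *name, every slot of the glob starts out empty, and the previous contents come back when the block exits.

        $main::count = 42;
        @main::count = (1, 2, 3);

        sub show { print "scalar=$main::count list=@main::count\n" }

        {
            local *count;        # brand-new symbol table entry for "count"
            $main::count = 7;    # affects only the temporary glob
            show();              # scalar=7 list=
        }
        show();                  # scalar=42 list=1 2 3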

    Localization of elements of composite types

    It's also worth taking a moment to explain what happens when you localize a member of a composite type (i.e. an array or hash element). In this case, the element is localized by name. This means that when the scope of the local() ends, the saved value will be restored to the hash element whose key was named in the local(), or the array element whose index was named in the local(). If that element was deleted while the local() was in effect (e.g. by a delete() from a hash or a shift() of an array), it will spring back into existence, possibly extending an array and filling in the skipped elements with undef. For instance, if you say

        %hash = ( 'This' => 'is', 'a' => 'test' );
        @ary  = ( 0..5 );
        {
            local($ary[5]) = 6;
            local($hash{'a'}) = 'drill';
            while (my $e = pop(@ary)) {
                print "$e . . .\n";
                last unless $e > 3;
            }
            if (@ary) {
                $hash{'only a'} = 'test';
                delete $hash{'a'};
            }
        }
        print join(' ', map { "$_ $hash{$_}" } sort keys %hash), ".\n";
        print "The array has ", scalar(@ary), " elements: ",
              join(', ', map { defined $_ ? $_ : 'undef' } @ary), "\n";

    Perl will print

        6 . . .
        4 . . .
        3 . . .
        This is a test only a test.
        The array has 6 elements: 0, 1, 2, undef, undef, 5

    The behavior of local() on non-existent members of composite types is subject to change in future.

    Localized deletion of elements of composite types

    You can use the delete local $array[$idx] and delete local $hash{key} constructs to delete a composite type entry for the current block and restore it when it ends. They return the array/hash value before the localization, which means that they are respectively equivalent to

        do {
            my $val = $array[$idx];
            local  $array[$idx];
            delete $array[$idx];
            $val
        }

    and

        do {
            my $val = $hash{key};
            local  $hash{key};
            delete $hash{key};
            $val
        }

    except that for those the local is scoped to the do block. Slices are also accepted.

        my %hash = (
            a => [ 7, 8, 9 ],
            b => 1,
        );
        {
            my $a = delete local $hash{a};
            # $a is [ 7, 8, 9 ]
            # %hash is (b => 1)
            {
                my @nums = delete local @$a[0, 2];
                # @nums is (7, 9)
                # $a is [ undef, 8 ]

                $a->[0] = 999; # will be erased when the scope ends
            }
            # $a is back to [ 7, 8, 9 ]
        }
        # %hash is back to its original state

    Lvalue subroutines

    It is possible to return a modifiable value from a subroutine. To do this, you have to declare the subroutine to return an lvalue.

        my $val;
        sub canmod : lvalue {
            $val;  # or:  return $val;
        }
        sub nomod {
            $val;
        }

        canmod() = 5;   # assigns to $val
        nomod()  = 5;   # ERROR

    The scalar/list context for the subroutine and for the right-hand side of assignment is determined as if the subroutine call is replaced by a scalar. For example, consider:

        data(2,3) = get_data(3,4);

    Both subroutines here are called in a scalar context, while in:

        (data(2,3)) = get_data(3,4);

    and in:

        (data(2),data(3)) = get_data(3,4);

    all the subroutines are called in a list context.

    Lvalue subroutines are convenient, but you have to keep in mind that, when used with objects, they may violate encapsulation. A normal mutator can check the supplied argument before setting the attribute it is protecting, an lvalue subroutine cannot. If you require any special processing when storing and retrieving the values, consider using the CPAN module Sentinel or something similar.
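
    A sketch of the encapsulation problem (the class and method names are invented for illustration; this is not the Sentinel API):

        package Counter;

        sub new { bless { count => 0 }, shift }

        # A normal mutator can validate its argument before storing it.
        sub set_count {
            my ($self, $n) = @_;
            die "count must be non-negative" if $n < 0;
            $self->{count} = $n;
        }

        # An lvalue accessor hands out the storage itself: no chance to validate.
        sub count : lvalue {
            my $self = shift;
            $self->{count};
        }

        package main;
        my $c = Counter->new;
        $c->set_count(5);    # checked
        $c->count = -1;      # unchecked: the invariant is silently broken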

    Lexical Subroutines

    WARNING: Lexical subroutines are still experimental. The feature may be modified or removed in future versions of Perl.

    Lexical subroutines are only available under the use feature 'lexical_subs' pragma, which produces a warning unless the "experimental::lexical_subs" warnings category is disabled.

    Beginning with Perl 5.18, you can declare a private subroutine with my or state. As with state variables, the state keyword is only available under use feature 'state' or use 5.010 or higher.

    These subroutines are only visible within the block in which they are declared, and only after that declaration:

        no warnings "experimental::lexical_subs";
        use feature 'lexical_subs';

        foo();              # calls the package/global subroutine
        state sub foo {
            foo();          # also calls the package subroutine
        }
        foo();              # calls "state" sub
        my $ref = \&foo;    # take a reference to "state" sub

        my sub bar { ... }
        bar();              # calls "my" sub

    To use a lexical subroutine from inside the subroutine itself, you must predeclare it. The sub foo {...} subroutine definition syntax respects any previous my sub; or state sub; declaration.

        my sub baz;         # predeclaration
        sub baz {           # define the "my" sub
            baz();          # recursive call
        }

    state sub vs my sub

    What is the difference between "state" subs and "my" subs? Each time that execution enters a block when "my" subs are declared, a new copy of each sub is created. "State" subroutines persist from one execution of the containing block to the next.

    So, in general, "state" subroutines are faster. But "my" subs are necessary if you want to create closures:

        no warnings "experimental::lexical_subs";
        use feature 'lexical_subs';

        sub whatever {
            my $x = shift;
            my sub inner {
                ... do something with $x ...
            }
            inner();
        }

    In this example, a new $x is created when whatever is called, and also a new inner, which can see the new $x. A "state" sub will only see the $x from the first call to whatever.
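
    A sketch contrasting the two (the names are invented; the behavior is as described above):

        no warnings "experimental::lexical_subs";
        use feature qw(lexical_subs state);

        sub greet {
            my $who = shift;
            state sub state_hello { print "state: hello, $who\n" }  # sees $who from the first call only
            my    sub my_hello    { print "my:    hello, $who\n" }  # recreated each call, sees the current $who
            state_hello();
            my_hello();
        }

        greet("Alice");   # state: hello, Alice / my: hello, Alice
        greet("Bob");     # state: hello, Alice / my: hello, Bob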

    our subroutines

    Like our $variable, our sub creates a lexical alias to the package subroutine of the same name.

    The two main uses for this are to switch back to using the package sub inside an inner scope:

        no warnings "experimental::lexical_subs";
        use feature 'lexical_subs';

        sub foo { ... }

        sub bar {
            my sub foo { ... }
            {
                # need to use the outer foo here
                our sub foo;
                foo();
            }
        }

    and to make a subroutine visible to other packages in the same scope:

        package MySneakyModule;
        no warnings "experimental::lexical_subs";
        use feature 'lexical_subs';

        our sub do_something { ... }

        sub do_something_with_caller {
            package DB;
            () = caller 1;          # sets @DB::args
            do_something(@args);    # uses MySneakyModule::do_something
        }

    Passing Symbol Table Entries (typeglobs)

    WARNING: The mechanism described in this section was originally the only way to simulate pass-by-reference in older versions of Perl. While it still works fine in modern versions, the new reference mechanism is generally easier to work with. See below.

    Sometimes you don't want to pass the value of an array to a subroutine but rather the name of it, so that the subroutine can modify the global copy of it rather than working with a local copy. In perl you can refer to all objects of a particular name by prefixing the name with a star: *foo . This is often known as a "typeglob", because the star on the front can be thought of as a wildcard match for all the funny prefix characters on variables and subroutines and such.

    When evaluated, the typeglob produces a scalar value that represents all the objects of that name, including any filehandle, format, or subroutine. When assigned to, it causes the name mentioned to refer to whatever * value was assigned to it. Example:

        sub doubleary {
            local(*someary) = @_;
            foreach $elem (@someary) {
                $elem *= 2;
            }
        }
        doubleary(*foo);
        doubleary(*bar);

    Scalars are already passed by reference, so you can modify scalar arguments without using this mechanism by referring explicitly to $_[0] etc. You can modify all the elements of an array by passing all the elements as scalars, but you have to use the * mechanism (or the equivalent reference mechanism) to push, pop, or change the size of an array. It will certainly be faster to pass the typeglob (or reference).

    Even if you don't want to modify an array, this mechanism is useful for passing multiple arrays in a single LIST, because normally the LIST mechanism will merge all the array values so that you can't extract out the individual arrays. For more on typeglobs, see Typeglobs and Filehandles in perldata.
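
    A small sketch of keeping two arrays separate by passing typeglobs (the names are invented for illustration):

        @first  = (1, 2, 3);
        @second = (10, 20);

        sub lengths {
            local (*x, *y) = @_;     # alias @x and @y to the caller's arrays
            return (scalar(@x), scalar(@y));
        }

        print join(',', lengths(*first, *second)), "\n";   # prints 3,2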

    When to Still Use local()

    Despite the existence of my, there are still three places where the local operator shines. In fact, in these three places, you must use local instead of my.

    1.
    You need to give a global variable a temporary value, especially $_.

    The global variables, like @ARGV or the punctuation variables, must be localized with local(). This block reads in /etc/motd, and splits it up into chunks separated by lines of equal signs, which are placed in @Fields .

        {
            local @ARGV = ("/etc/motd");
            local $/ = undef;
            local $_ = <>;
            @Fields = split /^\s*=+\s*$/;
        }

    In particular, it's important to localize $_ in any routine that assigns to it. Look out for implicit assignments in while conditionals.
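
    For instance, a sketch of a helper that would otherwise clobber its caller's $_ (the file argument and subroutine name are invented):

        sub count_lines {
            my ($file) = @_;
            local $_;                  # protect the caller's $_
            open my $fh, '<', $file or die "open $file: $!";
            my $n = 0;
            while (<$fh>) { $n++ }     # while (<$fh>) assigns to $_ implicitly
            close $fh;
            return $n;
        }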

    2.
    You need to create a local file or directory handle or a local function.

    A function that needs a filehandle of its own must use local() on a complete typeglob. This can be used to create new symbol table entries:

        sub ioqueue {
            local (*READER, *WRITER);    # not my!
            pipe (READER, WRITER)        or die "pipe: $!";
            return (*READER, *WRITER);
        }
        ($head, $tail) = ioqueue();

    See the Symbol module for a way to create anonymous symbol table entries.

    Because assignment of a reference to a typeglob creates an alias, this can be used to create what is effectively a local function, or at least, a local alias.

        {
            local *grow = \&shrink; # only until this block exits
            grow();                 # really calls shrink()
            move();                 # if move() grow()s, it shrink()s too
        }
        grow();                     # get the real grow() again

    See Function Templates in perlref for more about manipulating functions by name in this way.

    3.
    You want to temporarily change just one element of an array or hash.

    You can localize just one element of an aggregate. Usually this is done on dynamics:

        {
            local $SIG{INT} = 'IGNORE';
            funct();                     # uninterruptible
        }
        # interruptibility automatically restored here

    But it also works on lexically declared aggregates.
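
    For example, a sketch with a lexical hash (the %opt hash and noisy_sub() are invented for illustration):

        my %opt = ( verbose => 0 );

        sub noisy_sub { print "working...\n" if $opt{verbose} }

        {
            local $opt{verbose} = 1;   # one element of a lexical hash, temporarily
            noisy_sub();               # prints
        }
        noisy_sub();                   # silent again: $opt{verbose} is 0 once more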

    Pass by Reference

    If you want to pass more than one array or hash into a function--or return them from it--and have them maintain their integrity, then you're going to have to use an explicit pass-by-reference. Before you do that, you need to understand references as detailed in perlref. This section may not make much sense to you otherwise.

    Here are a few simple examples. First, let's pass in several arrays to a function and have it pop all of them, returning a new list of all their former last elements:

        @tailings = popmany ( \@a, \@b, \@c, \@d );

        sub popmany {
            my $aref;
            my @retlist = ();
            foreach $aref ( @_ ) {
                push @retlist, pop @$aref;
            }
            return @retlist;
        }

    Here's how you might write a function that returns a list of keys occurring in all the hashes passed to it:

        @common = inter( \%foo, \%bar, \%joe );
        sub inter {
            my ($k, $href, %seen); # locals
            foreach $href (@_) {
                while ( $k = each %$href ) {
                    $seen{$k}++;
                }
            }
            return grep { $seen{$_} == @_ } keys %seen;
        }

    So far, we're using just the normal list return mechanism. What happens if you want to pass or return a hash? Well, if you're using only one of them, or you don't mind them concatenating, then the normal calling convention is ok, although a little expensive.

    Where people get into trouble is here:

        (@a, @b) = func(@c, @d);
        # or
        (%a, %b) = func(%c, %d);

    That syntax simply won't work. It sets just @a or %a and clears the @b or %b . Plus the function didn't get passed into two separate arrays or hashes: it got one long list in @_ , as always.

    If you can arrange for everyone to deal with this through references, it's cleaner code, although not so nice to look at. Here's a function that takes two array references as arguments, returning the two array references in order of how many elements they have in them:

        ($aref, $bref) = func(\@c, \@d);
        print "@$aref has more than @$bref\n";
        sub func {
            my ($cref, $dref) = @_;
            if (@$cref > @$dref) {
                return ($cref, $dref);
            } else {
                return ($dref, $cref);
            }
        }

    It turns out that you can actually do this also:

        (*a, *b) = func(\@c, \@d);
        print "@a has more than @b\n";
        sub func {
            local (*c, *d) = @_;
            if (@c > @d) {
                return (\@c, \@d);
            } else {
                return (\@d, \@c);
            }
        }

    Here we're using the typeglobs to do symbol table aliasing. It's a tad subtle, though, and also won't work if you're using my variables, because only globals (even in disguise as locals) are in the symbol table.

    If you're passing around filehandles, you could usually just use the bare typeglob, like *STDOUT, but typeglob references work, too. For example:

        splutter(\*STDOUT);
        sub splutter {
            my $fh = shift;
            print $fh "her um well a hmmm\n";
        }

        $rec = get_rec(\*STDIN);
        sub get_rec {
            my $fh = shift;
            return scalar <$fh>;
        }

    If you're planning on generating new filehandles, you could do this. Notice that it passes back just the bare *FH, not a reference to it.

        sub openit {
            my $path = shift;
            local *FH;
            return open (FH, $path) ? *FH : undef;
        }

    Prototypes

    Perl supports a very limited kind of compile-time argument checking using function prototyping. This can be declared in either the PROTO section or with a prototype attribute. If you declare either of

        sub mypush (+@)
        sub mypush :prototype(+@)

    then mypush() takes arguments exactly like push() does.

    If subroutine signatures are enabled (see Signatures), then the shorter PROTO syntax is unavailable, because it would clash with signatures. In that case, a prototype can only be declared in the form of an attribute.

    The function declaration must be visible at compile time. The prototype affects only interpretation of new-style calls to the function, where new-style is defined as not using the & character. In other words, if you call it like a built-in function, then it behaves like a built-in function. If you call it like an old-fashioned subroutine, then it behaves like an old-fashioned subroutine. It naturally falls out from this rule that prototypes have no influence on subroutine references like \&foo or on indirect subroutine calls like &{$subref} or $subref->() .
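
    A sketch of that rule (the subroutine name is invented):

        sub mysub ($) { print "got: @_\n" }

        my @list = (1, 2, 3);
        mysub(@list);        # prototype applies: @list gets scalar context, prints "got: 3"
        &mysub(@list);       # old-style call: prototype ignored, prints "got: 1 2 3"

        my $ref = \&mysub;
        $ref->(@list);       # indirect call: prototype ignored, prints "got: 1 2 3"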

    Method calls are not influenced by prototypes either, because the function to be called is indeterminate at compile time, since the exact code called depends on inheritance.

    Because the intent of this feature is primarily to let you define subroutines that work like built-in functions, here are prototypes for some other functions that parse almost exactly like the corresponding built-in.

        Declared as             Called as

        sub mylink ($$)         mylink $old, $new
        sub myvec ($$$)         myvec $var, $offset, 1
        sub myindex ($$;$)      myindex &getstring, "substr"
        sub mysyswrite ($$$;$)  mysyswrite $buf, 0, length($buf) - $off, $off
        sub myreverse (@)       myreverse $a, $b, $c
        sub myjoin ($@)         myjoin ":", $a, $b, $c
        sub mypop (+)           mypop @array
        sub mysplice (+$$@)     mysplice @array, 0, 2, @pushme
        sub mykeys (+)          mykeys %{$hashref}
        sub myopen (*;$)        myopen HANDLE, $name
        sub mypipe (**)         mypipe READHANDLE, WRITEHANDLE
        sub mygrep (&@)         mygrep { /foo/ } $a, $b, $c
        sub myrand (;$)         myrand 42
        sub mytime ()           mytime

    Any backslashed prototype character represents an actual argument that must start with that character (optionally preceded by my, our or local), with the exception of $ , which will accept any scalar lvalue expression, such as $foo = 7 or my_function()->[0] . The value passed as part of @_ will be a reference to the actual argument given in the subroutine call, obtained by applying \ to that argument.
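
    For example, a sketch of a \$ prototype (the subroutine name is invented):

        sub increment (\$) {     # the caller writes a plain scalar variable ...
            my $ref = shift;     # ... and the sub receives \ of it: a scalar reference
            ++$$ref;
        }

        my $n = 5;
        increment $n;            # no explicit backslash at the call site
        print "$n\n";            # 6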

    You can use the \[] backslash group notation to specify more than one allowed argument type. For example:

        sub myref (\[$@%&*])

    will allow calling myref() as

        myref $var
        myref @array
        myref %hash
        myref &sub
        myref *glob

    and the first argument of myref() will be a reference to a scalar, an array, a hash, a code, or a glob.

    Unbackslashed prototype characters have special meanings. Any unbackslashed @ or % eats all remaining arguments, and forces list context. An argument represented by $ forces scalar context. An & requires an anonymous subroutine, which, if passed as the first argument, does not require the sub keyword or a subsequent comma.

    A * allows the subroutine to accept a bareword, constant, scalar expression, typeglob, or a reference to a typeglob in that slot. The value will be available to the subroutine either as a simple scalar, or (in the latter two cases) as a reference to the typeglob. If you wish to always convert such arguments to a typeglob reference, use Symbol::qualify_to_ref() as follows:

        use Symbol 'qualify_to_ref';

        sub foo (*) {
            my $fh = qualify_to_ref(shift, caller);
            ...
        }

    The + prototype is a special alternative to $ that will act like \[@%] when given a literal array or hash variable, but will otherwise force scalar context on the argument. This is useful for functions which should accept either a literal array or an array reference as the argument:

        sub mypush (+@) {
            my $aref = shift;
            die "Not an array or arrayref" unless ref $aref eq 'ARRAY';
            push @$aref, @_;
        }

    When using the + prototype, your function must check that the argument is of an acceptable type.

    A semicolon (;) separates mandatory arguments from optional arguments. It is redundant before @ or %, which gobble up everything else.

    As the last character of a prototype, or just before a semicolon, a @ or a % , you can use _ in place of $ : if this argument is not provided, $_ will be used instead.
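
    A sketch of the _ placeholder (the subroutine name is invented):

        sub mylength (_) {       # like the built-in length(): defaults to $_
            my $s = shift;
            return length $s;
        }

        $_ = "hello";
        print mylength(), "\n";       # 5, operated on $_
        print mylength("hi"), "\n";   # 2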

    Note how the last three examples in the table above are treated specially by the parser. mygrep() is parsed as a true list operator, myrand() is parsed as a true unary operator with unary precedence the same as rand(), and mytime() is truly without arguments, just like time(). That is, if you say

        mytime +2;

    you'll get mytime() + 2 , not mytime(2) , which is how it would be parsed without a prototype. If you want to force a unary function to have the same precedence as a list operator, add ; to the end of the prototype:

        sub mygetprotobynumber($;);
        mygetprotobynumber $a > $b; # parsed as mygetprotobynumber($a > $b)

    The interesting thing about & is that you can generate new syntax with it, provided it's in the initial position:

        sub try (&@) {
            my($try,$catch) = @_;
            eval { &$try };
            if ($@) {
                local $_ = $@;
                &$catch;
            }
        }
        sub catch (&) { $_[0] }

        try {
            die "phooey";
        } catch {
            /phooey/ and print "unphooey\n";
        };

    That prints "unphooey" . (Yes, there are still unresolved issues having to do with visibility of @_ . I'm ignoring that question for the moment. (But note that if we make @_ lexically scoped, those anonymous subroutines can act like closures... (Gee, is this sounding a little Lispish? (Never mind.))))

    And here's a reimplementation of the Perl grep operator:

        sub mygrep (&@) {
            my $code = shift;
            my @result;
            foreach $_ (@_) {
                push(@result, $_) if &$code;
            }
            @result;
        }

    Some folks would prefer full alphanumeric prototypes. Alphanumerics have been intentionally left out of prototypes for the express purpose of someday in the future adding named, formal parameters. The current mechanism's main goal is to let module writers provide better diagnostics for module users. Larry feels the notation quite understandable to Perl programmers, and that it will not intrude greatly upon the meat of the module, nor make it harder to read. The line noise is visually encapsulated into a small pill that's easy to swallow.

    If you try to use an alphanumeric sequence in a prototype you will generate an optional warning - "Illegal character in prototype...". Unfortunately earlier versions of Perl allowed the prototype to be used as long as its prefix was a valid prototype. The warning may be upgraded to a fatal error in a future version of Perl once the majority of offending code is fixed.

    It's probably best to prototype new functions, not retrofit prototyping into older ones. That's because you must be especially careful about silent impositions of differing list versus scalar contexts. For example, if you decide that a function should take just one parameter, like this:

        sub func ($) {
            my $n = shift;
            print "you gave me $n\n";
        }

    and someone has been calling it with an array or expression returning a list:

        func(@foo);
        func( split /:/ );

    Then you've just supplied an automatic scalar in front of their argument, which can be more than a bit surprising. The old @foo which used to hold one thing doesn't get passed in. Instead, func() now gets passed in a 1 ; that is, the number of elements in @foo . And the split gets called in scalar context so it starts scribbling on your @_ parameter list. Ouch!

    If a sub has both a PROTO and a BLOCK, the prototype is not applied until after the BLOCK is completely defined. This means that a recursive function with a prototype has to be predeclared for the prototype to take effect, like so:

        sub foo($$);
        sub foo($$) {
            foo 1, 2;
        }

    This is all very powerful, of course, and should be used only in moderation to make the world a better place.

    Constant Functions

    Functions with a prototype of () are potential candidates for inlining. If the result after optimization and constant folding is either a constant or a lexically-scoped scalar which has no other references, then it will be used in place of function calls made without & . Calls made using & are never inlined. (See constant.pm for an easy way to declare most constants.)

    The following functions would all be inlined:

        sub pi ()        { 3.14159 }             # Not exact, but close.
        sub PI ()        { 4 * atan2 1, 1 }      # As good as it gets,
                                                 # and it's inlined, too!
        sub ST_DEV ()    { 0 }
        sub ST_INO ()    { 1 }
        sub FLAG_FOO ()  { 1 << 8 }
        sub FLAG_BAR ()  { 1 << 9 }
        sub FLAG_MASK () { FLAG_FOO | FLAG_BAR }
        sub OPT_BAZ ()   { not (0x1B58 & FLAG_MASK) }
        sub N ()         { int(OPT_BAZ) / 3 }

        sub FOO_SET ()  { 1 if FLAG_MASK & FLAG_FOO }
        sub FOO_SET2 () { if (FLAG_MASK & FLAG_FOO) { 1 } }

    (Be aware that the last example was not always inlined in Perl 5.20 and earlier, which did not behave consistently with subroutines containing inner scopes.) You can countermand inlining by using an explicit return:

        sub baz_val () {
            if (OPT_BAZ) {
                return 23;
            }
            else {
                return 42;
            }
        }
        sub bonk_val () { return 12345 }

    As alluded to earlier you can also declare inlined subs dynamically at BEGIN time if their body consists of a lexically-scoped scalar which has no other references. Only the first example here will be inlined:

        BEGIN {
            my $var = 1;
            no strict 'refs';
            *INLINED = sub () { $var };
        }
        BEGIN {
            my $var = 1;
            my $ref = \$var;
            no strict 'refs';
            *NOT_INLINED = sub () { $var };
        }

    A not so obvious caveat with this (see [RT #79908]) is that the variable will be immediately inlined, and will stop behaving like a normal lexical variable, e.g. this will print 79907 , not 79908 :

        BEGIN {
            my $x = 79907;
            *RT_79908 = sub () { $x };
            $x++;
        }
        print RT_79908(); # prints 79907

    As of Perl 5.22, this buggy behavior, while preserved for backward compatibility, is detected and emits a deprecation warning. If you want the subroutine to be inlined (with no warning), make sure the variable is not used in a context where it could be modified aside from where it is declared.

        # Fine, no warning
        BEGIN {
            my $x = 54321;
            *INLINED = sub () { $x };
        }
        # Warns. Future Perl versions will stop inlining it.
        BEGIN {
            my $x;
            $x = 54321;
            *ALSO_INLINED = sub () { $x };
        }

    Perl 5.22 also introduces the experimental "const" attribute as an alternative. (Disable the "experimental::const_attr" warnings if you want to use it.) When applied to an anonymous subroutine, it forces the sub to be called when the sub expression is evaluated. The return value is captured and turned into a constant subroutine:

        my $x = 54321;
        *INLINED = sub : const { $x };
        $x++;

    The return value of INLINED in this example will always be 54321, regardless of later modifications to $x. You can also put any arbitrary code inside the sub, as it will be executed immediately and its return value captured the same way.

    If you really want a subroutine with a () prototype that returns a lexical variable you can easily force it to not be inlined by adding an explicit return:

        BEGIN {
            my $x = 79907;
            *RT_79908 = sub () { return $x };
            $x++;
        }
        print RT_79908(); # prints 79908

    The easiest way to tell if a subroutine was inlined is by using B::Deparse. Consider this example of two subroutines returning 1 , one with a () prototype causing it to be inlined, and one without (with deparse output truncated for clarity):

        $ perl -MO=Deparse -le 'sub ONE { 1 } if (ONE) { print ONE if ONE }'
        sub ONE {
            1;
        }
        if (ONE) {
            print ONE() if ONE;
        }

        $ perl -MO=Deparse -le 'sub ONE () { 1 } if (ONE) { print ONE if ONE }'
        sub ONE () { 1 }
        do {
            print 1
        };

    If you redefine a subroutine that was eligible for inlining, you'll get a warning by default. You can use this warning to tell whether or not a particular subroutine is considered inlinable, since it's different than the warning for overriding non-inlined subroutines:

        $ perl -e 'sub one () {1} sub one () {2}'
        Constant subroutine one redefined at -e line 1.
        $ perl -we 'sub one {1} sub one {2}'
        Subroutine one redefined at -e line 1.

    The warning is considered severe enough not to be affected by the -w switch (or its absence) because previously compiled invocations of the function will still be using the old value of the function. If you need to be able to redefine the subroutine, you need to ensure that it isn't inlined, either by dropping the () prototype (which changes calling semantics, so beware) or by thwarting the inlining mechanism in some other way, e.g. by adding an explicit return, as mentioned above:

        sub not_inlined () { return 23 }

    Overriding Built-in Functions

    Many built-in functions may be overridden, though this should be tried only occasionally and for good reason. Typically this might be done by a package attempting to emulate missing built-in functionality on a non-Unix system.

    Overriding may be done only by importing the name from a module at compile time--ordinary predeclaration isn't good enough. However, the use subs pragma lets you, in effect, predeclare subs via the import syntax, and these names may then override built-in ones:

        use subs 'chdir', 'chroot', 'chmod', 'chown';
        chdir $somewhere;
        sub chdir { ... }

    To unambiguously refer to the built-in form, precede the built-in name with the special package qualifier CORE:: . For example, saying CORE::open() always refers to the built-in open(), even if the current package has imported some other subroutine called &open() from elsewhere. Even though it looks like a regular function call, it isn't: the CORE:: prefix in that case is part of Perl's syntax, and works for any keyword, regardless of what is in the CORE package. Taking a reference to it, that is, \&CORE::open , only works for some keywords. See CORE.

    Library modules should not in general export built-in names like open or chdir as part of their default @EXPORT list, because these may sneak into someone else's namespace and change the semantics unexpectedly. Instead, if the module adds that name to @EXPORT_OK , then it's possible for a user to import the name explicitly, but not implicitly. That is, they could say

        use Module 'open';

    and it would import the open override. But if they said

        use Module;

    they would get the default imports without overrides.

    The foregoing mechanism for overriding built-ins is restricted, quite deliberately, to the package that requests the import. There is a second method that is sometimes applicable when you wish to override a built-in everywhere, without regard to namespace boundaries. This is achieved by importing a sub into the special namespace CORE::GLOBAL:: . Here is an example that quite brazenly replaces the glob operator with something that understands regular expressions.

        package REGlob;
        require Exporter;
        @ISA = 'Exporter';
        @EXPORT_OK = 'glob';

        sub import {
            my $pkg = shift;
            return unless @_;
            my $sym = shift;
            my $where = ($sym =~ s/^GLOBAL_// ? 'CORE::GLOBAL' : caller(0));
            $pkg->export($where, $sym, @_);
        }

        sub glob {
            my $pat = shift;
            my @got;
            if (opendir my $d, '.') {
                @got = grep /$pat/, readdir $d;
                closedir $d;
            }
            return @got;
        }
        1;

    And here's how it could be (ab)used:

        # use REGlob 'GLOBAL_glob'; # override glob() in ALL namespaces
        package Foo;
        use REGlob 'glob';          # override glob() in Foo:: only
        print for <^[a-z_]+\.pm\$>; # show all pragmatic modules

    The initial comment shows a contrived, even dangerous example. By overriding glob globally, you would be forcing the new (and subversive) behavior for the glob operator for every namespace, without the complete cognizance or cooperation of the modules that own those namespaces. Naturally, this should be done with extreme caution--if it must be done at all.

    The REGlob example above does not implement all the support needed to cleanly override perl's glob operator. The built-in glob has different behaviors depending on whether it appears in a scalar or list context, but our REGlob doesn't. Indeed, many perl built-ins have such context-sensitive behaviors, and these must be adequately supported by a properly written override. For a fully functional example of overriding glob, study the implementation of File::DosGlob in the standard library.

    When you override a built-in, your replacement should be consistent (if possible) with the built-in native syntax. You can achieve this by using a suitable prototype. To get the prototype of an overridable built-in, use the prototype function with an argument of "CORE::builtin_name" (see prototype).

    Note however that some built-ins can't have their syntax expressed by a prototype (such as system or chomp). If you override them you won't be able to fully mimic their original syntax.

    The built-ins do, require and glob can also be overridden, but due to special magic, their original syntax is preserved, and you don't have to define a prototype for their replacements. (You can't override the do BLOCK syntax, though).

    require has special additional dark magic: if you invoke your require replacement as require Foo::Bar , it will actually receive the argument "Foo/Bar.pm" in @_. See require.
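
    A sketch of a global require override installed at compile time (CORE::require here hands off to the untouched built-in; the warning text is illustrative):

        BEGIN {
            *CORE::GLOBAL::require = sub {
                my ($name) = @_;                 # "Foo/Bar.pm" for "require Foo::Bar"
                warn "about to require $name\n";
                return CORE::require($name);     # call the real built-in
            };
        }

        require Data::Dumper;                    # warns: about to require Data/Dumper.pm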

    And, as you'll have noticed from the previous example, if you override glob, the <*> glob operator is overridden as well.

    In a similar fashion, overriding the readline function also overrides the equivalent I/O operator <FILEHANDLE> . Also, overriding readpipe also overrides the operators `` and qx//.

    Finally, some built-ins (e.g. exists or grep) can't be overridden.

    Autoloading

    If you call a subroutine that is undefined, you would ordinarily get an immediate, fatal error complaining that the subroutine doesn't exist. (Likewise for subroutines being used as methods, when the method doesn't exist in any base class of the class's package.) However, if an AUTOLOAD subroutine is defined in the package or packages used to locate the original subroutine, then that AUTOLOAD subroutine is called with the arguments that would have been passed to the original subroutine. The fully qualified name of the original subroutine magically appears in the global $AUTOLOAD variable of the same package as the AUTOLOAD routine. The name is not passed as an ordinary argument because, er, well, just because, that's why. (As an exception, a method call to a nonexistent import or unimport method is just skipped instead. Also, if the AUTOLOAD subroutine is an XSUB, there are other ways to retrieve the subroutine name. See Autoloading with XSUBs in perlguts for details.)

    Many AUTOLOAD routines load in a definition for the requested subroutine using eval(), then execute that subroutine using a special form of goto() that erases the stack frame of the AUTOLOAD routine without a trace. (See the source to the standard module documented in AutoLoader, for example.) But an AUTOLOAD routine can also just emulate the routine and never define it. For example, let's pretend that a function that wasn't defined should just invoke system with those arguments. All you'd do is:

        sub AUTOLOAD {
            my $program = $AUTOLOAD;
            $program =~ s/.*:://;
            system($program, @_);
        }
        date();
        who('am', 'i');
        ls('-l');

    In fact, if you predeclare functions you want to call that way, you don't even need parentheses:

        use subs qw(date who ls);
        date;
        who "am", "i";
        ls '-l';

    A more complete example of this is the Shell module on CPAN, which can treat undefined subroutine calls as calls to external programs.

    Mechanisms are available to help module writers split their modules into autoloadable files. See the standard AutoLoader module described in AutoLoader and in AutoSplit, the standard SelfLoader module in SelfLoader, and the document on adding C functions to Perl code in perlxs.

    Subroutine Attributes

    A subroutine declaration or definition may have a list of attributes associated with it. If such an attribute list is present, it is broken up at space or colon boundaries and treated as though a use attributes had been seen. See attributes for details about what attributes are currently supported. Unlike the limitation with the obsolescent use attrs , the sub : ATTRLIST syntax works to associate the attributes with a pre-declaration, and not just with a subroutine definition.

    The attributes must be valid as simple identifier names (without any punctuation other than the '_' character). They may have a parameter list appended, which is only checked for whether its parentheses ('(',')') nest properly.

    Examples of valid syntax (even though the attributes are unknown):

        sub fnord (&\%) : switch(10,foo(7,3)) : expensive;
        sub plugh () : Ugly('\(") :Bad;
        sub xyzzy : _5x5 { ... }

    Examples of invalid syntax:

        sub fnord : switch(10,foo(); # ()-string not balanced
        sub snoid : Ugly('(');       # ()-string not balanced
        sub xyzzy : 5x5;             # "5x5" not a valid identifier
        sub plugh : Y2::north;       # "Y2::north" not a simple identifier
        sub snurt : foo + bar;       # "+" not a colon or space

    The attribute list is passed as a list of constant strings to the code which associates them with the subroutine. In particular, the second example of valid syntax above currently looks like this in terms of how it's parsed and invoked:

        use attributes __PACKAGE__, \&plugh, q[Ugly('\(")], 'Bad';

    For further details on attribute lists and their manipulation, see attributes and Attribute::Handlers.

    SEE ALSO

    See Function Templates in perlref for more about references and closures. See perlxs if you'd like to learn about calling C subroutines from Perl. See perlembed if you'd like to learn about calling Perl subroutines from C. See perlmod to learn about bundling up your functions in separate files. See perlmodlib to learn what library modules come standard on your system. See perlootut to learn how to make object method calls.

