What’s New In Python 3.2¶
Author: Raymond Hettinger
This article explains the new features in Python 3.2 as compared to 3.1. Python 3.2 was released on February 20, 2011. It focuses on a few highlights and gives a few examples. For full details, see the Misc/NEWS file.
See also
PEP 392 - Python 3.2 Release Schedule
PEP 384: Defining a Stable ABI¶
In the past, extension modules built for one Python version were often not usable with other Python versions. Particularly on Windows, every feature release of Python required rebuilding all extension modules that one wanted to use. This requirement was the result of the free access to Python interpreter internals that extension modules could use.
With Python 3.2, an alternative approach becomes available: extension modules which restrict themselves to a limited API (by defining Py_LIMITED_API) cannot use many of the internals, but are constrained to a set of API functions that are promised to be stable for several releases. As a consequence, extension modules built for 3.2 in that mode will also work with 3.3, 3.4, and so on. Extension modules that make use of details of memory structures can still be built, but will need to be recompiled for every feature release.
See also
- PEP 384 - Defining a Stable ABI
PEP written by Martin von Löwis.
PEP 389: Argparse Command Line Parsing Module¶
A new module for command line parsing, argparse, was introduced to
overcome the limitations of optparse which did not provide support for
positional arguments (not just options), subcommands, required options and other
common patterns of specifying and validating options.
This module has already had widespread success in the community as a
third-party module. Being more fully featured than its predecessor, the
argparse module is now the preferred module for command-line processing.
The older module is still being kept available because of the substantial amount
of legacy code that depends on it.
Here’s an annotated example parser showing features like limiting results to a set of choices, specifying a metavar in the help screen, validating that one or more positional arguments is present, and making a required option:
import argparse
parser = argparse.ArgumentParser(
            description = 'Manage servers',             # main description for help
            epilog = 'Tested on Solaris and Linux')     # displayed after help
parser.add_argument('action',                           # argument name
            choices = ['deploy', 'start', 'stop'],      # three allowed values
            help = 'action on each target')             # help msg
parser.add_argument('targets',
            metavar = 'HOSTNAME',                       # var name used in help msg
            nargs = '+',                                # require one or more targets
            help = 'url for target machines')           # help msg explanation
parser.add_argument('-u', '--user',                     # -u or --user option
            required = True,                            # make it a required argument
            help = 'login as user')
Example of calling the parser on a command string:
>>> cmd = 'deploy sneezy.example.com sleepy.example.com -u skycaptain'
>>> result = parser.parse_args(cmd.split())
>>> result.action
'deploy'
>>> result.targets
['sneezy.example.com', 'sleepy.example.com']
>>> result.user
'skycaptain'
Example of the parser’s automatically generated help:
>>> parser.parse_args('-h'.split())

usage: manage_cloud.py [-h] -u USER
                       {deploy,start,stop} HOSTNAME [HOSTNAME ...]

Manage servers

positional arguments:
  {deploy,start,stop}   action on each target
  HOSTNAME              url for target machines

optional arguments:
  -h, --help            show this help message and exit
  -u USER, --user USER  login as user

Tested on Solaris and Linux
An especially nice argparse feature is the ability to define subparsers,
each with their own argument patterns and help displays:
import argparse
parser = argparse.ArgumentParser(prog='HELM')
subparsers = parser.add_subparsers()
parser_l = subparsers.add_parser('launch', help='Launch Control') # first subgroup
parser_l.add_argument('-m', '--missiles', action='store_true')
parser_l.add_argument('-t', '--torpedos', action='store_true')
parser_m = subparsers.add_parser('move', help='Move Vessel',      # second subgroup
                                 aliases=('steer', 'turn'))       # equivalent names
parser_m.add_argument('-c', '--course', type=int, required=True)
parser_m.add_argument('-s', '--speed', type=int, default=0)
$ ./helm.py --help # top level help (launch and move)
$ ./helm.py launch --help # help for launch options
$ ./helm.py launch --missiles # set missiles=True and torpedos=False
$ ./helm.py steer --course 180 --speed 5 # set movement parameters
See also
- PEP 389 - New Command Line Parsing Module
PEP written by Steven Bethard.
Upgrading optparse code for details on the differences from optparse.
PEP 391: Dictionary Based Configuration for Logging¶
The logging module provided two kinds of configuration: one style with
function calls for each option, and another style driven by an external file saved
in a ConfigParser format. Those options did not provide the flexibility
to create configurations from JSON or YAML files, nor did they support
incremental configuration, which is needed for specifying logger options from a
command line.
To support a more flexible style, the module now offers
logging.config.dictConfig() for specifying logging configuration with
plain Python dictionaries. The configuration options include formatters,
handlers, filters, and loggers. Here’s a working example of a configuration
dictionary:
{"version": 1,
"formatters": {"brief": {"format": "%(levelname)-8s: %(name)-15s: %(message)s"},
"full": {"format": "%(asctime)s %(name)-15s %(levelname)-8s %(message)s"}
},
"handlers": {"console": {
"class": "logging.StreamHandler",
"formatter": "brief",
"level": "INFO",
"stream": "ext://sys.stdout"},
"console_priority": {
"class": "logging.StreamHandler",
"formatter": "full",
"level": "ERROR",
"stream": "ext://sys.stderr"}
},
"root": {"level": "DEBUG", "handlers": ["console", "console_priority"]}}
If that dictionary is stored in a file called conf.json, it can be
loaded and called with code like this:
>>> import json, logging.config
>>> with open('conf.json') as f:
...     conf = json.load(f)
...
>>> logging.config.dictConfig(conf)
>>> logging.info("Transaction completed normally")
INFO    : root           : Transaction completed normally
>>> logging.critical("Abnormal termination")
2011-02-17 11:14:36,694 root CRITICAL Abnormal termination
See also
- PEP 391 - Dictionary Based Configuration for Logging
PEP written by Vinay Sajip.
PEP 3148: The concurrent.futures module¶
Code for creating and managing concurrency is being collected in a new top-level namespace, concurrent. Its first member is a futures package which provides a uniform high-level interface for managing threads and processes.
The design for concurrent.futures was inspired by the
java.util.concurrent package. In that model, a running call and its result
are represented by a Future object that abstracts
features common to threads, processes, and remote procedure calls. That object
supports status checks (running or done), timeouts, cancellations, adding
callbacks, and access to results or exceptions.
The primary offering of the new module is a pair of executor classes for launching and managing calls. The goal of the executors is to make it easier to use existing tools for making parallel calls. They save the effort needed to set up a pool of resources, launch the calls, create a results queue, add time-out handling, and limit the total number of threads, processes, or remote procedure calls.
Ideally, each application should share a single executor across multiple components so that process and thread limits can be centrally managed. This solves the design challenge that arises when each component has its own competing strategy for resource management.
Both classes share a common interface with three methods:
submit() for scheduling a callable and
returning a Future object;
map() for scheduling many asynchronous calls
at a time, and shutdown() for freeing
resources. The class is a context manager and can be used in a
with statement to assure that resources are automatically released
when currently pending futures are done executing.
A simple example of ThreadPoolExecutor is the
launch of four parallel threads for copying files:
import concurrent.futures, shutil
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as e:
    e.submit(shutil.copy, 'src1.txt', 'dest1.txt')
    e.submit(shutil.copy, 'src2.txt', 'dest2.txt')
    e.submit(shutil.copy, 'src3.txt', 'dest3.txt')
    e.submit(shutil.copy, 'src4.txt', 'dest4.txt')
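The executor interface described above can be exercised just as easily with map(). Here is a minimal sketch (not from the original article) that fans a toy function out to a process pool; exiting the with block calls shutdown() automatically:

import concurrent.futures

def square(x):
    return x * x

if __name__ == '__main__':
    # map() schedules many calls at once and yields results in input order;
    # the with-statement frees the pool once pending futures are done.
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as e:
        print(list(e.map(square, range(10))))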
See also
- PEP 3148 - Futures – Execute Computations Asynchronously
PEP written by Brian Quinlan.
Code for Threaded Parallel URL reads, an example using threads to fetch multiple web pages in parallel.
Code for computing prime numbers in
parallel, an example demonstrating
ProcessPoolExecutor.
PEP 3147: PYC Repository Directories¶
Python’s scheme for caching bytecode in .pyc files did not work well in environments with multiple Python interpreters. If one interpreter encountered a cached file created by another interpreter, it would recompile the source and overwrite the cached file, thus losing the benefits of caching.
The issue of “pyc fights” has become more pronounced as it has become commonplace for Linux distributions to ship with multiple versions of Python. These conflicts also arise with CPython alternatives such as Unladen Swallow.
To solve this problem, Python’s import machinery has been extended to use distinct filenames for each interpreter. Instead of Python 3.2 and Python 3.3 and Unladen Swallow each competing for a file called “mymodule.pyc”, they will now look for “mymodule.cpython-32.pyc”, “mymodule.cpython-33.pyc”, and “mymodule.unladen10.pyc”. And to prevent all of these new files from cluttering source directories, the pyc files are now collected in a “__pycache__” directory stored under the package directory.
Aside from the filenames and target directories, the new scheme has a few aspects that are visible to the programmer:
- Imported modules now have a __cached__ attribute which stores the name of the actual file that was imported:

>>> import collections
>>> collections.__cached__
'c:/py32/lib/__pycache__/collections.cpython-32.pyc'

- The tag that is unique to each interpreter is accessible from the imp module:

>>> import imp
>>> imp.get_tag()
'cpython-32'

- Scripts that try to deduce the source filename from the imported file now need to be smarter. It is no longer sufficient to simply strip the "c" from a ".pyc" filename. Instead, use the new functions in the imp module:

>>> imp.source_from_cache('c:/py32/lib/__pycache__/collections.cpython-32.pyc')
'c:/py32/lib/collections.py'
>>> imp.cache_from_source('c:/py32/lib/collections.py')
'c:/py32/lib/__pycache__/collections.cpython-32.pyc'

- The py_compile and compileall modules have been updated to reflect the new naming convention and target directory. The command-line invocation of compileall has new options: -i for specifying a list of files and directories to compile and -b which causes bytecode files to be written to their legacy location rather than __pycache__.

- The importlib.abc module has been updated with new abstract base classes for loading bytecode files. The obsolete ABCs, PyLoader and PyPycLoader, have been deprecated (instructions on how to stay Python 3.1 compatible are included with the documentation).
See also
- PEP 3147 - PYC Repository Directories
PEP written by Barry Warsaw.
PEP 3149: ABI Version Tagged .so Files¶
The PYC repository directory allows multiple bytecode cache files to be co-located. This PEP implements a similar mechanism for shared object files by giving them a common directory and distinct names for each version.
The common directory is “pyshared” and the file names are made distinct by identifying the Python implementation (such as CPython, PyPy, Jython, etc.), the major and minor version numbers, and optional build flags (such as “d” for debug, “m” for pymalloc, “u” for wide-unicode). For an arbitrary package “foo”, you may see these files when the distribution package is installed:
/usr/share/pyshared/foo.cpython-32m.so
/usr/share/pyshared/foo.cpython-33md.so
In Python itself, the tags are accessible from functions in the sysconfig
module:
>>> import sysconfig
>>> sysconfig.get_config_var('SOABI') # find the version tag
'cpython-32mu'
>>> sysconfig.get_config_var('EXT_SUFFIX') # find the full filename extension
'.cpython-32mu.so'
See also
- PEP 3149 - ABI Version Tagged .so Files
PEP written by Barry Warsaw.
PEP 3333: Python Web Server Gateway Interface v1.0.1¶
This informational PEP clarifies how bytes/text issues are to be handled by the
WSGI protocol. The challenge is that string handling in Python 3 is most
conveniently handled with the str type even though the HTTP protocol
is itself bytes oriented.
The PEP differentiates so-called native strings that are used for request/response headers and metadata versus byte strings which are used for the bodies of requests and responses.
The native strings are always of type str but are restricted to code
points between U+0000 through U+00FF which are translatable to bytes using
Latin-1 encoding. These strings are used for the keys and values in the
environment dictionary and for response headers and statuses in the
start_response() function. They must follow RFC 2616 with respect to
encoding. That is, they must either be ISO-8859-1 characters or use
RFC 2047 MIME encoding.
For developers porting WSGI applications from Python 2, here are the salient points:
- If the app already used strings for headers in Python 2, no change is needed.

- If instead, the app encoded output headers or decoded input headers, then the headers will need to be re-encoded to Latin-1. For example, an output header encoded in utf-8 was using h.encode('utf-8') and now needs to convert from bytes to native strings using h.encode('utf-8').decode('latin-1').

- Values yielded by an application or sent using the write() method must be byte strings. The start_response() function and environ must use native strings. The two cannot be mixed.
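The Latin-1 re-encoding trick above is lossless because every byte value maps to exactly one code point in the range U+0000 through U+00FF. A small illustrative sketch (the header text is made up):

h = 'Dürst'                              # hypothetical non-ASCII header text
raw = h.encode('utf-8')                  # the bytes actually sent on the wire
native = raw.decode('latin-1')           # a WSGI "native string"
assert native.encode('latin-1') == raw   # round-trips back to bytes, no loss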
For server implementers writing CGI-to-WSGI pathways or other CGI-style
protocols, users must be able to access the environment using native strings
even though the underlying platform may have a different convention. To bridge
this gap, the wsgiref module has a new function,
wsgiref.handlers.read_environ() for transcoding CGI variables from
os.environ into native strings and returning a new dictionary.
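A minimal sketch of how a CGI-style server might use it (the PATH_INFO lookup is only for illustration):

from wsgiref.handlers import read_environ

# Transcode the CGI variables in os.environ into native strings,
# leaving os.environ itself untouched.
environ = read_environ()
path = environ.get('PATH_INFO', '')    # a native str on every platform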
See also
- PEP 3333 - Python Web Server Gateway Interface v1.0.1
PEP written by Phillip Eby.
Other Language Changes¶
Some smaller changes made to the core Python language are:
- String formatting for format() and str.format() gained new capabilities for the format character #. Previously, for integers in binary, octal, or hexadecimal, it caused the output to be prefixed with ‘0b’, ‘0o’, or ‘0x’ respectively. Now it can also handle floats, complex, and Decimal, causing the output to always have a decimal point even when no digits follow it.

>>> format(20, '#o')
'0o24'
>>> format(12.34, '#5.0f')
' 12.'
(Suggested by Mark Dickinson and implemented by Eric Smith in bpo-7094.)
- There is also a new str.format_map() method that extends the capabilities of the existing str.format() method by accepting arbitrary mapping objects. This new method makes it possible to use string formatting with any of Python’s many dictionary-like objects such as defaultdict, Shelf, ConfigParser, or dbm. It is also useful with custom dict subclasses that normalize keys before look-up or that supply a __missing__() method for unknown keys:

>>> import shelve
>>> d = shelve.open('tmp.shl')
>>> 'The {project_name} status is {status} as of {date}'.format_map(d)
'The testing project status is green as of February 15, 2011'

>>> class LowerCasedDict(dict):
...     def __getitem__(self, key):
...         return dict.__getitem__(self, key.lower())
>>> lcd = LowerCasedDict(part='widgets', quantity=10)
>>> 'There are {QUANTITY} {Part} in stock'.format_map(lcd)
'There are 10 widgets in stock'

>>> class PlaceholderDict(dict):
...     def __missing__(self, key):
...         return '<{}>'.format(key)
>>> 'Hello {name}, welcome to {location}'.format_map(PlaceholderDict())
'Hello <name>, welcome to <location>'
(Suggested by Raymond Hettinger and implemented by Eric Smith in bpo-6081.)
- The interpreter can now be started with a quiet option, -q, to prevent the copyright and version information from being displayed in the interactive mode. The option can be introspected using the sys.flags attribute:

$ python -q
>>> sys.flags
sys.flags(debug=0, division_warning=0, inspect=0, interactive=0,
optimize=0, dont_write_bytecode=0, no_user_site=0, no_site=0,
ignore_environment=0, verbose=0, bytes_warning=0, quiet=1)
(Contributed by Marcin Wojdyr in bpo-1772833).
- The hasattr() function works by calling getattr() and detecting whether an exception is raised. This technique allows it to detect methods created dynamically by __getattr__() or __getattribute__() which would otherwise be absent from the class dictionary. Formerly, hasattr would catch any exception, possibly masking genuine errors. Now, hasattr has been tightened to only catch AttributeError and let other exceptions pass through:

>>> class A:
...     @property
...     def f(self):
...         return 1 // 0
...
>>> a = A()
>>> hasattr(a, 'f')
Traceback (most recent call last):
  ...
ZeroDivisionError: integer division or modulo by zero
(Discovered by Yury Selivanov and fixed by Benjamin Peterson; bpo-9666.)
- The str() of a float or complex number is now the same as its repr(). Previously, the str() form was shorter but that just caused confusion and is no longer needed now that the shortest possible repr() is displayed by default:

>>> import math
>>> repr(math.pi)
'3.141592653589793'
>>> str(math.pi)
'3.141592653589793'
(Proposed and implemented by Mark Dickinson; bpo-9337.)
- memoryview objects now have a release() method and they also now support the context management protocol. This allows timely release of any resources that were acquired when requesting a buffer from the original object.

>>> with memoryview(b'abcdefgh') as v:
...     print(v.tolist())
[97, 98, 99, 100, 101, 102, 103, 104]
(Added by Antoine Pitrou; bpo-9757.)
- Previously it was illegal to delete a name from the local namespace if it occurs as a free variable in a nested block:

def outer(x):
    def inner():
        return x
    inner()
    del x
This is now allowed. Remember that the target of an except clause is cleared, so this code which used to work with Python 2.6, raised a SyntaxError with Python 3.1 and now works again:

def f():
    def print_error():
        print(e)
    try:
        something
    except Exception as e:
        print_error()
        # implicit "del e" here
(See bpo-4617.)
- The internal structsequence tool now creates subclasses of tuple. This means that C structures like those returned by os.stat(), time.gmtime(), and sys.version_info now work like a named tuple and now work with functions and methods that expect a tuple as an argument. This is a big step forward in making the C structures as flexible as their pure Python counterparts:

>>> import sys
>>> isinstance(sys.version_info, tuple)
True
>>> 'Version %d.%d.%d %s(%d)' % sys.version_info
'Version 3.2.0 final(0)'
(Suggested by Arfrever Frehtes Taifersar Arahesis and implemented by Benjamin Peterson in bpo-8413.)
- Warnings are now easier to control using the PYTHONWARNINGS environment variable as an alternative to using -W at the command line:

$ export PYTHONWARNINGS='ignore::RuntimeWarning::,once::UnicodeWarning::'
(Suggested by Barry Warsaw and implemented by Philip Jenvey in bpo-7301.)
- A new warning category, ResourceWarning, has been added. It is emitted when potential issues with resource consumption or cleanup are detected. It is silenced by default in normal release builds but can be enabled through the means provided by the warnings module, or on the command line.

A ResourceWarning is issued at interpreter shutdown if the gc.garbage list isn’t empty, and if gc.DEBUG_UNCOLLECTABLE is set, all uncollectable objects are printed. This is meant to make the programmer aware that their code contains object finalization issues.

A ResourceWarning is also issued when a file object is destroyed without having been explicitly closed. While the deallocator for such an object ensures it closes the underlying operating system resource (usually, a file descriptor), the delay in deallocating the object could produce various issues, especially under Windows. Here is an example of enabling the warning from the command line:

$ python -q -Wdefault
>>> f = open("foo", "wb")
>>> del f
__main__:1: ResourceWarning: unclosed file <_io.BufferedWriter name='foo'>
(Added by Antoine Pitrou and Georg Brandl in bpo-10093 and bpo-477863.)
- range objects now support index and count methods. This is part of an effort to make more objects fully implement the collections.Sequence abstract base class. As a result, the language will have a more uniform API. In addition, range objects now support slicing and negative indices, even with values larger than sys.maxsize. This makes range more interoperable with lists:

>>> range(0, 100, 2).count(10)
1
>>> range(0, 100, 2).index(10)
5
>>> range(0, 100, 2)[5]
10
>>> range(0, 100, 2)[0:5]
range(0, 10, 2)
(Contributed by Daniel Stutzbach in bpo-9213, by Alexander Belopolsky in bpo-2690, and by Nick Coghlan in bpo-10889.)
- The callable() builtin function from Py2.x was resurrected. It provides a concise, readable alternative to using an abstract base class in an expression like isinstance(x, collections.Callable):

>>> callable(max)
True
>>> callable(20)
False
(See bpo-10518.)
- Python’s import mechanism can now load modules installed in directories with non-ASCII characters in the path name. This solves an aggravating problem with home directories for users with non-ASCII characters in their usernames.
(Required extensive work by Victor Stinner in bpo-9425.)
New, Improved, and Deprecated Modules¶
Python’s standard library has undergone significant maintenance efforts and quality improvements.
The biggest news for Python 3.2 is that the email package, mailbox
module, and nntplib modules now work correctly with the bytes/text model
in Python 3. For the first time, there is correct handling of messages with
mixed encodings.
Throughout the standard library, there has been more careful attention to encodings and text versus bytes issues. In particular, interactions with the operating system are now better able to exchange non-ASCII data using the Windows MBCS encoding, locale-aware encodings, or UTF-8.
Another significant win is the addition of substantially better support for SSL connections and security certificates.
In addition, more classes now implement a context manager to support
convenient and reliable resource clean-up using a with statement.
email¶
The usability of the email package in Python 3 has been mostly fixed by
the extensive efforts of R. David Murray. The problem was that emails are
typically read and stored in the form of bytes rather than str
text, and they may contain multiple encodings within a single email. So, the
email package had to be extended to parse and generate email messages in bytes
format.
- New functions message_from_bytes() and message_from_binary_file(), and new classes BytesFeedParser and BytesParser allow binary message data to be parsed into model objects (see the sketch below).

- Given bytes input to the model, get_payload() will by default decode a message body that has a Content-Transfer-Encoding of 8bit using the charset specified in the MIME headers and return the resulting string.

- Given bytes input to the model, Generator will convert message bodies that have a Content-Transfer-Encoding of 8bit to instead have a 7bit Content-Transfer-Encoding.

- Headers with unencoded non-ASCII bytes are deemed to be RFC 2047-encoded using the unknown-8bit character set.

- A new class BytesGenerator produces bytes as output, preserving any unchanged non-ASCII data that was present in the input used to build the model, including message bodies with a Content-Transfer-Encoding of 8bit.

- The smtplib SMTP class now accepts a byte string for the msg argument to the sendmail() method, and a new method, send_message(), accepts a Message object and can optionally obtain the from_addr and to_addrs addresses directly from the object.
(Proposed and implemented by R. David Murray, bpo-4661 and bpo-10321.)
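As a quick illustration of the new binary-parsing entry points, here is a minimal sketch with a made-up message:

import email

raw = (b'From: alice@example.com\r\n'
       b'Subject: Status\r\n'
       b'\r\n'
       b'All systems green.\r\n')      # hypothetical raw message bytes

msg = email.message_from_bytes(raw)    # parse bytes into a model object
print(msg['Subject'])                  # -> Status
print(msg.get_payload())               # -> All systems green.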
elementtree¶
The xml.etree.ElementTree package and its xml.etree.cElementTree
counterpart have been updated to version 1.3.
Several new and useful functions and methods have been added:
- xml.etree.ElementTree.fromstringlist() which builds an XML document from a sequence of fragments
- xml.etree.ElementTree.register_namespace() for registering a global namespace prefix
- xml.etree.ElementTree.tostringlist() for string representation including all sublists
- xml.etree.ElementTree.Element.extend() for appending a sequence of zero or more elements
- xml.etree.ElementTree.Element.iterfind() searches an element and subelements
- xml.etree.ElementTree.Element.itertext() creates a text iterator over an element and its subelements
- xml.etree.ElementTree.TreeBuilder.end() closes the current element
- xml.etree.ElementTree.TreeBuilder.doctype() handles a doctype declaration
Two methods have been deprecated:
- xml.etree.ElementTree.getchildren() (use list(elem) instead)
- xml.etree.ElementTree.getiterator() (use Element.iter instead)
For details of the update, see Introducing ElementTree on Fredrik Lundh’s website.
(Contributed by Florent Xicluna and Fredrik Lundh, bpo-6472.)
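Here is a small sketch (with made-up data) exercising two of the additions listed above, Element.extend() and Element.iterfind():

import xml.etree.ElementTree as ET

root = ET.fromstring('<inventory/>')
parts = [ET.fromstring('<item>%s</item>' % name) for name in ('cog', 'sprocket')]
root.extend(parts)                     # append a sequence of elements
for item in root.iterfind('item'):     # iterate over matching subelements
    print(item.text)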
functools¶
- The functools module includes a new decorator for caching function calls. functools.lru_cache() can save repeated queries to an external resource whenever the results are expected to be the same.

For example, adding a caching decorator to a database query function can save database accesses for popular searches:

>>> import functools
>>> @functools.lru_cache(maxsize=300)
... def get_phone_number(name):
...     c = conn.cursor()
...     c.execute('SELECT phonenumber FROM phonelist WHERE name=?', (name,))
...     return c.fetchone()[0]
>>> for name in user_requests:
...     get_phone_number(name)        # cached lookup
To help with choosing an effective cache size, the wrapped function is instrumented for tracking cache statistics:
>>> get_phone_number.cache_info()
CacheInfo(hits=4805, misses=980, maxsize=300, currsize=300)
If the phonelist table gets updated, the outdated contents of the cache can be cleared with:
>>> get_phone_number.cache_clear()
(Contributed by Raymond Hettinger and incorporating design ideas from Jim Baker, Miki Tebeka, and Nick Coghlan; see recipe 498245, recipe 577479, bpo-10586, and bpo-10593.)
- The functools.wraps() decorator now adds a __wrapped__ attribute pointing to the original callable function. This allows wrapped functions to be introspected. It also copies __annotations__ if defined. And now it also gracefully skips over missing attributes such as __doc__ which might not be defined for the wrapped callable.

In the above example, the cache can be removed by recovering the original function:
>>> get_phone_number = get_phone_number.__wrapped__ # uncached function
(By Nick Coghlan and Terrence Cole; bpo-9567, bpo-3445, and bpo-8814.)
- To help write classes with rich comparison methods, a new decorator functools.total_ordering() will use existing equality and inequality methods to fill in the remaining methods.

For example, supplying __eq__ and __lt__ will enable total_ordering() to fill in __le__, __gt__ and __ge__:

@total_ordering
class Student:
    def __eq__(self, other):
        return ((self.lastname.lower(), self.firstname.lower()) ==
                (other.lastname.lower(), other.firstname.lower()))
    def __lt__(self, other):
        return ((self.lastname.lower(), self.firstname.lower()) <
                (other.lastname.lower(), other.firstname.lower()))
With the total_ordering decorator, the remaining comparison methods are filled in automatically.
(Contributed by Raymond Hettinger.)
- To aid in porting programs from Python 2, the functools.cmp_to_key() function converts an old-style comparison function to a modern key function:

>>> # locale-aware sort order
>>> sorted(iterable, key=cmp_to_key(locale.strcoll))
For sorting examples and a brief sorting tutorial, see the Sorting HowTo tutorial.
(Contributed by Raymond Hettinger.)
itertools¶
- The itertools module has a new accumulate() function modeled on APL’s scan operator and Numpy’s accumulate function:

>>> from itertools import accumulate
>>> list(accumulate([8, 2, 50]))
[8, 10, 60]
>>> prob_dist = [0.1, 0.4, 0.2, 0.3]
>>> list(accumulate(prob_dist))       # cumulative probability distribution
[0.1, 0.5, 0.7, 1.0]
For an example using accumulate(), see the examples for the random module.

(Contributed by Raymond Hettinger and incorporating design suggestions from Mark Dickinson.)
collections¶
- The collections.Counter class now has two forms of in-place subtraction, the existing -= operator for saturating subtraction and the new subtract() method for regular subtraction. The former is suitable for multisets which only have positive counts, and the latter is more suitable for use cases that allow negative counts:

>>> from collections import Counter
>>> tally = Counter(dogs=5, cats=3)
>>> tally -= Counter(dogs=2, cats=8)    # saturating subtraction
>>> tally
Counter({'dogs': 3})
>>> tally = Counter(dogs=5, cats=3)
>>> tally.subtract(dogs=2, cats=8)      # regular subtraction
>>> tally
Counter({'dogs': 3, 'cats': -5})
(Contributed by Raymond Hettinger.)
- The collections.OrderedDict class has a new method move_to_end() which takes an existing key and moves it to either the first or last position in the ordered sequence.

The default is to move an item to the last position. This is equivalent to renewing an entry with od[k] = od.pop(k).

A fast move-to-end operation is useful for resequencing entries. For example, an ordered dictionary can be used to track order of access by aging entries from the oldest to the most recently accessed.

>>> from collections import OrderedDict
>>> d = OrderedDict.fromkeys(['a', 'b', 'X', 'd', 'e'])
>>> list(d)
['a', 'b', 'X', 'd', 'e']
>>> d.move_to_end('X')
>>> list(d)
['a', 'b', 'd', 'e', 'X']
(Contributed by Raymond Hettinger.)
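Moving a key to the first position works the same way through the optional last parameter; a brief sketch continuing the session above:

>>> d.move_to_end('X', last=False)    # move 'X' to the front instead
>>> list(d)
['X', 'a', 'b', 'd', 'e']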
- The collections.deque class grew two new methods, count() and reverse(), that make deques more substitutable for list objects:

>>> from collections import deque
>>> d = deque('simsalabim')
>>> d.count('s')
2
>>> d.reverse()
>>> d
deque(['m', 'i', 'b', 'a', 'l', 'a', 's', 'm', 'i', 's'])
(Contributed by Raymond Hettinger.)
threading¶
The threading module has a new Barrier
synchronization class for making multiple threads wait until all of them have
reached a common barrier point. Barriers are useful for making sure that a task
with multiple preconditions does not run until all of the predecessor tasks are
complete.
Barriers can work with an arbitrary number of threads. This is a generalization of a Rendezvous which is defined for only two threads.
Implemented as a two-phase cyclic barrier, Barrier objects
are suitable for use in loops. The separate filling and draining phases
assure that all threads get released (drained) before any one of them can loop
back and re-enter the barrier. The barrier fully resets after each cycle.
Example of using barriers:
from threading import Barrier, Thread

def get_votes(site):
    ballots = conduct_election(site)
    all_polls_closed.wait()        # do not count until all polls are closed
    totals = summarize(ballots)
    publish(site, totals)

all_polls_closed = Barrier(len(sites))
for site in sites:
    Thread(target=get_votes, args=(site,)).start()
In this example, the barrier enforces a rule that votes cannot be counted at any
polling site until all polls are closed. Notice how a solution with a barrier
is similar to one with threading.Thread.join(), but the threads stay alive
and continue to do work (summarizing ballots) after the barrier point is
crossed.
If any of the predecessor tasks can hang or be delayed, a barrier can be created
with an optional timeout parameter. Then if the timeout period elapses before
all the predecessor tasks reach the barrier point, all waiting threads are
released and a BrokenBarrierError exception is raised:
def get_votes(site):
    ballots = conduct_election(site)
    try:
        all_polls_closed.wait(timeout=midnight - time.time())
    except BrokenBarrierError:
        lockbox = seal_ballots(ballots)
        queue.put(lockbox)
    else:
        totals = summarize(ballots)
        publish(site, totals)
In this example, the barrier enforces a more robust rule. If some election sites do not finish before midnight, the barrier times out and the ballots are sealed and deposited in a queue for later handling.
See Barrier Synchronization Patterns for more examples of how barriers can be used in parallel computing. Also, there is a simple but thorough explanation of barriers in The Little Book of Semaphores, section 3.6.
(Contributed by Kristján Valur Jónsson with an API review by Jeffrey Yasskin in bpo-8777.)
datetime and time¶
- The datetime module has a new type timezone that implements the tzinfo interface by returning a fixed UTC offset and timezone name. This makes it easier to create timezone-aware datetime objects:

>>> from datetime import datetime, timezone

>>> datetime.now(timezone.utc)
datetime.datetime(2010, 12, 8, 21, 4, 2, 923754, tzinfo=datetime.timezone.utc)

>>> datetime.strptime("01/01/2000 12:00 +0000", "%m/%d/%Y %H:%M %z")
datetime.datetime(2000, 1, 1, 12, 0, tzinfo=datetime.timezone.utc)
- Also, timedelta objects can now be multiplied by float and divided by float and int objects. And timedelta objects can now divide one another (see the sketch after this list).

- The datetime.date.strftime() method is no longer restricted to years after 1900. The new supported year range is from 1000 to 9999 inclusive.

- Whenever a two-digit year is used in a time tuple, the interpretation has been governed by time.accept2dyear. The default is True which means that for a two-digit year, the century is guessed according to the POSIX rules governing the %y strptime format.

Starting with Py3.2, use of the century guessing heuristic will emit a DeprecationWarning. Instead, it is recommended that time.accept2dyear be set to False so that large date ranges can be used without guesswork:

>>> import time, warnings
>>> warnings.resetwarnings()     # remove the default warning filters
>>> time.accept2dyear = True     # guess whether 11 means 11 or 2011
>>> time.asctime((11, 1, 1, 12, 34, 56, 4, 1, 0))
Warning (from warnings module):
  ...
DeprecationWarning: Century info guessed for a 2-digit year.
'Fri Jan  1 12:34:56 2011'
>>> time.accept2dyear = False    # use the full range of allowable dates
>>> time.asctime((11, 1, 1, 12, 34, 56, 4, 1, 0))
'Fri Jan  1 12:34:56 11'
- Several functions now have significantly expanded date ranges. When time.accept2dyear is false, the time.asctime() function will accept any year that fits in a C int, while the time.mktime() and time.strftime() functions will accept the full range supported by the corresponding operating system functions.
(Contributed by Alexander Belopolsky and Victor Stinner in bpo-1289118, bpo-5094, bpo-6641, bpo-2706, bpo-1777412, bpo-8013, and bpo-10827.)
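A brief sketch of the new timedelta arithmetic mentioned in the list above:

>>> from datetime import timedelta
>>> day = timedelta(hours=24)
>>> day * 2.5                      # timedelta multiplied by a float
datetime.timedelta(2, 43200)
>>> day / 3                        # timedelta divided by an int
datetime.timedelta(0, 28800)
>>> day / timedelta(hours=8)       # one timedelta divided by another
3.0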
math¶
The math module has been updated with six new functions inspired by the
C99 standard.
The isfinite() function provides a reliable and fast way to detect
special values. It returns True for regular numbers and False for NaN or
infinity:
>>> from math import isfinite
>>> [isfinite(x) for x in (123, 4.56, float('Nan'), float('Inf'))]
[True, True, False, False]
The expm1() function computes e**x-1 for small values of x
without incurring the loss of precision that usually accompanies the subtraction
of nearly equal quantities:
>>> from math import expm1
>>> expm1(0.013671875) # more accurate way to compute e**x-1 for a small x
0.013765762467652909
The erf() function computes a probability integral or Gaussian
error function. The
complementary error function, erfc(), is 1 - erf(x):
>>> from math import erf, erfc, sqrt
>>> erf(1.0/sqrt(2.0)) # portion of normal distribution within 1 standard deviation
0.682689492137086
>>> erfc(1.0/sqrt(2.0)) # portion of normal distribution outside 1 standard deviation
0.31731050786291404
>>> erf(1.0/sqrt(2.0)) + erfc(1.0/sqrt(2.0))
1.0
The gamma() function is a continuous extension of the factorial
function. See https://en.wikipedia.org/wiki/Gamma_function for details. Because
the function is related to factorials, it grows large even for small values of
x, so there is also a lgamma() function for computing the natural
logarithm of the gamma function:
>>> from math import gamma, lgamma
>>> gamma(7.0) # six factorial
720.0
>>> lgamma(801.0) # log(800 factorial)
4551.950730698041
(Contributed by Mark Dickinson.)
abc¶
The abc module now supports abstractclassmethod() and
abstractstaticmethod().
These tools make it possible to define an abstract base class that
requires a particular classmethod() or staticmethod() to be
implemented:
import abc

class Temperature(metaclass=abc.ABCMeta):
    @abc.abstractclassmethod
    def from_fahrenheit(cls, t):
        ...
    @abc.abstractclassmethod
    def from_celsius(cls, t):
        ...
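A concrete subclass must then supply real classmethods before it can be instantiated; a minimal sketch (the conversion formula is illustrative, not from the original text):

class Celsius(Temperature):
    def __init__(self, degrees):
        self.degrees = degrees

    @classmethod
    def from_fahrenheit(cls, t):
        return cls((t - 32) * 5 / 9)

    @classmethod
    def from_celsius(cls, t):
        return cls(t)

c = Celsius.from_fahrenheit(212)    # both abstract methods are now defined
print(c.degrees)                    # 100.0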
(Patch submitted by Daniel Urban; bpo-5867.)
io¶
The io.BytesIO class has a new method, getbuffer(), which
provides functionality similar to memoryview(). It creates an editable
view of the data without making a copy. The buffer’s random access and support
for slice notation are well-suited to in-place editing:
>>> REC_LEN, LOC_START, LOC_LEN = 34, 7, 11
>>> def change_location(buffer, record_number, location):
...     start = record_number * REC_LEN + LOC_START
...     buffer[start: start+LOC_LEN] = location
>>> import io
>>> byte_stream = io.BytesIO(
... b'G3805 storeroom Main chassis '
... b'X7899 shipping Reserve cog '
... b'L6988 receiving Primary sprocket'
... )
>>> buffer = byte_stream.getbuffer()
>>> change_location(buffer, 1, b'warehouse ')
>>> change_location(buffer, 0, b'showroom ')
>>> print(byte_stream.getvalue())
b'G3805 showroom Main chassis '
b'X7899 warehouse Reserve cog '
b'L6988 receiving Primary sprocket'
(Contributed by Antoine Pitrou in bpo-5506.)
reprlib¶
When writing a __repr__() method for a custom container, it is easy to
forget to handle the case where a member refers back to the container itself.
Python’s builtin objects such as list and set handle
self-reference by displaying “…” in the recursive part of the representation
string.
To help write such __repr__() methods, the reprlib module has a new
decorator, recursive_repr(), for detecting recursive calls to
__repr__() and substituting a placeholder string instead:
>>> from reprlib import recursive_repr
>>> class MyList(list):
...     @recursive_repr()
...     def __repr__(self):
...         return '<' + '|'.join(map(repr, self)) + '>'
...
>>> m = MyList('abc')
>>> m.append(m)
>>> m.append('x')
>>> print(m)
<'a'|'b'|'c'|...|'x'>
(Contributed by Raymond Hettinger in bpo-9826 and bpo-9840.)
logging¶
In addition to dictionary-based configuration described above, the
logging package has many other improvements.
The logging documentation has been augmented by a basic tutorial, an advanced tutorial, and a cookbook of logging recipes. These documents are the fastest way to learn about logging.
The logging.basicConfig() set-up function gained a style argument to
support three different types of string formatting. It defaults to “%” for
traditional %-formatting, can be set to “{” for the new str.format() style, or
can be set to “$” for the shell-style formatting provided by
string.Template. The following three configurations are equivalent:
>>> from logging import basicConfig
>>> basicConfig(style='%', format="%(name)s -> %(levelname)s: %(message)s")
>>> basicConfig(style='{', format="{name} -> {levelname}: {message}")
>>> basicConfig(style='$', format="$name -> $levelname: $message")
If no configuration is set-up before a logging event occurs, there is now a
default configuration using a StreamHandler directed to
sys.stderr for events of WARNING level or higher. Formerly, an
event occurring before a configuration was set-up would either raise an
exception or silently drop the event depending on the value of
logging.raiseExceptions. The new default handler is stored in
logging.lastResort.
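The effect is easy to see in a fresh, unconfigured interpreter; a minimal sketch:

import logging

# No configuration at all: WARNING and above still reach sys.stderr
# through logging.lastResort, while INFO stays below the threshold.
logging.getLogger('demo').warning('disk usage at 95%')
logging.getLogger('demo').info('this event is silently dropped')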
The use of filters has been simplified. Instead of creating a
Filter object, the predicate can be any Python callable that
returns True or False.
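For example, any one-argument callable returning a boolean can now be attached directly (a minimal sketch; the keyword test is made up):

import logging

logging.basicConfig()
handler = logging.getLogger().handlers[0]
# A plain function now works anywhere a Filter object was required.
handler.addFilter(lambda record: 'heartbeat' not in record.getMessage())
logging.warning('heartbeat ping')      # filtered out
logging.warning('disk failure')        # passes through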
There were a number of other improvements that add flexibility and simplify configuration. See the module documentation for a full listing of changes in Python 3.2.
csv¶
The csv module now supports a new dialect, unix_dialect,
which applies quoting for all fields and a traditional Unix style with '\n' as
the line terminator. The registered dialect name is unix.
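A minimal sketch of the new dialect in use:

import csv, sys

writer = csv.writer(sys.stdout, dialect='unix')   # all fields quoted, '\n' endings
writer.writerow(['id', 'part'])
writer.writerow([1, 'cog'])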
The csv.DictWriter class has a new method, writeheader(), for writing out an initial row to document the field names.