What’s new in Python 3.14¶
Editors: Adam Turner and Hugo van Kemenade
This article explains the new features in Python 3.14, compared to 3.13. Python 3.14 was released on 7 October 2025. For full details, see the changelog.
See also
PEP 745 – Python 3.14 release schedule
Summary – Release highlights¶
Python 3.14 is the latest stable release of the Python programming language, with a mix of changes to the language, the implementation, and the standard library. The biggest changes include template string literals, deferred evaluation of annotations, and support for subinterpreters in the standard library.
The library changes include significantly improved capabilities for
introspection in asyncio,
support for Zstandard via a new
compression.zstd module, syntax highlighting in the REPL,
as well as the usual deprecations and removals,
and improvements in user-friendliness and correctness.
This article doesn’t attempt to provide a complete specification of all new features, but instead gives a convenient overview. For full details refer to the documentation, such as the Library Reference and Language Reference. To understand the complete implementation and design rationale for a change, refer to the PEP for a particular new feature; but note that PEPs usually are not kept up-to-date once a feature has been fully implemented. See Porting to Python 3.14 for guidance on upgrading from earlier versions of Python.
Interpreter improvements:
Significant improvements in the standard library:
Syntax highlighting in the default interactive shell, and color output in several standard library CLIs
C API improvements:
Platform support:
PEP 776: Emscripten is now an officially supported platform, at tier 3.
Release changes:
New features¶
PEP 649 & PEP 749: Deferred evaluation of annotations¶
The annotations on functions, classes, and modules are no
longer evaluated eagerly. Instead, annotations are stored in special-purpose
annotate functions and evaluated only when
necessary (except if from __future__ import annotations is used).
This change is designed to improve performance and usability of annotations in Python in most circumstances. The runtime cost for defining annotations is minimized, but it remains possible to introspect annotations at runtime. It is no longer necessary to enclose annotations in strings if they contain forward references.
The new annotationlib module provides tools for inspecting deferred
annotations. Annotations may be evaluated in the VALUE
format (which evaluates annotations to runtime values, similar to the behavior in
earlier Python versions), the FORWARDREF format
(which replaces undefined names with special markers), and the
STRING format (which returns annotations as strings).
This example shows how these formats behave:
>>> from annotationlib import get_annotations, Format
>>> def func(arg: Undefined):
...     pass
>>> get_annotations(func, format=Format.VALUE)
Traceback (most recent call last):
...
NameError: name 'Undefined' is not defined
>>> get_annotations(func, format=Format.FORWARDREF)
{'arg': ForwardRef('Undefined', owner=<function func at 0x...>)}
>>> get_annotations(func, format=Format.STRING)
{'arg': 'Undefined'}
The porting section contains guidance on changes that may be needed due to these changes, though in the majority of cases, code will continue working as-is.
(Contributed by Jelle Zijlstra in PEP 749 and gh-119180; PEP 649 was written by Larry Hastings.)
PEP 734: Multiple interpreters in the standard library¶
The CPython runtime supports running multiple copies of Python in the same process simultaneously and has done so for over 20 years. Each of these separate copies is called an ‘interpreter’. However, the feature had been available only through the C-API.
That limitation is removed in Python 3.14,
with the new concurrent.interpreters module.
Using multiple interpreters offers two notable benefits:
they support a new (to Python), human-friendly concurrency model
true multi-core parallelism
For some use cases, concurrency can improve a program's efficiency and simplify its design at a high level.
At the same time, implementing and maintaining all but the simplest concurrency
is often a struggle for the human brain.
That especially applies to plain threads (for example, threading),
where all memory is shared between all threads.
With multiple isolated interpreters, you can take advantage of a class of concurrency models, like Communicating Sequential Processes (CSP) or the actor model, that have found success in other programming languages, like Smalltalk, Erlang, Haskell, and Go. Think of multiple interpreters as threads but with opt-in sharing.
Regarding multi-core parallelism: as of Python 3.12, interpreters are now sufficiently isolated from one another to be used in parallel (see PEP 684). This unlocks a variety of CPU-intensive use cases for Python that were limited by the GIL.
Using multiple interpreters is similar in many ways to
multiprocessing, in that they both provide isolated logical
“processes” that can run in parallel, with no sharing by default.
However, when using multiple interpreters, an application will use
fewer system resources and will operate more efficiently (since it
stays within the same process). Think of multiple interpreters as
having the isolation of processes with the efficiency of threads.
While the feature has been around for decades, multiple interpreters have not been used widely, due to low awareness and the lack of a standard library module. Consequently, they currently have several notable limitations, which are expected to improve significantly now that the feature is going mainstream.
Current limitations:
starting each interpreter has not been optimized yet
each interpreter uses more memory than necessary (work continues on extensive internal sharing between interpreters)
there aren’t many options yet for truly sharing objects or other data between interpreters (other than memoryview)
many third-party extension modules on PyPI are not yet compatible with multiple interpreters (all standard library extension modules are compatible)
the approach to writing applications that use multiple isolated interpreters is mostly unfamiliar to Python users, for now
The impact of these limitations will depend on future CPython improvements, how interpreters are used, and what the community solves through PyPI packages. Depending on the use case, the limitations may not have much impact, so try it out!
Furthermore, future CPython releases will reduce or eliminate overhead and provide utilities that are less appropriate on PyPI. In the meantime, most of the limitations can also be addressed through extension modules, meaning PyPI packages can fill any gap for 3.14, and even back to 3.12 where interpreters were finally properly isolated and stopped sharing the GIL. Likewise, libraries on PyPI are expected to emerge for high-level abstractions on top of interpreters.
Regarding extension modules, work is in progress to update some PyPI projects, as well as tools like Cython, pybind11, nanobind, and PyO3. The steps for isolating an extension module are found at Isolating Extension Modules. Isolating a module has a lot of overlap with what is required to support free-threading, so the ongoing work in the community in that area will help accelerate support for multiple interpreters.
Also added in 3.14: concurrent.futures.InterpreterPoolExecutor.
(Contributed by Eric Snow in gh-134939.)
See also
PEP 750: Template string literals¶
Template strings are a new mechanism for custom string processing.
They share the familiar syntax of f-strings but, unlike f-strings,
return an object representing the static and interpolated parts of
the string, instead of a simple str.
To write a t-string, use a 't' prefix instead of an 'f':
>>> variety = 'Stilton'
>>> template = t'Try some {variety} cheese!'
>>> type(template)
<class 'string.templatelib.Template'>
Template objects provide access to the static
and interpolated (in curly braces) parts of a string before they are combined.
Iterate over Template instances to access their parts in order:
>>> list(template)
['Try some ', Interpolation('Stilton', 'variety', None, ''), ' cheese!']
It’s easy to write (or call) code to process Template instances.
For example, here’s a function that renders static parts lowercase and
Interpolation instances uppercase:
from string.templatelib import Interpolation

def lower_upper(template):
    """Render static parts lowercase and interpolations uppercase."""
    parts = []
    for part in template:
        if isinstance(part, Interpolation):
            parts.append(str(part.value).upper())
        else:
            parts.append(part.lower())
    return ''.join(parts)

name = 'Wenslydale'
template = t'Mister {name}'
assert lower_upper(template) == 'mister WENSLYDALE'
Because Template instances distinguish between static strings and
interpolations at runtime, they can be useful for sanitising user input.
Writing a html() function that escapes user input in HTML is an exercise
left to the reader!
Template processing code can provide improved flexibility.
For instance, a more advanced html() function could accept
a dict of HTML attributes directly in the template:
attributes = {'src': 'limburger.jpg', 'alt': 'lovely cheese'}
template = t'<img {attributes}>'
assert html(template) == '<img src="limburger.jpg" alt="lovely cheese" />'
Of course, template processing code does not need to return a string-like result.
An even more advanced html() could return a custom type representing
a DOM-like structure.
With t-strings in place, developers can write systems that sanitise SQL, make safe shell operations, improve logging, tackle modern ideas in web development (HTML, CSS, and so on), and implement lightweight custom business DSLs.
(Contributed by Jim Baker, Guido van Rossum, Paul Everitt, Koudai Aono, Lysandros Nikolaou, Dave Peck, Adam Turner, Jelle Zijlstra, Bénédikt Tran, and Pablo Galindo Salgado in gh-132661.)
See also
PEP 768: Safe external debugger interface¶
Python 3.14 introduces a zero-overhead debugging interface that allows debuggers and profilers to safely attach to running Python processes without stopping or restarting them. This is a significant enhancement to Python’s debugging capabilities, meaning that unsafe alternatives are no longer required.
The new interface provides safe execution points for attaching debugger code without modifying the interpreter’s normal execution path or adding any overhead at runtime. Due to this, tools can now inspect and interact with Python applications in real-time, which is a crucial capability for high-availability systems and production environments.
For convenience, this interface is implemented in the sys.remote_exec()
function. For example:
import os
import sys
from tempfile import NamedTemporaryFile

with NamedTemporaryFile(mode='w', suffix='.py', delete=False) as f:
    script_path = f.name
    f.write(f'import my_debugger; my_debugger.connect({os.getpid()})')

# Execute in process with PID 1234
print('Behold! An offering:')
sys.remote_exec(1234, script_path)
This function allows sending Python code to be executed in a target process at the next safe execution point. However, tool authors can also implement the protocol directly as described in the PEP, which details the underlying mechanisms used to safely attach to running processes.
The debugging interface has been carefully designed with security in mind and includes several mechanisms to control access:
A PYTHON_DISABLE_REMOTE_DEBUG environment variable.
A -X disable-remote-debug command-line option.
A --without-remote-debug configure flag to completely disable the feature at build time.
(Contributed by Pablo Galindo Salgado, Matt Wozniski, and Ivona Stojanovic in gh-131591.)
See also
A new type of interpreter¶
A new type of interpreter has been added to CPython.
It uses tail calls between small C functions that implement individual
Python opcodes, rather than one large C case statement.
For certain newer compilers, this interpreter provides
significantly better performance. Preliminary benchmarks suggest a geometric
mean of 3-5% faster on the standard pyperformance benchmark suite,
depending on platform and architecture.
The baseline is Python 3.14 built with Clang 19, without this new interpreter.
This interpreter currently only works with Clang 19 and newer on x86-64 and AArch64 architectures. However, a future release of GCC is expected to support this as well.
This feature is opt-in for now. Enabling profile-guided optimization is highly recommended when using the new interpreter, as it is the only configuration that has been tested and validated for improved performance.
For further information, see --with-tail-call-interp.
Note
This is not to be confused with tail call optimization of Python functions, which is currently not implemented in CPython.
This new interpreter type is an internal implementation detail of the CPython interpreter. It doesn’t change the visible behavior of Python programs at all. It can improve their performance, but doesn’t change anything else.
(Contributed by Ken Jin in gh-128563, with ideas on how to implement this in CPython by Mark Shannon, Garrett Gu, Haoran Xu, and Josh Haberman.)
Free-threaded mode improvements¶
CPython’s free-threaded mode (PEP 703), initially added in 3.13, has been significantly improved in Python 3.14. The implementation described in PEP 703 has been finished, including C API changes, and temporary workarounds in the interpreter were replaced with more permanent solutions. The specializing adaptive interpreter (PEP 659) is now enabled in free-threaded mode, which along with many other optimizations greatly improves its performance. The performance penalty on single-threaded code in free-threaded mode is now roughly 5-10%, depending on the platform and C compiler used.
From Python 3.14, when compiling extension modules for the free-threaded build of
CPython on Windows, the preprocessor variable Py_GIL_DISABLED now needs to
be specified by the build backend, as it will no longer be determined
automatically by the C compiler. For a running interpreter, the setting that
was used at compile time can be found using sysconfig.get_config_var().
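For example, a script or build helper can check at runtime which kind of build it is running on:

```python
import sys
import sysconfig

# 1 on a free-threaded build, 0 on a regular (GIL-enabled) build of 3.14.
print(sysconfig.get_config_var('Py_GIL_DISABLED'))

# sys._is_gil_enabled() reports whether the GIL is actually active right now
# (the GIL can be re-enabled at runtime even on a free-threaded build).
if hasattr(sys, '_is_gil_enabled'):
    print(sys._is_gil_enabled())
```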
The new -X context_aware_warnings flag controls whether concurrent safe warnings control is enabled. The flag defaults to true for the free-threaded build and false for the GIL-enabled build.
A new thread_inherit_context flag has been added,
which if enabled means that threads created with threading.Thread
start with a copy of the Context() of the caller of
start(). Most significantly, this makes the warning
filtering context established by catch_warnings be
“inherited” by threads (or asyncio tasks) started within that context. It also
affects other modules that use context variables, such as the decimal
context manager.
This flag defaults to true for the free-threaded build and false for
the GIL-enabled build.
(Contributed by Sam Gross, Matt Page, Neil Schemenauer, Thomas Wouters, Donghee Na, Kirill Podoprigora, Ken Jin, Itamar Oren, Brett Simmers, Dino Viehland, Nathan Goldbaum, Ralf Gommers, Lysandros Nikolaou, Kumar Aditya, Edgar Margffoy, and many others. Some of these contributors are employed by Meta, which has continued to provide significant engineering resources to support this project.)
Improved error messages¶
The interpreter now provides helpful suggestions when it detects typos in Python keywords. When a word that closely resembles a Python keyword is encountered, the interpreter will suggest the correct keyword in the error message. This feature helps programmers quickly identify and fix common typing mistakes. For example:
>>> whille True:
...     pass
Traceback (most recent call last):
  File "<stdin>", line 1
    whille True:
    ^^^^^^
SyntaxError: invalid syntax. Did you mean 'while'?
While the feature focuses on the most common cases, some variations of misspellings may still result in regular syntax errors. (Contributed by Pablo Galindo in gh-132449.)
elif statements that follow an else block now have a specific error message. (Contributed by Steele Farnsworth in gh-129902.)

>>> if who == "me":
...     print("It's me!")
... else:
...     print("It's not me!")
... elif who is None:
...     print("Who is it?")
  File "<stdin>", line 5
    elif who is None:
    ^^^^
SyntaxError: 'elif' block follows an 'else' block
If a statement is passed to a conditional expression after else, or one of pass, break, or continue is passed before if, then the error message highlights where an expression is required. (Contributed by Sergey Miryanov in gh-129515.)

>>> x = 1 if True else pass
Traceback (most recent call last):
  File "<string>", line 1
    x = 1 if True else pass
                       ^^^^
SyntaxError: expected expression after 'else', but statement is given

>>> x = continue if True else break
Traceback (most recent call last):
  File "<string>", line 1
    x = continue if True else break
        ^^^^^^^^
SyntaxError: expected expression before 'if', but statement is given
When an incorrectly closed string is detected, the error message suggests that a quote may have been intended to be part of the string. (Contributed by Pablo Galindo in gh-88535.)

>>> "The interesting object "The important object" is very important"
Traceback (most recent call last):
SyntaxError: invalid syntax. Is this intended to be part of the string?
When strings have incompatible prefixes, the error now shows which prefixes are incompatible. (Contributed by Nikita Sobolev in gh-133197.)
>>> ub'abc'
  File "<python-input-0>", line 1
    ub'abc'
    ^^
SyntaxError: 'u' and 'b' prefixes are incompatible
Improved error messages when using as with incompatible targets in:
Imports: import ... as ...
From imports: from ... import ... as ...
Except handlers: except ... as ...
Pattern-match cases: case ... as ...
(Contributed by Nikita Sobolev in gh-123539, gh-123562, and gh-123440.)
Improved error message when trying to add an instance of an unhashable type to a dict or set. (Contributed by CF Bolz-Tereick and Victor Stinner in gh-132828.)

>>> s = set()
>>> s.add({'pages': 12, 'grade': 'A'})
Traceback (most recent call last):
  File "<python-input-1>", line 1, in <module>
    s.add({'pages': 12, 'grade': 'A'})
    ~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: cannot use 'dict' as a set element (unhashable type: 'dict')
>>> d = {}
>>> l = [1, 2, 3]
>>> d[l] = 12
Traceback (most recent call last):
  File "<python-input-4>", line 1, in <module>
    d[l] = 12
    ~^^^
TypeError: cannot use 'list' as a dict key (unhashable type: 'list')
Improved error message when an object supporting the synchronous context manager protocol is entered using async with instead of with, and vice versa for the asynchronous context manager protocol. (Contributed by Bénédikt Tran in gh-128398.)
PEP 784: Zstandard support in the standard library¶
The new compression package contains modules compression.lzma,
compression.bz2, compression.gzip and compression.zlib
which re-export the lzma, bz2, gzip and zlib
modules respectively. The new import names under compression are the
preferred names for importing these compression modules from Python 3.14. However,
the existing module names have not been deprecated. Any deprecation or removal
of the existing compression modules will occur no sooner than five years after
the release of 3.14.
The new compression.zstd module provides compression and decompression
APIs for the Zstandard format via bindings to Meta’s zstd library. Zstandard is a widely adopted, highly
efficient, and fast compression format. In addition to the APIs introduced in
compression.zstd, support for reading and writing Zstandard compressed
archives has been added to the tarfile, zipfile, and
shutil modules.
Here’s an example of using the new module to compress some data:
from compression import zstd
import math
data = str(math.pi).encode() * 20
compressed = zstd.compress(data)
ratio = len(compressed) / len(data)
print(f"Achieved compression ratio of {ratio}")
As can be seen, the API is similar to the APIs of the lzma and
bz2 modules.
(Contributed by Emma Harper Smith, Adam Turner, Gregory P. Smith, Tomas Roun, Victor Stinner, and Rogdham in gh-132983.)
See also
Asyncio introspection capabilities¶
Added a new command-line interface to inspect running Python processes
using asynchronous tasks, available via python -m asyncio ps PID
or python -m asyncio pstree PID.
The ps subcommand inspects the given process ID (PID) and displays
information about currently running asyncio tasks.
It outputs a task table: a flat listing of all tasks, their names,
their coroutine stacks, and which tasks are awaiting them.
The pstree subcommand fetches the same information, but instead renders a
visual async call tree, showing coroutine relationships in a hierarchical format.
This command is particularly useful for debugging long-running or stuck
asynchronous programs.
It can help developers quickly identify where a program is blocked,
what tasks are pending, and how coroutines are chained together.
For example, given this code:
import asyncio

async def play_track(track):
    await asyncio.sleep(5)
    print(f'🎵 Finished: {track}')

async def play_album(name, tracks):
    async with asyncio.TaskGroup() as tg:
        for track in tracks:
            tg.create_task(play_track(track), name=track)

async def main():
    async with asyncio.TaskGroup() as tg:
        tg.create_task(
            play_album('Sundowning', ['TNDNBTG', 'Levitate']),
            name='Sundowning')
        tg.create_task(
            play_album('TMBTE', ['DYWTYLM', 'Aqua Regia']),
            name='TMBTE')

if __name__ == '__main__':
    asyncio.run(main())
Executing the new tool on the running process will yield a table like this:
python -m asyncio ps 12345
tid task id task name coroutine stack awaiter chain awaiter name awaiter id
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1935500 0x7fc930c18050 Task-1 TaskGroup._aexit -> TaskGroup.__aexit__ -> main 0x0
1935500 0x7fc930c18230 Sundowning TaskGroup._aexit -> TaskGroup.__aexit__ -> album TaskGroup._aexit -> TaskGroup.__aexit__ -> main Task-1 0x7fc930c18050
1935500 0x7fc93173fa50 TMBTE TaskGroup._aexit -> TaskGroup.__aexit__ -> album TaskGroup._aexit -> TaskGroup.__aexit__ -> main Task-1 0x7fc930c18050
1935500 0x7fc93173fdf0 TNDNBTG sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album Sundowning 0x7fc930c18230
1935500 0x7fc930d32510 Levitate sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album Sundowning 0x7fc930c18230
1935500 0x7fc930d32890 DYWTYLM sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album TMBTE 0x7fc93173fa50
1935500 0x7fc93161ec30 Aqua Regia sleep -> play TaskGroup._aexit -> TaskGroup.__aexit__ -> album TMBTE 0x7fc93173fa50
or a tree like this:
python -m asyncio pstree 12345
└── (T) Task-1
└── main example.py:13
└── TaskGroup.__aexit__ Lib/asyncio/taskgroups.py:72
└── TaskGroup._aexit Lib/asyncio/taskgroups.py:121
├── (T) Sundowning
│ └── album example.py:8
│ └── TaskGroup.__aexit__ Lib/asyncio/taskgroups.py:72
│ └── TaskGroup._aexit Lib/asyncio/taskgroups.py:121
│ ├── (T) TNDNBTG
│ │ └── play example.py:4
│ │ └── sleep Lib/asyncio/tasks.py:702
│ └── (T) Levitate
│ └── play example.py:4
│ └── sleep Lib/asyncio/tasks.py:702
└── (T) TMBTE
└── album example.py:8
└── TaskGroup.__aexit__ Lib/asyncio/taskgroups.py:72
└── TaskGroup._aexit Lib/asyncio/taskgroups.py:121
├── (T) DYWTYLM
│ └── play example.py:4
│ └── sleep Lib/asyncio/tasks.py:702
└── (T) Aqua Regia
└── play example.py:4
└── sleep Lib/asyncio/tasks.py:702
If a cycle is detected in the async await graph (which could indicate a programming issue), the tool raises an error and lists the cycle paths that prevent tree construction:
python -m asyncio pstree 12345
ERROR: await-graph contains cycles - cannot print a tree!
cycle: Task-2 → Task-3 → Task-2
(Contributed by Pablo Galindo, Łukasz Langa, Yury Selivanov, and Marta Gomez Macias in gh-91048.)
Concurrent safe warnings control¶
The warnings.catch_warnings context manager will now optionally
use a context variable for warning filters. This is enabled by setting
the context_aware_warnings flag, either with the -X
command-line option or an environment variable. This gives predictable
warnings control when using catch_warnings combined with
multiple threads or asynchronous tasks. The flag defaults to true for the
free-threaded build and false for the GIL-enabled build.
(Contributed by Neil Schemenauer and Kumar Aditya in gh-130010.)
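A sketch of the effect, run as a subprocess so the -X flags can be set explicitly (on the free-threaded build both default to on): a filter installed by catch_warnings in the main thread applies inside a thread started within that context.

```python
import subprocess
import sys

code = """
import threading
import warnings

def worker():
    try:
        warnings.warn('spam', UserWarning)
        print('filter NOT inherited')
    except UserWarning:
        # simplefilter('error') from the enclosing context applied here.
        print('filter inherited')

with warnings.catch_warnings():
    warnings.simplefilter('error')
    t = threading.Thread(target=worker)
    t.start()
    t.join()
"""

result = subprocess.run(
    [sys.executable,
     '-X', 'context_aware_warnings=1',
     '-X', 'thread_inherit_context=1',
     '-c', code],
    capture_output=True, text=True,
)
print(result.stdout)
```

Without the two flags (the GIL-enabled build's default), the worker would print 'filter NOT inherited', because the global warning filters are restored before the thread happens to run.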
Other language changes¶
All Windows code pages are now supported as ‘cpXXX’ codecs on Windows. (Contributed by Serhiy Storchaka in gh-123803.)
Implement mixed-mode arithmetic rules combining real and complex numbers as specified by the C standard since C99. (Contributed by Sergey B Kirpichev in gh-69639.)
More syntax errors are now detected regardless of optimisation and the -O command-line option. This includes writes to __debug__, incorrect use of await, and asynchronous comprehensions outside asynchronous functions. For example, python -O -c 'assert (__debug__ := 1)' or python -O -c 'assert await 1' now produce SyntaxErrors. (Contributed by Irit Katriel and Jelle Zijlstra in gh-122245 & gh-121637.)
When subclassing a pure C type, the C slots for the new type are no longer replaced with a wrapped version on class creation if they are not explicitly overridden in the subclass. (Contributed by Tomasz Pytel in gh-132284.)
Built-ins¶
The bytes.fromhex() and bytearray.fromhex() methods now accept ASCII bytes and bytes-like objects. (Contributed by Daniel Pope in gh-129349.)
Add class methods float.from_number() and complex.from_number() to convert a number to float or complex type correspondingly. They raise a TypeError if the argument is not a real number. (Contributed by Serhiy Storchaka in gh-84978.)
Support underscore and comma as thousands separators in the fractional part for floating-point presentation types of the new-style string formatting (with format() or f-strings). (Contributed by Sergey B Kirpichev in gh-87790.)
The int() function no longer delegates to __trunc__(). Classes that want to support conversion to int() must implement either __int__() or __index__(). (Contributed by Mark Dickinson in gh-119743.)
The map() function now has an optional keyword-only strict flag like zip() to check that all the iterables are of equal length. (Contributed by Wannes Boeykens in gh-119793.)
The memoryview type now supports subscription, making it a generic type. (Contributed by Brian Schubert in gh-126012.)
Using NotImplemented in a boolean context will now raise a TypeError. This has raised a DeprecationWarning since Python 3.9. (Contributed by Jelle Zijlstra in gh-118767.)
Three-argument pow() now tries calling __rpow__() if necessary. Previously it was only called in two-argument pow() and the binary power operator. (Contributed by Serhiy Storchaka in gh-130104.)
super objects are now copyable and pickleable. (Contributed by Serhiy Storchaka in gh-125767.)
Command line and environment¶
The import time flag can now track modules that are already loaded (‘cached’), via the new -X importtime=2. When such a module is imported, the self and cumulative times are replaced by the string cached. Values above 2 for -X importtime are now reserved for future use. (Contributed by Noah Kim and Adam Turner in gh-118655.)
The command-line option -c now automatically dedents its code argument before execution. The auto-dedentation behavior mirrors textwrap.dedent(). (Contributed by Jon Crall and Steven Sun in gh-103998.)
-J is no longer a reserved flag for Jython, and now has no special meaning. (Contributed by Adam Turner in gh-133336.)