Python 3.8 (Beta 3) to Release on July 29 | Here Are the New Features and What You Can Expect

Python is the number one choice for everything from machine learning to automation.

A couple of months ago we were introduced to the first alpha of Python 3.8, then in May to its first beta, and now on July 29 the third beta of Python 3.8 is being released.



Since its first public release in 1991, Python has gone through many improvements and changes. It is popular above all for its simple syntax and easy learning curve.

Version 3.8 maintains that tradition while adding many new and improved features, such as shared memory between processes, more efficient serialization and deserialization, revamped dictionaries, and much more.




According to its developers, version 3.8 comes with many new features that promise faster, more concise, and more consistent performance.

So, let’s talk about what’s new and most significant in Python 3.8.


Improved C API and CPython

In the new 3.8 version, major work has been done on refactoring the C API used by CPython.

CPython is the default implementation of Python; it compiles Python source code into intermediate bytecode, which is then executed by the CPython virtual machine.




PEP 587 adds a new C API to configure Python initialization, providing finer control over the whole configuration and ensuring that all of Python’s configuration controls have a single, consistent home.

This ultimately makes it easier to embed a Python runtime into an application and to pass startup arguments to Python programmatically.

Another new C API for CPython, called the “vectorcall” protocol, has been added to the Python/C API. It allows far faster calls to internal Python methods without the overhead of creating temporary objects to handle the call.

It is meant to formalize existing optimizations which were already done for various classes. Any extension type implementing a callable can use this protocol.

The API is still unstable but has been made provisionally available. The plan is to finalize it as of Python 3.9.

Python runtime audit hooks (PEP 578) provide two APIs in the Python runtime for hooking events and making them observable to outside tools such as testing frameworks or logging and auditing systems.
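
From Python code, the audit-hook half of this is exposed through sys.addaudithook() and sys.audit(). Here is a minimal sketch; the event name "myapp.launch" and the file name are made up for illustration:

import sys

# A minimal audit hook: report every audited event and its arguments.
def audit_hook(event, args):
    print(f"audit: {event} args={args}")

sys.addaudithook(audit_hook)

# Built-in operations raise audit events automatically, e.g. opening a file.
open("example.txt", "w").close()

# Applications and libraries can raise their own events as well.
sys.audit("myapp.launch", "demo")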



Improved Performance

Python has faced a lot of criticism over its speed. As an interpreted language, it is slower than compiled languages like Java, but this time the developers have come up with several new approaches that will speed Python up.

In Python 3.8, many built-in methods and functions have been sped up by around 20% to 50%, since many of them were unnecessarily converting the arguments passed to them.

A new opcode cache can speed up certain instructions in the interpreter. However, the only currently implemented speed-up is for the LOAD_GLOBAL opcode, now 40% faster. Similar optimizations are planned for later versions of Python.

File copying operations, such as shutil.copyfile() and shutil.copytree(), now use platform-specific calls and other optimizations to speed up operations.

Newly created lists are now, on average, 12% smaller than before, thanks to an optimization in the list constructor that makes use of the length of the input iterable when it is known beforehand.

Writes to class variables are much faster in Python 3.8, and the performance of operator.itemgetter() has been improved by 33%.

The itemgetter() speed-up comes from optimized argument handling and a fast path for the common case of a single non-negative integer index into a tuple (the typical use case in the standard library).




Parallel filesystem cache

Python 3.8 adds a setting for relocating the implicit bytecode cache.

The new PYTHONPYCACHEPREFIX setting (also available as -X pycache_prefix) configures the implicit bytecode cache to use a separate, parallel filesystem tree rather than the default __pycache__ subdirectories within each source directory.

This keeps compiled bytecode out of the source tree. The location of the cache is reported in sys.pycache_prefix (None indicates the default location in __pycache__ subdirectories).
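
A quick way to check the setting from Python itself (the cache path below is just an example):

import sys

# Run as:  PYTHONPYCACHEPREFIX=/tmp/pycache python script.py
# or:      python -X pycache_prefix=/tmp/pycache script.py
# sys.pycache_prefix then reports the cache root; None means the usual
# __pycache__ subdirectories next to the source files.
print(sys.pycache_prefix)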

Multiprocessing shared memory

With Python 3.8, the multiprocessing module now offers a SharedMemory class that allows regions of memory to be created and shared between different Python processes.

In previous versions of Python, data could be shared between processes only by writing it out to a file, sending it over a network socket, or serializing it using Python’s pickle module.

Shared memory provides a much faster path for passing data between processes, allowing Python to more efficiently use multiple processors and processor cores.

Shared memory segments can be allocated as raw regions of bytes, or they can use fixed-length, list-like objects that store a small subset of Python types: numbers, strings, bytes objects, booleans, and None.
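
A minimal sketch of the raw-bytes variant (attaching from the same process purely to keep the example short; a second process would attach by the same name):

from multiprocessing import shared_memory

# Create a shared memory block and write some bytes into it.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# A second Python process can attach to the block by name;
# here we attach from the same process just for illustration.
other = shared_memory.SharedMemory(name=shm.name)
print(bytes(other.buf[:5]))  # b'hello'

# Clean up: every process closes its handle, one process unlinks the block.
other.close()
shm.close()
shm.unlink()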

The walrus operator

The most eye-catching change in Python 3.8 is the walrus operator.

The walrus operator (:=) allows a value to be assigned to a variable, even a variable that doesn’t exist yet, in the context of an expression rather than as a stand-alone statement.

The purpose of the walrus operator is to simplify things like multiple-pattern matches and the so-called loop-and-a-half.

# "file" is an already-open file object
while (line := file.readline()) != "end":
    print(line)

In this example, the variable line is created if it doesn’t already exist, then assigned the value from file.readline(). line is then checked to see whether it equals "end". If not, it is printed, the next line is read, stored in line, tested, and so on.

Assignment expressions follow the tradition of comprehensible terseness that we see elsewhere in Python, which includes list comprehensions.

Here, the idea is to cut down on some of the tedious boilerplate that tends to appear in certain Python programming patterns. The snippet above, for instance, would normally take several lines of code to express.
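
For comparison, this is roughly how the same loop is typically written without the walrus operator (again assuming file is an already-open file object):

line = file.readline()
while line != "end":
    print(line)
    line = file.readline()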

Dustin Ingram, a PyPI maintainer, gave a few examples of where this syntax helps, such as balancing lines of code against complexity, avoiding inefficient comprehensions, and avoiding unnecessary variables in scope.
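
For instance, a comprehension along these lines (the function and variable names are made up for illustration) calls a potentially expensive function only once per item instead of twice:

def score(x):
    return x * x - 10   # stand-in for an expensive computation

data = range(10)
# Without the walrus operator, score(x) would appear both in the filter
# and in the result, and therefore run twice per item.
results = [y for x in data if (y := score(x)) > 0]
print(results)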

The walrus operator was proposed in PEP 572 (Assignment Expressions) by Chris Angelico, Tim Peters, and Guido van Rossum last year.

The feature was implemented by Emily Morehouse, Python core developer and Founder, Director of Engineering at Cuttlesoft, and was merged earlier this year.

Positional-only parameters

In Python 3.8 there is a new syntax (/) to specify positional-only parameters in Python function definitions.

The / marker for positional-only parameters is similar to how * indicates that the arguments to its right are keyword-only. Many built-in CPython functions already use this syntax in their signatures.

The positional-only parameters help to remove any ambiguity about which arguments in a function definition are positional and which are keyword arguments.

It makes it possible to define scenarios where, for instance, a function accepts any keyword argument but can also accept one or more positional-only parameters.

This is often the case with Python built-ins, so giving Python developers a way to do this themselves reinforces consistency in the language.

This syntax gives library authors more control over better expressing the intended usage of an API and allows the API to “evolve in a safe, backward-compatible way.”

It gives library authors the flexibility to change the name of positional-only parameters without breaking callers. Additionally, this also ensures consistency of the Python language with existing documentation and the behavior of various  “builtin” and standard library functions.

An example from Python’s documentation:

def pow(x, y, z=None, /):
    r = x**y
    if z is not None:
        r %= z
    return r

The / separates positional-only parameters from the rest; in this example, all of the parameters are positional-only.

Without the /, z could also be passed as a keyword argument. Given the above function definition, pow(2, 10) and pow(2, 10, 5) are valid calls, but pow(2, 10, z=5) is not.
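
As a sketch of how / and * can be combined in one signature (the function and parameter names here are invented for illustration):

def make_greeting(name, /, greeting="Hello", *, shout=False):
    # name is positional-only, greeting may be passed either way,
    # and shout is keyword-only.
    text = f"{greeting}, {name}!"
    return text.upper() if shout else text

print(make_greeting("Ada"))                    # OK
print(make_greeting("Ada", greeting="Hi"))     # OK
print(make_greeting("Ada", "Hi", shout=True))  # OK
# make_greeting(name="Ada")  -> TypeError: name is positional-only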

Pickle Protocol 5

Python’s pickle module provides a way to serialize and deserialize Python data structures, for instance, to allow a dictionary to be saved as-is to a file and reloaded later.

Different versions of Python support different levels of the pickle protocol, with more recent versions supporting a broader range of capabilities and more efficient serialization.

When pickle is used to transfer large data between Python processes in order to take advantage of multi-core or multi-machine processing, it is important to optimize the transfer by reducing memory copies, and possibly by applying custom techniques such as data-dependent compression.

Version 5 of the pickle protocol, introduced with Python 3.8, provides a new way to pickle objects that implement Python’s buffer protocol, such as bytes, memoryviews, or NumPy arrays.

The new pickle cuts down on the number of memory copies that have to be made for such objects.

Pickle protocol 5 introduces support for out-of-band buffers, where PEP 3118-compatible data can be transmitted separately from the main pickle stream, at the discretion of the communication layer.
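
A rough sketch of the consumer-side API for out-of-band buffers (the payload here is a plain bytearray wrapped in pickle.PickleBuffer; NumPy arrays and other buffer-protocol objects can opt in similarly):

import pickle

# Wrapping a large bytes-like object in PickleBuffer marks it as eligible
# for out-of-band transfer under protocol 5.
payload = pickle.PickleBuffer(bytearray(b"x" * 1_000_000))

# Buffers handed to buffer_callback stay out of the pickle stream itself.
buffers = []
stream = pickle.dumps(payload, protocol=5, buffer_callback=buffers.append)
print(len(stream))  # small: the megabyte of data is not inside the stream

# The consumer supplies the buffers again, e.g. after sending them
# separately over a socket or through shared memory.
restored = pickle.loads(stream, buffers=buffers)
print(memoryview(restored).nbytes)  # 1000000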

External libraries like NumPy and Apache Arrow already support the new pickle protocol in their Python bindings. The new protocol is also available as a backport for Python 3.6 and Python 3.7 from PyPI.

F-string debugging support

We first got formatted string literals (f-strings) in Python 3.6. The f-string format provides a convenient (and more performant) way to print text together with computed values or variables in the same expression.

It lets you evaluate an expression as part of the string, inserting the results of function calls and so on. In Python 3.8, an additional = specifier has been added to f-strings for ease of debugging.

You can use this feature like this:

print(f'{foo=} {bar=}')

This gives developers a better way of doing “print-style debugging”, especially for those who have a background in languages that already have such a feature, such as Perl, Ruby, and JavaScript.

x = 3 
print(f'{x+1}')

This would yield 4.

Adding an = to the end of an f-string expression prints the text of the f-string expression itself, followed by the value:

x = 3
print(f'{x+1=}')

This would yield x+1=4.

Debug build

Python now uses the same ABI whether it is built in release or debug mode.

Release builds and debug builds are now ABI compatible: defining the Py_DEBUG macro no longer implies the Py_TRACE_REFS macro, which introduced the only ABI incompatibility.

The Py_TRACE_REFS macro, which adds the sys.getobjects() function and the PYTHONDUMPREFS environment variable, can now be enabled using the new ./configure --with-trace-refs build option.

On Unix, C extensions are no longer linked to libpython except on Android and Cygwin. It is now possible for a statically linked Python to load a C extension built using a shared library Python.

On Unix, when Python is built in debug mode, import now also looks for C extensions compiled in release mode and for C extensions compiled with the stable ABI.

To embed Python into an application, the new --embed option must be passed: python3-config --libs --embed now returns -lpython3.8 (to link the application to libpython).

To support both 3.8 and older versions, try python3-config --libs --embed first and fall back to python3-config --libs (without --embed) if the previous command fails.

A new pkg-config module, python-3.8-embed, has also been added for embedding Python into an application: pkg-config python-3.8-embed --libs includes -lpython3.8.

To support both 3.8 and older versions, try pkg-config python-X.Y-embed --libs first and fall back to pkg-config python-X.Y --libs (without --embed) if the previous command fails (replace X.Y with the Python version).

On the other hand, pkg-config python3.8 --libs no longer contains -lpython3.8.

C extensions must not be linked to libpython (except on Android and Cygwin, whose cases are handled by the script); this change is backward incompatible on purpose. (Contributed by Victor Stinner in bpo-36721.)
