Finish porting the microwave simulation, point cloud meshing, model export, and simulated imaging features

master
剑古敛锋 2024-04-11 22:01:25 +08:00
parent 51341c8e72
commit 796fac0821
2331 changed files with 90189 additions and 652837 deletions

29 binary files changed (contents not shown).

View File: LICENSE.txt (license of the bundled Python distribution)

@@ -59,6 +59,17 @@ direction to make these releases possible.
B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
===============================================================
Python software and documentation are licensed under the
Python Software Foundation License Version 2.
Starting with Python 3.8.6, examples, recipes, and other code in
the documentation are dual licensed under the PSF License Version 2
and the Zero-Clause BSD license.
Some software incorporated into Python is under different licenses.
The licenses are listed with code falling under that license.
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------
@@ -73,8 +84,8 @@ analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018 Python Software Foundation; All
Rights Reserved" are retained in Python alone or in any derivative version
2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021, 2022, 2023 Python Software Foundation;
All Rights Reserved" are retained in Python alone or in any derivative version
prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
@@ -253,351 +264,16 @@ WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Additional Conditions for this Windows binary build
---------------------------------------------------
This program is linked with and uses Microsoft Distributable Code,
copyrighted by Microsoft Corporation. The Microsoft Distributable Code
is embedded in each .exe, .dll and .pyd file as a result of running
the code through a linker.
If you further distribute programs that include the Microsoft
Distributable Code, you must comply with the restrictions on
distribution specified by Microsoft. In particular, you must require
distributors and external end users to agree to terms that protect the
Microsoft Distributable Code at least as much as Microsoft's own
requirements for the Distributable Code. See Microsoft's documentation
(included in its developer tools and on its website at microsoft.com)
for specific details.
Redistribution of the Windows binary build of the Python interpreter
complies with this agreement, provided that you do not:
- alter any copyright, trademark or patent notice in Microsoft's
Distributable Code;
- use Microsoft's trademarks in your programs' names or in a way that
suggests your programs come from or are endorsed by Microsoft;
- distribute Microsoft's Distributable Code to run on a platform other
than Microsoft operating systems, run-time technologies or application
platforms; or
- include Microsoft Distributable Code in malicious, deceptive or
unlawful programs.
These restrictions apply only to the Microsoft Distributable Code as
defined above, not to Python itself or any programs running on the
Python interpreter. The redistribution of the Python interpreter and
libraries is governed by the Python Software License included with this
file, or by other licenses as marked.
--------------------------------------------------------------------------
This program, "bzip2", the associated library "libbzip2", and all
documentation, are copyright (C) 1996-2010 Julian R Seward. All
rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. The origin of this software must not be misrepresented; you must
not claim that you wrote the original software. If you use this
software in a product, an acknowledgment in the product
documentation would be appreciated but is not required.
3. Altered source versions must be plainly marked as such, and must
not be misrepresented as being the original software.
4. The name of the author may not be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Julian Seward, jseward@bzip.org
bzip2/libbzip2 version 1.0.6 of 6 September 2010
--------------------------------------------------------------------------
LICENSE ISSUES
==============
The OpenSSL toolkit stays under a double license, i.e. both the conditions of
the OpenSSL License and the original SSLeay license apply to the toolkit.
See below for the actual license texts.
OpenSSL License
---------------
/* ====================================================================
* Copyright (c) 1998-2018 The OpenSSL Project. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
*
* 3. All advertising materials mentioning features or use of this
* software must display the following acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit. (http://www.openssl.org/)"
*
* 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
* endorse or promote products derived from this software without
* prior written permission. For written permission, please contact
* openssl-core@openssl.org.
*
* 5. Products derived from this software may not be called "OpenSSL"
* nor may "OpenSSL" appear in their names without prior written
* permission of the OpenSSL Project.
*
* 6. Redistributions of any form whatsoever must retain the following
* acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit (http://www.openssl.org/)"
*
* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
* EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
* ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
* ====================================================================
*
* This product includes cryptographic software written by Eric Young
* (eay@cryptsoft.com). This product includes software written by Tim
* Hudson (tjh@cryptsoft.com).
*
*/
Original SSLeay License
-----------------------
/* Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com)
* All rights reserved.
*
* This package is an SSL implementation written
* by Eric Young (eay@cryptsoft.com).
* The implementation was written so as to conform with Netscapes SSL.
*
* This library is free for commercial and non-commercial use as long as
* the following conditions are aheared to. The following conditions
* apply to all code found in this distribution, be it the RC4, RSA,
* lhash, DES, etc., code; not just the SSL code. The SSL documentation
* included with this distribution is covered by the same copyright terms
* except that the holder is Tim Hudson (tjh@cryptsoft.com).
*
* Copyright remains Eric Young's, and as such any Copyright notices in
* the code are not to be removed.
* If this package is used in a product, Eric Young should be given attribution
* as the author of the parts of the library used.
* This can be in the form of a textual message at program startup or
* in documentation (online or textual) provided with the package.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* "This product includes cryptographic software written by
* Eric Young (eay@cryptsoft.com)"
* The word 'cryptographic' can be left out if the rouines from the library
* being used are not cryptographic related :-).
* 4. If you include any Windows specific code (or a derivative thereof) from
* the apps directory (application code) you must include an acknowledgement:
* "This product includes software written by Tim Hudson (tjh@cryptsoft.com)"
*
* THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* The licence and distribution terms for any publically available version or
* derivative of this code cannot be changed. i.e. this code cannot simply be
* copied and put under another distribution licence
* [including the GNU Public Licence.]
*/
This software is copyrighted by the Regents of the University of
California, Sun Microsystems, Inc., Scriptics Corporation, ActiveState
Corporation and other parties. The following terms apply to all files
associated with the software unless explicitly disclaimed in
individual files.
The authors hereby grant permission to use, copy, modify, distribute,
and license this software and its documentation for any purpose, provided
that existing copyright notices are retained in all copies and that this
notice is included verbatim in any distributions. No written agreement,
license, or royalty fee is required for any of the authorized uses.
Modifications to this software may be copyrighted by their authors
and need not follow the licensing terms described here, provided that
the new terms are clearly indicated on the first page of each file where
they apply.
IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES
ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY
DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. THIS SOFTWARE
IS PROVIDED ON AN "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE
NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR
MODIFICATIONS.
GOVERNMENT USE: If you are acquiring this software on behalf of the
U.S. government, the Government shall have only "Restricted Rights"
in the software and related documentation as defined in the Federal
Acquisition Regulations (FARs) in Clause 52.227.19 (c) (2). If you
are acquiring the software on behalf of the Department of Defense, the
software shall be classified as "Commercial Computer Software" and the
Government shall have only "Restricted Rights" as defined in Clause
252.227-7014 (b) (3) of DFARs. Notwithstanding the foregoing, the
authors grant the U.S. Government and others acting in its behalf
permission to use and distribute the software in accordance with the
terms specified in this license.
This software is copyrighted by the Regents of the University of
California, Sun Microsystems, Inc., Scriptics Corporation, ActiveState
Corporation, Apple Inc. and other parties. The following terms apply to
all files associated with the software unless explicitly disclaimed in
individual files.
The authors hereby grant permission to use, copy, modify, distribute,
and license this software and its documentation for any purpose, provided
that existing copyright notices are retained in all copies and that this
notice is included verbatim in any distributions. No written agreement,
license, or royalty fee is required for any of the authorized uses.
Modifications to this software may be copyrighted by their authors
and need not follow the licensing terms described here, provided that
the new terms are clearly indicated on the first page of each file where
they apply.
IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES
ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY
DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. THIS SOFTWARE
IS PROVIDED ON AN "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE
NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR
MODIFICATIONS.
GOVERNMENT USE: If you are acquiring this software on behalf of the
U.S. government, the Government shall have only "Restricted Rights"
in the software and related documentation as defined in the Federal
Acquisition Regulations (FARs) in Clause 52.227.19 (c) (2). If you
are acquiring the software on behalf of the Department of Defense, the
software shall be classified as "Commercial Computer Software" and the
Government shall have only "Restricted Rights" as defined in Clause
252.227-7013 (b) (3) of DFARs. Notwithstanding the foregoing, the
authors grant the U.S. Government and others acting in its behalf
permission to use and distribute the software in accordance with the
terms specified in this license.
Copyright (c) 1993-1999 Ioi Kim Lam.
Copyright (c) 2000-2001 Tix Project Group.
Copyright (c) 2004 ActiveState
This software is copyrighted by the above entities
and other parties. The following terms apply to all files associated
with the software unless explicitly disclaimed in individual files.
The authors hereby grant permission to use, copy, modify, distribute,
and license this software and its documentation for any purpose, provided
that existing copyright notices are retained in all copies and that this
notice is included verbatim in any distributions. No written agreement,
license, or royalty fee is required for any of the authorized uses.
Modifications to this software may be copyrighted by their authors
and need not follow the licensing terms described here, provided that
the new terms are clearly indicated on the first page of each file where
they apply.
IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY
FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES
ARISING OUT OF THE USE OF THIS SOFTWARE, ITS DOCUMENTATION, OR ANY
DERIVATIVES THEREOF, EVEN IF THE AUTHORS HAVE BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. THIS SOFTWARE
IS PROVIDED ON AN "AS IS" BASIS, AND THE AUTHORS AND DISTRIBUTORS HAVE
NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR
MODIFICATIONS.
GOVERNMENT USE: If you are acquiring this software on behalf of the
U.S. government, the Government shall have only "Restricted Rights"
in the software and related documentation as defined in the Federal
Acquisition Regulations (FARs) in Clause 52.227.19 (c) (2). If you
are acquiring the software on behalf of the Department of Defense, the
software shall be classified as "Commercial Computer Software" and the
Government shall have only "Restricted Rights" as defined in Clause
252.227-7013 (c) (1) of DFARs. Notwithstanding the foregoing, the
authors grant the U.S. Government and others acting in its behalf
permission to use and distribute the software in accordance with the
terms specified in this license.
ZERO-CLAUSE BSD LICENSE FOR CODE IN THE PYTHON DOCUMENTATION
----------------------------------------------------------------------
Parts of this software are based on the Tcl/Tk software copyrighted by
the Regents of the University of California, Sun Microsystems, Inc.,
and other parties. The original license terms of the Tcl/Tk software
distribution is included in the file docs/license.tcltk.
Parts of this software are based on the HTML Library software
copyrighted by Sun Microsystems, Inc. The original license terms of
the HTML Library software distribution is included in the file
docs/license.html_lib.
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.

View File: __future__.py

@@ -66,18 +66,20 @@ __all__ = ["all_feature_names"] + all_feature_names
# code.h and used by compile.h, so that an editor search will find them here.
# However, they're not exported in __all__, because they don't really belong to
# this module.
CO_NESTED = 0x0010 # nested_scopes
CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000)
CO_FUTURE_DIVISION = 0x2000 # division
CO_FUTURE_ABSOLUTE_IMPORT = 0x4000 # perform absolute imports by default
CO_FUTURE_WITH_STATEMENT = 0x8000 # with statement
CO_FUTURE_PRINT_FUNCTION = 0x10000 # print function
CO_FUTURE_UNICODE_LITERALS = 0x20000 # unicode string literals
CO_FUTURE_BARRY_AS_BDFL = 0x40000
CO_FUTURE_GENERATOR_STOP = 0x80000 # StopIteration becomes RuntimeError in generators
CO_FUTURE_ANNOTATIONS = 0x100000 # annotations become strings at runtime
CO_NESTED = 0x0010 # nested_scopes
CO_GENERATOR_ALLOWED = 0 # generators (obsolete, was 0x1000)
CO_FUTURE_DIVISION = 0x20000 # division
CO_FUTURE_ABSOLUTE_IMPORT = 0x40000 # perform absolute imports by default
CO_FUTURE_WITH_STATEMENT = 0x80000 # with statement
CO_FUTURE_PRINT_FUNCTION = 0x100000 # print function
CO_FUTURE_UNICODE_LITERALS = 0x200000 # unicode string literals
CO_FUTURE_BARRY_AS_BDFL = 0x400000
CO_FUTURE_GENERATOR_STOP = 0x800000 # StopIteration becomes RuntimeError in generators
CO_FUTURE_ANNOTATIONS = 0x1000000 # annotations become strings at runtime
class _Feature:
def __init__(self, optionalRelease, mandatoryRelease, compiler_flag):
self.optional = optionalRelease
self.mandatory = mandatoryRelease
@@ -88,7 +90,6 @@ class _Feature:
This is a 5-tuple, of the same form as sys.version_info.
"""
return self.optional
def getMandatoryRelease(self):
@@ -97,7 +98,6 @@ class _Feature:
This is a 5-tuple, of the same form as sys.version_info, or, if
the feature was dropped, is None.
"""
return self.mandatory
def __repr__(self):
@@ -105,6 +105,7 @@ class _Feature:
self.mandatory,
self.compiler_flag))
nested_scopes = _Feature((2, 1, 0, "beta", 1),
(2, 2, 0, "alpha", 0),
CO_NESTED)
@@ -134,7 +135,7 @@ unicode_literals = _Feature((2, 6, 0, "alpha", 2),
CO_FUTURE_UNICODE_LITERALS)
barry_as_FLUFL = _Feature((3, 1, 0, "alpha", 2),
(3, 9, 0, "alpha", 0),
(4, 0, 0, "alpha", 0),
CO_FUTURE_BARRY_AS_BDFL)
generator_stop = _Feature((3, 5, 0, "beta", 1),
@@ -142,5 +143,5 @@ generator_stop = _Feature((3, 5, 0, "beta", 1),
CO_FUTURE_GENERATOR_STOP)
annotations = _Feature((3, 7, 0, "beta", 1),
(4, 0, 0, "alpha", 0),
(3, 10, 0, "alpha", 0),
CO_FUTURE_ANNOTATIONS)
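Note: the hunk above renumbers the CO_FUTURE_* constants to match this bundled interpreter's code-object flag layout and moves the mandatory releases for barry_as_FLUFL and annotations. For orientation, a minimal sketch (not part of the diff; only standard __future__ and compile() APIs) of how a _Feature's compiler_flag is consumed:

import __future__

# Passing a feature's compiler_flag to compile() enables the feature for that
# compilation unit, just as `from __future__ import annotations` would.
flag = __future__.annotations.compiler_flag
code = compile("def f(x: SomeUndefinedName) -> None: pass",
               "<demo>", "exec", flags=flag)
exec(code)  # ok: the annotation is kept as a string and never evaluated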

View File: _collections_abc.py

@@ -9,6 +9,12 @@ Unit tests are in test_collections.
from abc import ABCMeta, abstractmethod
import sys
GenericAlias = type(list[int])
EllipsisType = type(...)
def _f(): pass
FunctionType = type(_f)
del _f
__all__ = ["Awaitable", "Coroutine",
"AsyncIterable", "AsyncIterator", "AsyncGenerator",
"Hashable", "Iterable", "Iterator", "Generator", "Reversible",
@@ -110,6 +116,8 @@ class Awaitable(metaclass=ABCMeta):
return _check_methods(C, "__await__")
return NotImplemented
__class_getitem__ = classmethod(GenericAlias)
class Coroutine(Awaitable):
@@ -169,6 +177,8 @@ class AsyncIterable(metaclass=ABCMeta):
return _check_methods(C, "__aiter__")
return NotImplemented
__class_getitem__ = classmethod(GenericAlias)
class AsyncIterator(AsyncIterable):
@@ -255,6 +265,8 @@ class Iterable(metaclass=ABCMeta):
return _check_methods(C, "__iter__")
return NotImplemented
__class_getitem__ = classmethod(GenericAlias)
class Iterator(Iterable):
@@ -274,6 +286,7 @@ class Iterator(Iterable):
return _check_methods(C, '__iter__', '__next__')
return NotImplemented
Iterator.register(bytes_iterator)
Iterator.register(bytearray_iterator)
#Iterator.register(callable_iterator)
@@ -353,6 +366,7 @@ class Generator(Iterator):
'send', 'throw', 'close')
return NotImplemented
Generator.register(generator)
@@ -385,6 +399,9 @@ class Container(metaclass=ABCMeta):
return _check_methods(C, "__contains__")
return NotImplemented
__class_getitem__ = classmethod(GenericAlias)
class Collection(Sized, Iterable, Container):
__slots__ = ()
@@ -395,6 +412,87 @@ class Collection(Sized, Iterable, Container):
return _check_methods(C, "__len__", "__iter__", "__contains__")
return NotImplemented
class _CallableGenericAlias(GenericAlias):
""" Represent `Callable[argtypes, resulttype]`.
This sets ``__args__`` to a tuple containing the flattened``argtypes``
followed by ``resulttype``.
Example: ``Callable[[int, str], float]`` sets ``__args__`` to
``(int, str, float)``.
"""
__slots__ = ()
def __new__(cls, origin, args):
try:
return cls.__create_ga(origin, args)
except TypeError as exc:
import warnings
warnings.warn(f'{str(exc)} '
f'(This will raise a TypeError in Python 3.10.)',
DeprecationWarning)
return GenericAlias(origin, args)
@classmethod
def __create_ga(cls, origin, args):
if not isinstance(args, tuple) or len(args) != 2:
raise TypeError(
"Callable must be used as Callable[[arg, ...], result].")
t_args, t_result = args
if isinstance(t_args, (list, tuple)):
ga_args = tuple(t_args) + (t_result,)
# This relaxes what t_args can be on purpose to allow things like
# PEP 612 ParamSpec. Responsibility for whether a user is using
# Callable[...] properly is deferred to static type checkers.
else:
ga_args = args
return super().__new__(cls, origin, ga_args)
def __repr__(self):
if len(self.__args__) == 2 and self.__args__[0] is Ellipsis:
return super().__repr__()
return (f'collections.abc.Callable'
f'[[{", ".join([_type_repr(a) for a in self.__args__[:-1]])}], '
f'{_type_repr(self.__args__[-1])}]')
def __reduce__(self):
args = self.__args__
if not (len(args) == 2 and args[0] is Ellipsis):
args = list(args[:-1]), args[-1]
return _CallableGenericAlias, (Callable, args)
def __getitem__(self, item):
# Called during TypeVar substitution, returns the custom subclass
# rather than the default types.GenericAlias object.
ga = super().__getitem__(item)
args = ga.__args__
t_result = args[-1]
t_args = args[:-1]
args = (t_args, t_result)
return _CallableGenericAlias(Callable, args)
def _type_repr(obj):
"""Return the repr() of an object, special-casing types (internal helper).
Copied from :mod:`typing` since collections.abc
shouldn't depend on that module.
"""
if isinstance(obj, GenericAlias):
return repr(obj)
if isinstance(obj, type):
if obj.__module__ == 'builtins':
return obj.__qualname__
return f'{obj.__module__}.{obj.__qualname__}'
if obj is Ellipsis:
return '...'
if isinstance(obj, FunctionType):
return obj.__name__
return repr(obj)
class Callable(metaclass=ABCMeta):
__slots__ = ()
@@ -409,6 +507,8 @@ class Callable(metaclass=ABCMeta):
return _check_methods(C, "__call__")
return NotImplemented
__class_getitem__ = classmethod(_CallableGenericAlias)
### SETS ###
@@ -542,6 +642,7 @@ class Set(Collection):
hx = hash(x)
h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167
h &= MASK
h ^= (h >> 11) ^ (h >> 25)
h = h * 69069 + 907133923
h &= MASK
if h > MAX:
@@ -550,6 +651,7 @@ class Set(Collection):
h = 590923713
return h
Set.register(frozenset)
@@ -632,6 +734,7 @@ class MutableSet(Set):
self.discard(value)
return self
MutableSet.register(set)
@@ -688,6 +791,7 @@ class Mapping(Collection):
__reversed__ = None
Mapping.register(mappingproxy)
@@ -704,13 +808,15 @@ class MappingView(Sized):
def __repr__(self):
return '{0.__class__.__name__}({0._mapping!r})'.format(self)
__class_getitem__ = classmethod(GenericAlias)
class KeysView(MappingView, Set):
__slots__ = ()
@classmethod
def _from_iterable(self, it):
def _from_iterable(cls, it):
return set(it)
def __contains__(self, key):
@@ -719,6 +825,7 @@ class KeysView(MappingView, Set):
def __iter__(self):
yield from self._mapping
KeysView.register(dict_keys)
@@ -727,7 +834,7 @@ class ItemsView(MappingView, Set):
__slots__ = ()
@classmethod
def _from_iterable(self, it):
def _from_iterable(cls, it):
return set(it)
def __contains__(self, item):
@@ -743,6 +850,7 @@ class ItemsView(MappingView, Set):
for key in self._mapping:
yield (key, self._mapping[key])
ItemsView.register(dict_items)
@@ -761,6 +869,7 @@ class ValuesView(MappingView, Collection):
for key in self._mapping:
yield self._mapping[key]
ValuesView.register(dict_values)
@@ -821,30 +930,21 @@ class MutableMapping(Mapping):
except KeyError:
pass
def update(*args, **kwds):
def update(self, other=(), /, **kwds):
''' D.update([E, ]**F) -> None. Update D from mapping/iterable E and F.
If E present and has a .keys() method, does: for k in E: D[k] = E[k]
If E present and lacks .keys() method, does: for (k, v) in E: D[k] = v
In either case, this is followed by: for k, v in F.items(): D[k] = v
'''
if not args:
raise TypeError("descriptor 'update' of 'MutableMapping' object "
"needs an argument")
self, *args = args
if len(args) > 1:
raise TypeError('update expected at most 1 arguments, got %d' %
len(args))
if args:
other = args[0]
if isinstance(other, Mapping):
for key in other:
self[key] = other[key]
elif hasattr(other, "keys"):
for key in other.keys():
self[key] = other[key]
else:
for key, value in other:
self[key] = value
if isinstance(other, Mapping):
for key in other:
self[key] = other[key]
elif hasattr(other, "keys"):
for key in other.keys():
self[key] = other[key]
else:
for key, value in other:
self[key] = value
for key, value in kwds.items():
self[key] = value
@@ -856,6 +956,7 @@ class MutableMapping(Mapping):
self[key] = default
return default
MutableMapping.register(dict)
@@ -923,6 +1024,7 @@ class Sequence(Reversible, Collection):
'S.count(value) -> integer -- return number of occurrences of value'
return sum(1 for v in self if v is value or v == value)
Sequence.register(tuple)
Sequence.register(str)
Sequence.register(range)
@@ -986,6 +1088,8 @@ class MutableSequence(Sequence):
def extend(self, values):
'S.extend(iterable) -- extend sequence by appending elements from the iterable'
if values is self:
values = list(values)
for v in values:
self.append(v)
@@ -1007,5 +1111,6 @@ class MutableSequence(Sequence):
self.extend(values)
return self
MutableSequence.register(list)
MutableSequence.register(bytearray) # Multiply inheriting, see ByteString
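Note: most of the additions above wire __class_getitem__ = classmethod(GenericAlias) into the ABCs and introduce _CallableGenericAlias so that Callable subscription flattens its argument list. A short sketch of the resulting behaviour (assumes Python 3.9+, where these changes shipped):

from collections.abc import Callable, Iterable

# ABCs are now subscriptable at runtime, yielding types.GenericAlias objects.
print(Iterable[int])                # collections.abc.Iterable[int]

# Callable flattens [argtypes] + result into __args__, per the docstring above.
c = Callable[[int, str], float]
print(c.__args__)                   # (int, str, float)
print(c)                            # collections.abc.Callable[[int, str], float]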

View File: _dummy_thread.py (deleted)

@@ -1,163 +0,0 @@
"""Drop-in replacement for the thread module.
Meant to be used as a brain-dead substitute so that threaded code does
not need to be rewritten for when the thread module is not present.
Suggested usage is::
try:
import _thread
except ImportError:
import _dummy_thread as _thread
"""
# Exports only things specified by thread documentation;
# skipping obsolete synonyms allocate(), start_new(), exit_thread().
__all__ = ['error', 'start_new_thread', 'exit', 'get_ident', 'allocate_lock',
'interrupt_main', 'LockType']
# A dummy value
TIMEOUT_MAX = 2**31
# NOTE: this module can be imported early in the extension building process,
# and so top level imports of other modules should be avoided. Instead, all
# imports are done when needed on a function-by-function basis. Since threads
# are disabled, the import lock should not be an issue anyway (??).
error = RuntimeError
def start_new_thread(function, args, kwargs={}):
"""Dummy implementation of _thread.start_new_thread().
Compatibility is maintained by making sure that ``args`` is a
tuple and ``kwargs`` is a dictionary. If an exception is raised
and it is SystemExit (which can be done by _thread.exit()) it is
caught and nothing is done; all other exceptions are printed out
by using traceback.print_exc().
If the executed function calls interrupt_main the KeyboardInterrupt will be
raised when the function returns.
"""
if type(args) != type(tuple()):
raise TypeError("2nd arg must be a tuple")
if type(kwargs) != type(dict()):
raise TypeError("3rd arg must be a dict")
global _main
_main = False
try:
function(*args, **kwargs)
except SystemExit:
pass
except:
import traceback
traceback.print_exc()
_main = True
global _interrupt
if _interrupt:
_interrupt = False
raise KeyboardInterrupt
def exit():
"""Dummy implementation of _thread.exit()."""
raise SystemExit
def get_ident():
"""Dummy implementation of _thread.get_ident().
Since this module should only be used when _threadmodule is not
available, it is safe to assume that the current process is the
only thread. Thus a constant can be safely returned.
"""
return 1
def allocate_lock():
"""Dummy implementation of _thread.allocate_lock()."""
return LockType()
def stack_size(size=None):
"""Dummy implementation of _thread.stack_size()."""
if size is not None:
raise error("setting thread stack size not supported")
return 0
def _set_sentinel():
"""Dummy implementation of _thread._set_sentinel()."""
return LockType()
class LockType(object):
"""Class implementing dummy implementation of _thread.LockType.
Compatibility is maintained by maintaining self.locked_status
which is a boolean that stores the state of the lock. Pickling of
the lock, though, should not be done since if the _thread module is
then used with an unpickled ``lock()`` from here problems could
occur from this class not having atomic methods.
"""
def __init__(self):
self.locked_status = False
def acquire(self, waitflag=None, timeout=-1):
"""Dummy implementation of acquire().
For blocking calls, self.locked_status is automatically set to
True and returned appropriately based on value of
``waitflag``. If it is non-blocking, then the value is
actually checked and not set if it is already acquired. This
is all done so that threading.Condition's assert statements
aren't triggered and throw a little fit.
"""
if waitflag is None or waitflag:
self.locked_status = True
return True
else:
if not self.locked_status:
self.locked_status = True
return True
else:
if timeout > 0:
import time
time.sleep(timeout)
return False
__enter__ = acquire
def __exit__(self, typ, val, tb):
self.release()
def release(self):
"""Release the dummy lock."""
# XXX Perhaps shouldn't actually bother to test? Could lead
# to problems for complex, threaded code.
if not self.locked_status:
raise error
self.locked_status = False
return True
def locked(self):
return self.locked_status
def __repr__(self):
return "<%s %s.%s object at %s>" % (
"locked" if self.locked_status else "unlocked",
self.__class__.__module__,
self.__class__.__qualname__,
hex(id(self))
)
# Used to signal that interrupt_main was called in a "thread"
_interrupt = False
# True when not executing in a "thread"
_main = True
def interrupt_main():
"""Set _interrupt flag to True to have start_new_thread raise
KeyboardInterrupt upon exiting."""
if _main:
raise KeyboardInterrupt
else:
global _interrupt
_interrupt = True

View File: _markupbase.py

@@ -157,6 +157,7 @@ class ParserBase:
match= _msmarkedsectionclose.search(rawdata, i+3)
else:
self.error('unknown status keyword %r in marked section' % rawdata[i+3:j])
match = None
if not match:
return -1
if report:

View File: _osx_support.py

@@ -17,7 +17,7 @@ __all__ = [
_UNIVERSAL_CONFIG_VARS = ('CFLAGS', 'LDFLAGS', 'CPPFLAGS', 'BASECFLAGS',
'BLDSHARED', 'LDSHARED', 'CC', 'CXX',
'PY_CFLAGS', 'PY_LDFLAGS', 'PY_CPPFLAGS',
'PY_CORE_CFLAGS')
'PY_CORE_CFLAGS', 'PY_CORE_LDFLAGS')
# configuration variables that may contain compiler calls
_COMPILER_CONFIG_VARS = ('BLDSHARED', 'LDSHARED', 'CC', 'CXX')
@@ -52,7 +52,7 @@ def _find_executable(executable, path=None):
return executable
def _read_output(commandstring):
def _read_output(commandstring, capture_stderr=False):
"""Output from successful command execution or None"""
# Similar to os.popen(commandstring, "r").read(),
# but without actually using os.popen because that
@@ -67,7 +67,10 @@ def _read_output(commandstring):
os.getpid(),), "w+b")
with contextlib.closing(fp) as fp:
cmd = "%s 2>/dev/null >'%s'" % (commandstring, fp.name)
if capture_stderr:
cmd = "%s >'%s' 2>&1" % (commandstring, fp.name)
else:
cmd = "%s 2>/dev/null >'%s'" % (commandstring, fp.name)
return fp.read().decode('utf-8').strip() if not os.system(cmd) else None
@@ -110,6 +113,26 @@ def _get_system_version():
return _SYSTEM_VERSION
_SYSTEM_VERSION_TUPLE = None
def _get_system_version_tuple():
"""
Return the macOS system version as a tuple
The return value is safe to use to compare
two version numbers.
"""
global _SYSTEM_VERSION_TUPLE
if _SYSTEM_VERSION_TUPLE is None:
osx_version = _get_system_version()
if osx_version:
try:
_SYSTEM_VERSION_TUPLE = tuple(int(i) for i in osx_version.split('.'))
except ValueError:
_SYSTEM_VERSION_TUPLE = ()
return _SYSTEM_VERSION_TUPLE
def _remove_original_values(_config_vars):
"""Remove original unmodified values for testing"""
# This is needed for higher-level cross-platform tests of get_platform.
@@ -125,6 +148,33 @@ def _save_modified_value(_config_vars, cv, newvalue):
_config_vars[_INITPRE + cv] = oldvalue
_config_vars[cv] = newvalue
_cache_default_sysroot = None
def _default_sysroot(cc):
""" Returns the root of the default SDK for this system, or '/' """
global _cache_default_sysroot
if _cache_default_sysroot is not None:
return _cache_default_sysroot
contents = _read_output('%s -c -E -v - </dev/null' % (cc,), True)
in_incdirs = False
for line in contents.splitlines():
if line.startswith("#include <...>"):
in_incdirs = True
elif line.startswith("End of search list"):
in_incdirs = False
elif in_incdirs:
line = line.strip()
if line == '/usr/include':
_cache_default_sysroot = '/'
elif line.endswith(".sdk/usr/include"):
_cache_default_sysroot = line[:-12]
if _cache_default_sysroot is None:
_cache_default_sysroot = '/'
return _cache_default_sysroot
def _supports_universal_builds():
"""Returns True if universal builds are supported on this system"""
# As an approximation, we assume that if we are running on 10.4 or above,
@@ -132,14 +182,18 @@ def _supports_universal_builds():
# builds, in particular -isysroot and -arch arguments to the compiler. This
# is in support of allowing 10.4 universal builds to run on 10.3.x systems.
osx_version = _get_system_version()
if osx_version:
try:
osx_version = tuple(int(i) for i in osx_version.split('.'))
except ValueError:
osx_version = ''
osx_version = _get_system_version_tuple()
return bool(osx_version >= (10, 4)) if osx_version else False
def _supports_arm64_builds():
"""Returns True if arm64 builds are supported on this system"""
# There are two sets of systems supporting macOS/arm64 builds:
# 1. macOS 11 and later, unconditionally
# 2. macOS 10.15 with Xcode 12.2 or later
# For now the second category is ignored.
osx_version = _get_system_version_tuple()
return osx_version >= (11, 0) if osx_version else False
def _find_appropriate_compiler(_config_vars):
"""Find appropriate C compiler for extension module builds"""
@@ -211,7 +265,7 @@ def _remove_universal_flags(_config_vars):
if cv in _config_vars and cv not in os.environ:
flags = _config_vars[cv]
flags = re.sub(r'-arch\s+\w+\s', ' ', flags, flags=re.ASCII)
flags = re.sub('-isysroot [^ \t]*', ' ', flags)
flags = re.sub(r'-isysroot\s*\S+', ' ', flags)
_save_modified_value(_config_vars, cv, flags)
return _config_vars
@@ -287,7 +341,7 @@ def _check_for_unavailable_sdk(_config_vars):
# to /usr and /System/Library by either a standalone CLT
# package or the CLT component within Xcode.
cflags = _config_vars.get('CFLAGS', '')
m = re.search(r'-isysroot\s+(\S+)', cflags)
m = re.search(r'-isysroot\s*(\S+)', cflags)
if m is not None:
sdk = m.group(1)
if not os.path.exists(sdk):
@@ -295,7 +349,7 @@ def _check_for_unavailable_sdk(_config_vars):
# Do not alter a config var explicitly overridden by env var
if cv in _config_vars and cv not in os.environ:
flags = _config_vars[cv]
flags = re.sub(r'-isysroot\s+\S+(?:\s|$)', ' ', flags)
flags = re.sub(r'-isysroot\s*\S+(?:\s|$)', ' ', flags)
_save_modified_value(_config_vars, cv, flags)
return _config_vars
@@ -320,7 +374,7 @@ def compiler_fixup(compiler_so, cc_args):
stripArch = stripSysroot = True
else:
stripArch = '-arch' in cc_args
stripSysroot = '-isysroot' in cc_args
stripSysroot = any(arg for arg in cc_args if arg.startswith('-isysroot'))
if stripArch or 'ARCHFLAGS' in os.environ:
while True:
@@ -331,6 +385,12 @@ def compiler_fixup(compiler_so, cc_args):
except ValueError:
break
elif not _supports_arm64_builds():
# Look for "-arch arm64" and drop that
for idx in reversed(range(len(compiler_so))):
if compiler_so[idx] == '-arch' and compiler_so[idx+1] == "arm64":
del compiler_so[idx:idx+2]
if 'ARCHFLAGS' in os.environ and not stripArch:
# User specified different -arch flags in the environ,
# see also distutils.sysconfig
@@ -338,23 +398,34 @@ def compiler_fixup(compiler_so, cc_args):
if stripSysroot:
while True:
try:
index = compiler_so.index('-isysroot')
indices = [i for i,x in enumerate(compiler_so) if x.startswith('-isysroot')]
if not indices:
break
index = indices[0]
if compiler_so[index] == '-isysroot':
# Strip this argument and the next one:
del compiler_so[index:index+2]
except ValueError:
break
else:
# It's '-isysroot/some/path' in one arg
del compiler_so[index:index+1]
# Check if the SDK that is used during compilation actually exists,
# the universal build requires the usage of a universal SDK and not all
# users have that installed by default.
sysroot = None
if '-isysroot' in cc_args:
idx = cc_args.index('-isysroot')
sysroot = cc_args[idx+1]
elif '-isysroot' in compiler_so:
idx = compiler_so.index('-isysroot')
sysroot = compiler_so[idx+1]
argvar = cc_args
indices = [i for i,x in enumerate(cc_args) if x.startswith('-isysroot')]
if not indices:
argvar = compiler_so
indices = [i for i,x in enumerate(compiler_so) if x.startswith('-isysroot')]
for idx in indices:
if argvar[idx] == '-isysroot':
sysroot = argvar[idx+1]
break
else:
sysroot = argvar[idx][len('-isysroot'):]
break
if sysroot and not os.path.isdir(sysroot):
from distutils import log
@@ -411,7 +482,7 @@ def customize_compiler(_config_vars):
This customization is performed when the first
extension module build is requested
in distutils.sysconfig.customize_compiler).
in distutils.sysconfig.customize_compiler.
"""
# Find a compiler to use for extension module builds
@@ -470,6 +541,8 @@ def get_platform_osx(_config_vars, osname, release, machine):
if len(archs) == 1:
machine = archs[0]
elif archs == ('arm64', 'x86_64'):
machine = 'universal2'
elif archs == ('i386', 'ppc'):
machine = 'fat'
elif archs == ('i386', 'x86_64'):
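Note: a running theme in this hunk is that -isysroot can appear either as an argument pair (-isysroot /path) or as one fused token (-isysroot/path), hence the switch from list .index('-isysroot') lookups to startswith() scans and from \s+ to \s* in the regexes. A quick check of the relaxed pattern (the SDK path below is made up):

import re

for flags in ("-g -isysroot /Fake/MacOSX11.sdk -O2",
              "-g -isysroot/Fake/MacOSX11.sdk -O2"):
    # Both spellings yield the same SDK path and the same stripped flag list.
    print(re.search(r'-isysroot\s*(\S+)', flags).group(1))       # /Fake/MacOSX11.sdk
    print(re.sub(r'-isysroot\s*\S+(?:\s|$)', ' ', flags).split())  # ['-g', '-O2']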

View File: _py_abc.py

@@ -32,7 +32,7 @@ class ABCMeta(type):
# external code.
_abc_invalidation_counter = 0
def __new__(mcls, name, bases, namespace, **kwargs):
def __new__(mcls, name, bases, namespace, /, **kwargs):
cls = super().__new__(mcls, name, bases, namespace, **kwargs)
# Compute set of abstract method names
abstracts = {name

View File: _pydecimal.py

@@ -140,8 +140,11 @@ __all__ = [
# Limits for the C version for compatibility
'MAX_PREC', 'MAX_EMAX', 'MIN_EMIN', 'MIN_ETINY',
# C version: compile time choice that enables the thread local context
'HAVE_THREADS'
# C version: compile time choice that enables the thread local context (deprecated, now always true)
'HAVE_THREADS',
# C version: compile time choice that enables the coroutine local context
'HAVE_CONTEXTVAR'
]
__xname__ = __name__ # sys.modules lookup (--without-threads)
@@ -172,6 +175,7 @@ ROUND_05UP = 'ROUND_05UP'
# Compatibility with the C version
HAVE_THREADS = True
HAVE_CONTEXTVAR = True
if sys.maxsize == 2**63-1:
MAX_PREC = 999999999999999999
MAX_EMAX = 999999999999999999
@@ -2021,7 +2025,7 @@ class Decimal(object):
if not other and not self:
return context._raise_error(InvalidOperation,
'at least one of pow() 1st argument '
'and 2nd argument must be nonzero ;'
'and 2nd argument must be nonzero; '
'0**0 is not defined')
# compute sign of result
@@ -5631,8 +5635,6 @@ class _WorkRep(object):
def __repr__(self):
return "(%r, %r, %r)" % (self.sign, self.int, self.exp)
__str__ = __repr__
def _normalize(op1, op2, prec = 0):

View File: _pyio.py

@@ -33,6 +33,12 @@ DEFAULT_BUFFER_SIZE = 8 * 1024 # bytes
# Rebind for compatibility
BlockingIOError = BlockingIOError
# Does io.IOBase finalizer log the exception if the close() method fails?
# The exception is ignored silently by default in release build.
_IOBASE_EMITS_UNRAISABLE = (hasattr(sys, "gettotalrefcount") or sys.flags.dev_mode)
# Does open() check its 'errors' argument?
_CHECK_ERRORS = _IOBASE_EMITS_UNRAISABLE
def open(file, mode="r", buffering=-1, encoding=None, errors=None,
newline=None, closefd=True, opener=None):
@@ -198,6 +204,11 @@ def open(file, mode="r", buffering=-1, encoding=None, errors=None,
raise ValueError("binary mode doesn't take an errors argument")
if binary and newline is not None:
raise ValueError("binary mode doesn't take a newline argument")
if binary and buffering == 1:
import warnings
warnings.warn("line buffering (buffering=1) isn't supported in binary "
"mode, the default buffer size will be used",
RuntimeWarning, 2)
raw = FileIO(file,
(creating and "x" or "") +
(reading and "r" or "") +
@@ -245,11 +256,34 @@ def open(file, mode="r", buffering=-1, encoding=None, errors=None,
result.close()
raise
# Define a default pure-Python implementation for open_code()
# that does not allow hooks. Warn on first use. Defined for tests.
def _open_code_with_warning(path):
"""Opens the provided file with mode ``'rb'``. This function
should be used when the intent is to treat the contents as
executable code.
``path`` should be an absolute path.
When supported by the runtime, this function can be hooked
in order to allow embedders more control over code files.
This functionality is not supported on the current runtime.
"""
import warnings
warnings.warn("_pyio.open_code() may not be using hooks",
RuntimeWarning, 2)
return open(path, "rb")
try:
open_code = io.open_code
except AttributeError:
open_code = _open_code_with_warning
class DocDescriptor:
"""Helper for builtins.open.__doc__
"""
def __get__(self, obj, typ):
def __get__(self, obj, typ=None):
return (
"open(file, mode='r', buffering=-1, encoding=None, "
"errors=None, newline=None, closefd=True)\n\n" +
@@ -280,23 +314,21 @@ except AttributeError:
class IOBase(metaclass=abc.ABCMeta):
"""The abstract base class for all I/O classes, acting on streams of
bytes. There is no public constructor.
"""The abstract base class for all I/O classes.
This class provides dummy implementations for many methods that
derived classes can override selectively; the default implementations
represent a file that cannot be read, written or seeked.
Even though IOBase does not declare read, readinto, or write because
Even though IOBase does not declare read or write because
their signatures will vary, implementations and clients should
consider those methods part of the interface. Also, implementations
may raise UnsupportedOperation when operations they do not support are
called.
The basic type used for binary data read from or written to a file is
bytes. Other bytes-like objects are accepted as method arguments too. In
some cases (such as readinto), a writable object is required. Text I/O
classes work with str data.
bytes. Other bytes-like objects are accepted as method arguments too.
Text I/O classes work with str data.
Note that calling any method (even inquiries) on a closed stream is
undefined. Implementations may raise OSError in this case.
@@ -374,15 +406,28 @@ class IOBase(metaclass=abc.ABCMeta):
def __del__(self):
"""Destructor. Calls close()."""
# The try/except block is in case this is called at program
# exit time, when it's possible that globals have already been
# deleted, and then the close() call might fail. Since
# there's nothing we can do about such failures and they annoy
# the end users, we suppress the traceback.
try:
closed = self.closed
except AttributeError:
# If getting closed fails, then the object is probably
# in an unusable state, so ignore.
return
if closed:
return
if _IOBASE_EMITS_UNRAISABLE:
self.close()
except:
pass
else:
# The try/except block is in case this is called at program
# exit time, when it's possible that globals have already been
# deleted, and then the close() call might fail. Since
# there's nothing we can do about such failures and they annoy
# the end users, we suppress the traceback.
try:
self.close()
except:
pass
### Inquiries ###
@@ -547,6 +592,11 @@ class IOBase(metaclass=abc.ABCMeta):
return lines
def writelines(self, lines):
"""Write a list of lines to the stream.
Line separators are not added, so it is usual for each of the lines
provided to have a line separator at the end.
"""
self._checkClosed()
for line in lines:
self.write(line)
@@ -753,6 +803,9 @@ class _BufferedIOMixin(BufferedIOBase):
return pos
def truncate(self, pos=None):
self._checkClosed()
self._checkWritable()
# Flush the stream. We're mixing buffered I/O with lower-level I/O,
# and a flush may be necessary to synch both views of the current
# file state.
@@ -809,15 +862,14 @@ class _BufferedIOMixin(BufferedIOBase):
return self.raw.mode
def __getstate__(self):
raise TypeError("can not serialize a '{0}' object"
.format(self.__class__.__name__))
raise TypeError(f"cannot pickle {self.__class__.__name__!r} object")
def __repr__(self):
modname = self.__class__.__module__
clsname = self.__class__.__qualname__
try:
name = self.name
except Exception:
except AttributeError:
return "<{}.{}>".format(modname, clsname)
else:
return "<{}.{} name={!r}>".format(modname, clsname, name)
@@ -835,6 +887,10 @@ class BytesIO(BufferedIOBase):
"""Buffered I/O implementation using an in-memory bytes buffer."""
# Initialize _buffer as soon as possible since it's used by __del__()
# which calls close()
_buffer = None
def __init__(self, initial_bytes=None):
buf = bytearray()
if initial_bytes is not None:
@@ -862,7 +918,8 @@ class BytesIO(BufferedIOBase):
return memoryview(self._buffer)
def close(self):
self._buffer.clear()
if self._buffer is not None:
self._buffer.clear()
super().close()
def read(self, size=-1):
@@ -1518,7 +1575,7 @@ class FileIO(RawIOBase):
raise IsADirectoryError(errno.EISDIR,
os.strerror(errno.EISDIR), file)
except AttributeError:
# Ignore the AttribueError if stat.S_ISDIR or errno.EISDIR
# Ignore the AttributeError if stat.S_ISDIR or errno.EISDIR
# don't exist.
pass
self._blksize = getattr(fdfstat, 'st_blksize', 0)
@@ -1534,7 +1591,11 @@ class FileIO(RawIOBase):
# For consistent behaviour, we explicitly seek to the
# end of file (otherwise, it might be done only on the
# first write()).
os.lseek(fd, 0, SEEK_END)
try:
os.lseek(fd, 0, SEEK_END)
except OSError as e:
if e.errno != errno.ESPIPE:
raise
except:
if owned_fd is not None:
os.close(owned_fd)
@@ -1549,7 +1610,7 @@ class FileIO(RawIOBase):
self.close()
def __getstate__(self):
raise TypeError("cannot serialize '%s' object", self.__class__.__name__)
raise TypeError(f"cannot pickle {self.__class__.__name__!r} object")
def __repr__(self):
class_name = '%s.%s' % (self.__class__.__module__,
@@ -1759,8 +1820,7 @@ class TextIOBase(IOBase):
"""Base class for text I/O.
This class provides a character and line based interface to stream
I/O. There is no readinto method because Python's character strings
are immutable. There is no public constructor.
I/O.
"""
def read(self, size=-1):
@@ -1933,6 +1993,10 @@ class TextIOWrapper(TextIOBase):
_CHUNK_SIZE = 2048
# Initialize _buffer as soon as possible since it's used by __del__()
# which calls close()
_buffer = None
# The write_through argument has no effect here since this
# implementation always writes through. The argument is present only
# so that the signature can match the signature of the C version.
@@ -1966,6 +2030,8 @@ class TextIOWrapper(TextIOBase):
else:
if not isinstance(errors, str):
raise ValueError("invalid errors: %r" % errors)
if _CHECK_ERRORS:
codecs.lookup_error(errors)
self._buffer = buffer
self._decoded_chars = '' # buffer for text returned from decoder
@@ -2023,13 +2089,13 @@ class TextIOWrapper(TextIOBase):
self.__class__.__qualname__)
try:
name = self.name
except Exception:
except AttributeError:
pass
else:
result += " name={0!r}".format(name)
try:
mode = self.mode
except Exception:
except AttributeError:
pass
else:
result += " mode={0!r}".format(mode)
@@ -2149,6 +2215,7 @@ class TextIOWrapper(TextIOBase):
self.buffer.write(b)
if self._line_buffering and (haslf or "\r" in s):
self.flush()
self._set_decoded_chars('')
self._snapshot = None
if self._decoder:
self._decoder.reset()
@@ -2234,7 +2301,7 @@ class TextIOWrapper(TextIOBase):
return not eof
def _pack_cookie(self, position, dec_flags=0,
bytes_to_feed=0, need_eof=0, chars_to_skip=0):
bytes_to_feed=0, need_eof=False, chars_to_skip=0):
# The meaning of a tell() cookie is: seek to position, set the
# decoder flags to dec_flags, read bytes_to_feed bytes, feed them
# into the decoder with need_eof as the EOF flag, then skip
@@ -2248,7 +2315,7 @@ class TextIOWrapper(TextIOBase):
rest, dec_flags = divmod(rest, 1<<64)
rest, bytes_to_feed = divmod(rest, 1<<64)
need_eof, chars_to_skip = divmod(rest, 1<<64)
return position, dec_flags, bytes_to_feed, need_eof, chars_to_skip
return position, dec_flags, bytes_to_feed, bool(need_eof), chars_to_skip
def tell(self):
if not self._seekable:
@@ -2282,7 +2349,7 @@ class TextIOWrapper(TextIOBase):
# current pos.
# Rationale: calling decoder.decode() has a large overhead
# regardless of chunk size; we want the number of such calls to
# be O(1) in most situations (common decoders, non-crazy input).
# be O(1) in most situations (common decoders, sensible input).
# Actually, it will be exactly 1 for fixed-size codecs (all
# 8-bit codecs, also UTF-16 and UTF-32).
skip_bytes = int(self._b2cratio * chars_to_skip)
@@ -2322,7 +2389,7 @@ class TextIOWrapper(TextIOBase):
# (a point where the decoder has nothing buffered, so seek()
# can safely start from there and advance to this location).
bytes_fed = 0
need_eof = 0
need_eof = False
# Chars decoded since `start_pos`
chars_decoded = 0
for i in range(skip_bytes, len(next_input)):
@@ -2339,7 +2406,7 @@ class TextIOWrapper(TextIOBase):
else:
# We didn't get enough decoded data; signal EOF to get more.
chars_decoded += len(decoder.decode(b'', final=True))
need_eof = 1
need_eof = True
if chars_decoded < chars_to_skip:
raise OSError("can't reconstruct logical file position")
@@ -2381,18 +2448,18 @@ class TextIOWrapper(TextIOBase):
raise ValueError("tell on closed file")
if not self._seekable:
raise UnsupportedOperation("underlying stream is not seekable")
if whence == 1: # seek relative to current position
if whence == SEEK_CUR:
if cookie != 0:
raise UnsupportedOperation("can't do nonzero cur-relative seeks")
# Seeking to the current position should attempt to
# sync the underlying buffer with the current position.
whence = 0
cookie = self.tell()
if whence == 2: # seek relative to end of file
elif whence == SEEK_END:
if cookie != 0:
raise UnsupportedOperation("can't do nonzero end-relative seeks")
self.flush()
position = self.buffer.seek(0, 2)
position = self.buffer.seek(0, whence)
self._set_decoded_chars('')
self._snapshot = None
if self._decoder:
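Note: the most user-visible change in this file is the RuntimeWarning for line buffering on binary streams (the C-accelerated io module received the matching check, so the built-in open() behaves the same). A minimal repro, using a throwaway temp file:

import os, tempfile, warnings

path = os.path.join(tempfile.gettempdir(), "pyio_demo.bin")
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    with open(path, "wb", buffering=1) as f:   # buffering=1 asks for line buffering
        f.write(b"data")
print(caught[0].category.__name__)   # RuntimeWarning
print(caught[0].message)             # line buffering (buffering=1) isn't supported ...
os.remove(path)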

View File: _strptime.py

@@ -77,15 +77,6 @@ class LocaleTime(object):
if time.tzname != self.tzname or time.daylight != self.daylight:
raise ValueError("timezone changed during initialization")
def __pad(self, seq, front):
# Add '' to seq to either the front (is True), else the back.
seq = list(seq)
if front:
seq.insert(0, '')
else:
seq.append('')
return seq
def __calc_weekday(self):
# Set self.a_weekday and self.f_weekday using the calendar
# module.
@@ -191,7 +182,7 @@ class TimeRE(dict):
self.locale_time = LocaleTime()
base = super()
base.__init__({
# The " \d" part of the regex is to make %c from ANSI C work
# The " [1-9]" part of the regex is to make %c from ANSI C work
'd': r"(?P<d>3[0-1]|[1-2]\d|0[1-9]|[1-9]| [1-9])",
'f': r"(?P<f>[0-9]{1,6})",
'H': r"(?P<H>2[0-3]|[0-1]\d|\d)",
@@ -210,7 +201,7 @@ class TimeRE(dict):
#XXX: Does 'Y' need to worry about having less or more than
# 4 digits?
'Y': r"(?P<Y>\d\d\d\d)",
'z': r"(?P<z>[+-]\d\d:?[0-5]\d(:?[0-5]\d(\.\d{1,6})?)?|Z)",
'z': r"(?P<z>[+-]\d\d:?[0-5]\d(:?[0-5]\d(\.\d{1,6})?)?|(?-i:Z))",
'A': self.__seqToRE(self.locale_time.f_weekday, 'A'),
'a': self.__seqToRE(self.locale_time.a_weekday, 'a'),
'B': self.__seqToRE(self.locale_time.f_month[1:], 'B'),
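Note: the %z pattern now wraps Z in (?-i:...) so that only an uppercase Z is accepted as the UTC designator, even though strptime matching is otherwise case-insensitive; the %d comment edit just aligns the prose with what the regex already matched. With this version of the module:

from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S%z"
print(datetime.strptime("2024-04-11T22:01:25Z", fmt).tzinfo)   # UTC

try:
    datetime.strptime("2024-04-11T22:01:25z", fmt)             # lowercase z
except ValueError as e:
    print("rejected:", e)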

View File: _threading_local.py

@@ -56,7 +56,7 @@ You can create custom local objects by subclassing the local class:
>>> class MyLocal(local):
... number = 2
... def __init__(self, **kw):
... def __init__(self, /, **kw):
... self.__dict__.update(kw)
... def squared(self):
... return self.number ** 2
@@ -204,7 +204,7 @@ def _patch(self):
class local:
__slots__ = '_local__impl', '__dict__'
def __new__(cls, *args, **kw):
def __new__(cls, /, *args, **kw):
if (args or kw) and (cls.__init__ is object.__init__):
raise TypeError("Initialization arguments are not supported")
self = object.__new__(cls)

View File: _weakrefset.py

@@ -3,6 +3,7 @@
# by abc.py to load everything else at startup.
from _weakref import ref
from types import GenericAlias
__all__ = ['WeakSet']
@@ -50,10 +51,14 @@ class WeakSet:
self.update(data)
def _commit_removals(self):
l = self._pending_removals
pop = self._pending_removals.pop
discard = self.data.discard
while l:
discard(l.pop())
while True:
try:
item = pop()
except IndexError:
return
discard(item)
def __iter__(self):
with _IterationGuard(self):
@@ -194,3 +199,8 @@ class WeakSet:
def isdisjoint(self, other):
return len(self.intersection(other)) == 0
def __repr__(self):
return repr(self.data)
__class_getitem__ = classmethod(GenericAlias)
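Note: two independent fixes here. _commit_removals now pops one item at a time and tolerates a concurrent pop (the old "while l: discard(l.pop())" could race between the emptiness test and the pop), and WeakSet gains a __repr__ plus runtime subscription. The latter is easy to see (Python 3.9+; immediate collection assumes CPython refcounting):

from weakref import WeakSet

class Node: ...

live = WeakSet[Node]()   # parameterized alias, newly allowed at runtime
n = Node()
live.add(n)
print(len(live))         # 1
del n                    # CPython drops the dead entry immediately
print(len(live))         # 0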


@@ -11,7 +11,8 @@ def abstractmethod(funcobj):
class that has a metaclass derived from ABCMeta cannot be
instantiated unless all of its abstract methods are overridden.
The abstract methods can be called using any of the normal
'super' call mechanisms.
'super' call mechanisms. abstractmethod() may be used to declare
abstract methods for properties and descriptors.
Usage:
@@ -27,17 +28,14 @@ def abstractmethod(funcobj):
class abstractclassmethod(classmethod):
"""A decorator indicating abstract classmethods.
Similar to abstractmethod.
Deprecated, use 'classmethod' with 'abstractmethod' instead:
Usage:
class C(metaclass=ABCMeta):
@abstractclassmethod
class C(ABC):
@classmethod
@abstractmethod
def my_abstract_classmethod(cls, ...):
...
'abstractclassmethod' is deprecated. Use 'classmethod' with
'abstractmethod' instead.
"""
__isabstractmethod__ = True
@@ -50,17 +48,14 @@ class abstractclassmethod(classmethod):
class abstractstaticmethod(staticmethod):
"""A decorator indicating abstract staticmethods.
Similar to abstractmethod.
Deprecated, use 'staticmethod' with 'abstractmethod' instead:
Usage:
class C(metaclass=ABCMeta):
@abstractstaticmethod
class C(ABC):
@staticmethod
@abstractmethod
def my_abstract_staticmethod(...):
...
'abstractstaticmethod' is deprecated. Use 'staticmethod' with
'abstractmethod' instead.
"""
__isabstractmethod__ = True
@@ -73,29 +68,14 @@ class abstractstaticmethod(staticmethod):
class abstractproperty(property):
"""A decorator indicating abstract properties.
Requires that the metaclass is ABCMeta or derived from it. A
class that has a metaclass derived from ABCMeta cannot be
instantiated unless all of its abstract properties are overridden.
The abstract properties can be called using any of the normal
'super' call mechanisms.
Deprecated, use 'property' with 'abstractmethod' instead:
Usage:
class C(metaclass=ABCMeta):
@abstractproperty
class C(ABC):
@property
@abstractmethod
def my_abstract_property(self):
...
This defines a read-only property; you can also define a read-write
abstract property using the 'long' form of property declaration:
class C(metaclass=ABCMeta):
def getx(self): ...
def setx(self, value): ...
x = abstractproperty(getx, setx)
'abstractproperty' is deprecated. Use 'property' with 'abstractmethod'
instead.
"""
__isabstractmethod__ = True


@@ -138,7 +138,7 @@ import struct
import builtins
import warnings
__all__ = ["Error", "open", "openfp"]
__all__ = ["Error", "open"]
class Error(Exception):
pass
@@ -920,10 +920,6 @@ def open(f, mode=None):
else:
raise Error("mode must be 'r', 'rb', 'w', or 'wb'")
def openfp(f, mode=None):
warnings.warn("aifc.openfp is deprecated since Python 3.7. "
"Use aifc.open instead.", DeprecationWarning, stacklevel=2)
return open(f, mode=mode)
if __name__ == '__main__':
import sys


@@ -11,7 +11,7 @@ def geohash(latitude, longitude, datedow):
37.857713 -122.544543
'''
# http://xkcd.com/426/
h = hashlib.md5(datedow).hexdigest()
# https://xkcd.com/426/
h = hashlib.md5(datedow, usedforsecurity=False).hexdigest()
p, q = [('%f' % float.fromhex('0.' + x)) for x in (h[:16], h[16:32])]
print('%d%s %d%s' % (latitude, p[1:], longitude, q[1:]))


@@ -1,4 +1,5 @@
# Author: Steven J. Bethard <steven.bethard@gmail.com>.
# New maintainer as of 29 August 2019: Raymond Hettinger <raymond.hettinger@gmail.com>
"""Command-line parsing library
@@ -66,6 +67,7 @@ __all__ = [
'ArgumentParser',
'ArgumentError',
'ArgumentTypeError',
'BooleanOptionalAction',
'FileType',
'HelpFormatter',
'ArgumentDefaultsHelpFormatter',
@@ -127,7 +129,7 @@ class _AttributeHolder(object):
return '%s(%s)' % (type_name, ', '.join(arg_strings))
def _get_kwargs(self):
return sorted(self.__dict__.items())
return list(self.__dict__.items())
def _get_args(self):
return []
@@ -164,15 +166,12 @@ class HelpFormatter(object):
# default setting for width
if width is None:
try:
width = int(_os.environ['COLUMNS'])
except (KeyError, ValueError):
width = 80
import shutil
width = shutil.get_terminal_size().columns
width -= 2
self._prog = prog
self._indent_increment = indent_increment
self._max_help_position = max_help_position
self._max_help_position = min(max_help_position,
max(width - 20, indent_increment * 2))
self._width = width
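shutil.get_terminal_size() consults the COLUMNS environment variable before falling back to the tty size (and then to 80x24), so the removed os.environ lookup is subsumed; roughly:

    import shutil

    width = shutil.get_terminal_size().columns - 2   # honors $COLUMNS if set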
@@ -265,7 +264,7 @@ class HelpFormatter(object):
invocations.append(get_invocation(subaction))
# update the maximum item length
invocation_length = max([len(s) for s in invocations])
invocation_length = max(map(len, invocations))
action_length = invocation_length + self._current_indent
self._action_max_length = max(self._action_max_length,
action_length)
@@ -393,6 +392,9 @@ class HelpFormatter(object):
group_actions = set()
inserts = {}
for group in groups:
if not group._group_actions:
raise ValueError(f'empty group {group}')
try:
start = actions.index(group._group_actions[0])
except ValueError:
@@ -407,13 +409,19 @@ class HelpFormatter(object):
inserts[start] += ' ['
else:
inserts[start] = '['
inserts[end] = ']'
if end in inserts:
inserts[end] += ']'
else:
inserts[end] = ']'
else:
if start in inserts:
inserts[start] += ' ('
else:
inserts[start] = '('
inserts[end] = ')'
if end in inserts:
inserts[end] += ')'
else:
inserts[end] = ')'
for i in range(start + 1, end):
inserts[i] = '|'
@@ -450,7 +458,7 @@ class HelpFormatter(object):
# if the Optional doesn't take a value, format is:
# -s or --long
if action.nargs == 0:
part = '%s' % option_string
part = action.format_usage()
# if the Optional takes a value, format is:
# -s ARGS or --long ARGS
@@ -521,12 +529,13 @@ class HelpFormatter(object):
parts = [action_header]
# if there was help for the action, add lines of help text
if action.help:
if action.help and action.help.strip():
help_text = self._expand_help(action)
help_lines = self._split_lines(help_text, help_width)
parts.append('%*s%s\n' % (indent_first, '', help_lines[0]))
for line in help_lines[1:]:
parts.append('%*s%s\n' % (help_position, '', line))
if help_text:
help_lines = self._split_lines(help_text, help_width)
parts.append('%*s%s\n' % (indent_first, '', help_lines[0]))
for line in help_lines[1:]:
parts.append('%*s%s\n' % (help_position, '', line))
# or add a newline if the description doesn't end with one
elif not action_header.endswith('\n'):
@@ -586,7 +595,11 @@ class HelpFormatter(object):
elif action.nargs == OPTIONAL:
result = '[%s]' % get_metavar(1)
elif action.nargs == ZERO_OR_MORE:
result = '[%s [%s ...]]' % get_metavar(2)
metavar = get_metavar(1)
if len(metavar) == 2:
result = '[%s [%s ...]]' % metavar
else:
result = '[%s ...]' % metavar
elif action.nargs == ONE_OR_MORE:
result = '%s [%s ...]' % get_metavar(2)
elif action.nargs == REMAINDER:
@@ -596,7 +609,10 @@ class HelpFormatter(object):
elif action.nargs == SUPPRESS:
result = ''
else:
formats = ['%s' for _ in range(action.nargs)]
try:
formats = ['%s' for _ in range(action.nargs)]
except TypeError:
raise ValueError("invalid nargs value") from None
result = ' '.join(formats) % get_metavar(action.nargs)
return result
@@ -710,11 +726,13 @@ def _get_action_name(argument):
if argument is None:
return None
elif argument.option_strings:
return '/'.join(argument.option_strings)
return '/'.join(argument.option_strings)
elif argument.metavar not in (None, SUPPRESS):
return argument.metavar
elif argument.dest not in (None, SUPPRESS):
return argument.dest
elif argument.choices:
return '{' + ','.join(argument.choices) + '}'
else:
return None
@@ -830,14 +848,58 @@ class Action(_AttributeHolder):
'default',
'type',
'choices',
'required',
'help',
'metavar',
]
return [(name, getattr(self, name)) for name in names]
def format_usage(self):
return self.option_strings[0]
def __call__(self, parser, namespace, values, option_string=None):
raise NotImplementedError(_('.__call__() not defined'))
class BooleanOptionalAction(Action):
def __init__(self,
option_strings,
dest,
default=None,
type=None,
choices=None,
required=False,
help=None,
metavar=None):
_option_strings = []
for option_string in option_strings:
_option_strings.append(option_string)
if option_string.startswith('--'):
option_string = '--no-' + option_string[2:]
_option_strings.append(option_string)
if help is not None and default is not None and default is not SUPPRESS:
help += " (default: %(default)s)"
super().__init__(
option_strings=_option_strings,
dest=dest,
nargs=0,
default=default,
type=type,
choices=choices,
required=required,
help=help,
metavar=metavar)
def __call__(self, parser, namespace, values, option_string=None):
if option_string in self.option_strings:
setattr(namespace, self.dest, not option_string.startswith('--no-'))
def format_usage(self):
return ' | '.join(self.option_strings)
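Every '--flag' registered with this action automatically gains a matching '--no-flag' negation; an illustrative use (the option name is arbitrary):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--cache', action=argparse.BooleanOptionalAction,
                        default=True)

    print(parser.parse_args([]).cache)               # True (the default)
    print(parser.parse_args(['--no-cache']).cache)   # False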
class _StoreAction(Action):
@@ -853,7 +915,7 @@ class _StoreAction(Action):
help=None,
metavar=None):
if nargs == 0:
raise ValueError('nargs for store actions must be > 0; if you '
raise ValueError('nargs for store actions must be != 0; if you '
'have nothing to store, actions such as store '
'true or store const may be more appropriate')
if const is not None and nargs != OPTIONAL:
@@ -945,7 +1007,7 @@ class _AppendAction(Action):
help=None,
metavar=None):
if nargs == 0:
raise ValueError('nargs for append actions must be > 0; if arg '
raise ValueError('nargs for append actions must be != 0; if arg '
'strings are not supplying the value to append, '
'the append const action may be more appropriate')
if const is not None and nargs != OPTIONAL:
@@ -1157,6 +1219,12 @@ class _SubParsersAction(Action):
vars(namespace).setdefault(_UNRECOGNIZED_ARGS_ATTR, [])
getattr(namespace, _UNRECOGNIZED_ARGS_ATTR).extend(arg_strings)
class _ExtendAction(_AppendAction):
def __call__(self, parser, namespace, values, option_string=None):
items = getattr(namespace, self.dest, None)
items = _copy_items(items)
items.extend(values)
setattr(namespace, self.dest, items)
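Registered under the name 'extend' (see the register() call below), it flattens repeated multi-value options into a single list; a small illustrative parser:

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--tag', action='extend', nargs='+')

    ns = parser.parse_args(['--tag', 'a', 'b', '--tag', 'c'])
    print(ns.tag)   # ['a', 'b', 'c']; plain 'append' would give [['a', 'b'], ['c']]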
# ==============
# Type classes
@@ -1189,9 +1257,9 @@ class FileType(object):
# the special argument "-" means sys.std{in,out}
if string == '-':
if 'r' in self._mode:
return _sys.stdin
elif 'w' in self._mode:
return _sys.stdout
return _sys.stdin.buffer if 'b' in self._mode else _sys.stdin
elif any(c in self._mode for c in 'wax'):
return _sys.stdout.buffer if 'b' in self._mode else _sys.stdout
else:
msg = _('argument "-" with mode %r') % self._mode
raise ValueError(msg)
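With any of 'w', 'a' or 'x' now treated as writing, and 'b' selecting the raw buffer, '-' works for binary pipes as well; a brief sketch:

    import argparse

    binary_in = argparse.FileType('rb')('-')   # sys.stdin.buffer, not sys.stdin
    text_out = argparse.FileType('w')('-')     # sys.stdout, as before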
@@ -1201,8 +1269,9 @@ class FileType(object):
return open(string, self._mode, self._bufsize, self._encoding,
self._errors)
except OSError as e:
message = _("can't open '%s': %s")
raise ArgumentTypeError(message % (string, e))
args = {'filename': string, 'error': e}
message = _("can't open '%(filename)s': %(error)s")
raise ArgumentTypeError(message % args)
def __repr__(self):
args = self._mode, self._bufsize
@@ -1265,6 +1334,7 @@ class _ActionsContainer(object):
self.register('action', 'help', _HelpAction)
self.register('action', 'version', _VersionAction)
self.register('action', 'parsers', _SubParsersAction)
self.register('action', 'extend', _ExtendAction)
# raise an exception if the conflict handler is invalid
self._get_handler()
@@ -1357,6 +1427,10 @@ class _ActionsContainer(object):
if not callable(type_func):
raise ValueError('%r is not callable' % (type_func,))
if type_func is FileType:
raise ValueError('%r is a FileType class object, instance of it'
' must be passed' % (type_func,))
# raise an error if the metavar does not match the type
if hasattr(self, "_get_formatter"):
try:
@@ -1471,10 +1545,8 @@ class _ActionsContainer(object):
# strings starting with two prefix characters are long options
option_strings.append(option_string)
if option_string[0] in self.prefix_chars:
if len(option_string) > 1:
if option_string[1] in self.prefix_chars:
long_option_strings.append(option_string)
if len(option_string) > 1 and option_string[1] in self.prefix_chars:
long_option_strings.append(option_string)
# infer destination, '--foo-bar' -> 'foo_bar' and '-x' -> 'x'
dest = kwargs.pop('dest', None)
@@ -1614,6 +1686,8 @@ class ArgumentParser(_AttributeHolder, _ActionsContainer):
- conflict_handler -- String indicating how to handle conflicts
- add_help -- Add a -h/-help option
- allow_abbrev -- Allow long options to be abbreviated unambiguously
- exit_on_error -- Determines whether or not ArgumentParser exits with
error info when an error occurs
"""
def __init__(self,
@@ -1628,7 +1702,8 @@ class ArgumentParser(_AttributeHolder, _ActionsContainer):
argument_default=None,
conflict_handler='error',
add_help=True,
allow_abbrev=True):
allow_abbrev=True,
exit_on_error=True):
superinit = super(ArgumentParser, self).__init__
superinit(description=description,
@@ -1647,6 +1722,7 @@ class ArgumentParser(_AttributeHolder, _ActionsContainer):
self.fromfile_prefix_chars = fromfile_prefix_chars
self.add_help = add_help
self.allow_abbrev = allow_abbrev
self.exit_on_error = exit_on_error
add_group = self.add_argument_group
self._positionals = add_group(_('positional arguments'))
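With exit_on_error=False the parser raises argparse.ArgumentError instead of printing usage and calling sys.exit(2), which makes parse failures easy to test; for example:

    import argparse

    parser = argparse.ArgumentParser(exit_on_error=False)
    parser.add_argument('--count', type=int)

    try:
        parser.parse_args(['--count', 'x'])
    except argparse.ArgumentError as err:
        print('rejected:', err)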
@@ -1777,15 +1853,19 @@ class ArgumentParser(_AttributeHolder, _ActionsContainer):
setattr(namespace, dest, self._defaults[dest])
# parse the arguments and exit if there are any errors
try:
if self.exit_on_error:
try:
namespace, args = self._parse_known_args(args, namespace)
except ArgumentError:
err = _sys.exc_info()[1]
self.error(str(err))
else:
namespace, args = self._parse_known_args(args, namespace)
if hasattr(namespace, _UNRECOGNIZED_ARGS_ATTR):
args.extend(getattr(namespace, _UNRECOGNIZED_ARGS_ATTR))
delattr(namespace, _UNRECOGNIZED_ARGS_ATTR)
return namespace, args
except ArgumentError:
err = _sys.exc_info()[1]
self.error(str(err))
if hasattr(namespace, _UNRECOGNIZED_ARGS_ATTR):
args.extend(getattr(namespace, _UNRECOGNIZED_ARGS_ATTR))
delattr(namespace, _UNRECOGNIZED_ARGS_ATTR)
return namespace, args
def _parse_known_args(self, arg_strings, namespace):
# replace arg strings that are file references
@@ -2074,10 +2154,11 @@ class ArgumentParser(_AttributeHolder, _ActionsContainer):
OPTIONAL: _('expected at most one argument'),
ONE_OR_MORE: _('expected at least one argument'),
}
default = ngettext('expected %s argument',
msg = nargs_errors.get(action.nargs)
if msg is None:
msg = ngettext('expected %s argument',
'expected %s arguments',
action.nargs) % action.nargs
msg = nargs_errors.get(action.nargs, default)
raise ArgumentError(action, msg)
# return the number of arguments matched
@@ -2124,24 +2205,23 @@ class ArgumentParser(_AttributeHolder, _ActionsContainer):
action = self._option_string_actions[option_string]
return action, option_string, explicit_arg
if self.allow_abbrev:
# search through all possible prefixes of the option string
# and all actions in the parser for possible interpretations
option_tuples = self._get_option_tuples(arg_string)
# search through all possible prefixes of the option string
# and all actions in the parser for possible interpretations
option_tuples = self._get_option_tuples(arg_string)
# if multiple actions match, the option string was ambiguous
if len(option_tuples) > 1:
options = ', '.join([option_string
for action, option_string, explicit_arg in option_tuples])
args = {'option': arg_string, 'matches': options}
msg = _('ambiguous option: %(option)s could match %(matches)s')
self.error(msg % args)
# if multiple actions match, the option string was ambiguous
if len(option_tuples) > 1:
options = ', '.join([option_string
for action, option_string, explicit_arg in option_tuples])
args = {'option': arg_string, 'matches': options}
msg = _('ambiguous option: %(option)s could match %(matches)s')
self.error(msg % args)
# if exactly one action matched, this segmentation is good,
# so return the parsed action
elif len(option_tuples) == 1:
option_tuple, = option_tuples
return option_tuple
# if exactly one action matched, this segmentation is good,
# so return the parsed action
elif len(option_tuples) == 1:
option_tuple, = option_tuples
return option_tuple
# if it was not found as an option, but it looks like a negative
# number, it was meant to be positional
@@ -2165,16 +2245,17 @@ class ArgumentParser(_AttributeHolder, _ActionsContainer):
# split at the '='
chars = self.prefix_chars
if option_string[0] in chars and option_string[1] in chars:
if '=' in option_string:
option_prefix, explicit_arg = option_string.split('=', 1)
else:
option_prefix = option_string
explicit_arg = None
for option_string in self._option_string_actions:
if option_string.startswith(option_prefix):
action = self._option_string_actions[option_string]
tup = action, option_string, explicit_arg
result.append(tup)
if self.allow_abbrev:
if '=' in option_string:
option_prefix, explicit_arg = option_string.split('=', 1)
else:
option_prefix = option_string
explicit_arg = None
for option_string in self._option_string_actions:
if option_string.startswith(option_prefix):
action = self._option_string_actions[option_string]
tup = action, option_string, explicit_arg
result.append(tup)
# single character options can be concatenated with their arguments
# but multiple character options always have to have their argument

File diff suppressed because it is too large.


@@ -117,7 +117,7 @@ class async_chat(asyncore.dispatcher):
data = self.recv(self.ac_in_buffer_size)
except BlockingIOError:
return
except OSError as why:
except OSError:
self.handle_error()
return


@@ -8,6 +8,7 @@ import sys
from .base_events import *
from .coroutines import *
from .events import *
from .exceptions import *
from .futures import *
from .locks import *
from .protocols import *
@@ -16,6 +17,7 @@ from .queues import *
from .streams import *
from .subprocess import *
from .tasks import *
from .threads import *
from .transports import *
# Exposed for _asynciomodule.c to implement now deprecated
@@ -25,6 +27,7 @@ from .tasks import _all_tasks_compat # NoQA
__all__ = (base_events.__all__ +
coroutines.__all__ +
events.__all__ +
exceptions.__all__ +
futures.__all__ +
locks.__all__ +
protocols.__all__ +
@@ -33,6 +36,7 @@ __all__ = (base_events.__all__ +
streams.__all__ +
subprocess.__all__ +
tasks.__all__ +
threads.__all__ +
transports.__all__)
if sys.platform == 'win32': # pragma: no cover


@@ -16,11 +16,12 @@ to modify the meaning of the API call itself.
import collections
import collections.abc
import concurrent.futures
import functools
import heapq
import itertools
import logging
import os
import socket
import stat
import subprocess
import threading
import time
@@ -37,15 +38,18 @@ except ImportError: # pragma: no cover
from . import constants
from . import coroutines
from . import events
from . import exceptions
from . import futures
from . import protocols
from . import sslproto
from . import staggered
from . import tasks
from . import transports
from . import trsock
from .log import logger
__all__ = 'BaseEventLoop',
__all__ = 'BaseEventLoop', 'Server',
# Minimum number of _scheduled timer handles before cleanup of
@@ -56,10 +60,15 @@ _MIN_SCHEDULED_TIMER_HANDLES = 100
# before cleanup of cancelled handles is performed.
_MIN_CANCELLED_TIMER_HANDLES_FRACTION = 0.5
# Exceptions which must not call the exception handler in fatal error
# methods (_fatal_error())
_FATAL_ERROR_IGNORE = (BrokenPipeError,
ConnectionResetError, ConnectionAbortedError)
_HAS_IPv6 = hasattr(socket, 'AF_INET6')
# Maximum timeout passed to select to avoid OS limitations
MAXIMUM_SELECT_TIMEOUT = 24 * 3600
# Used for deprecation and removal of `loop.create_datagram_endpoint()`'s
# *reuse_address* parameter
_unset = object()
def _format_handle(handle):
@@ -91,7 +100,7 @@ def _set_reuseport(sock):
'SO_REUSEPORT defined but not implemented.')
def _ipaddr_info(host, port, family, type, proto):
def _ipaddr_info(host, port, family, type, proto, flowinfo=0, scopeid=0):
# Try to skip getaddrinfo if "host" is already an IP. Users might have
# handled name resolution in their own code and pass in resolved IPs.
if not hasattr(socket, 'inet_pton'):
@@ -123,7 +132,7 @@ def _ipaddr_info(host, port, family, type, proto):
if family == socket.AF_UNSPEC:
afs = [socket.AF_INET]
if hasattr(socket, 'AF_INET6'):
if _HAS_IPv6:
afs.append(socket.AF_INET6)
else:
afs = [family]
@@ -139,7 +148,10 @@ def _ipaddr_info(host, port, family, type, proto):
try:
socket.inet_pton(af, host)
# The host has already been resolved.
return af, type, proto, '', (host, port)
if _HAS_IPv6 and af == socket.AF_INET6:
return af, type, proto, '', (host, port, flowinfo, scopeid)
else:
return af, type, proto, '', (host, port)
except OSError:
pass
@@ -147,16 +159,54 @@ def _ipaddr_info(host, port, family, type, proto):
return None
def _interleave_addrinfos(addrinfos, first_address_family_count=1):
"""Interleave list of addrinfo tuples by family."""
# Group addresses by family
addrinfos_by_family = collections.OrderedDict()
for addr in addrinfos:
family = addr[0]
if family not in addrinfos_by_family:
addrinfos_by_family[family] = []
addrinfos_by_family[family].append(addr)
addrinfos_lists = list(addrinfos_by_family.values())
reordered = []
if first_address_family_count > 1:
reordered.extend(addrinfos_lists[0][:first_address_family_count - 1])
del addrinfos_lists[0][:first_address_family_count - 1]
reordered.extend(
a for a in itertools.chain.from_iterable(
itertools.zip_longest(*addrinfos_lists)
) if a is not None)
return reordered
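The effect, shown with the module's private helper (import path may differ across versions) and schematic two-field tuples in place of real getaddrinfo() results:

    from asyncio.base_events import _interleave_addrinfos  # private helper

    V6, V4 = 'ipv6', 'ipv4'   # stand-ins for the address-family field
    infos = [(V6, 'a'), (V6, 'b'), (V4, 'c')]
    print(_interleave_addrinfos(infos))
    # [('ipv6', 'a'), ('ipv4', 'c'), ('ipv6', 'b')]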
def _run_until_complete_cb(fut):
if not fut.cancelled():
exc = fut.exception()
if isinstance(exc, BaseException) and not isinstance(exc, Exception):
if isinstance(exc, (SystemExit, KeyboardInterrupt)):
# Issue #22429: run_forever() already finished, no need to
# stop it.
return
futures._get_loop(fut).stop()
if hasattr(socket, 'TCP_NODELAY'):
def _set_nodelay(sock):
if (sock.family in {socket.AF_INET, socket.AF_INET6} and
sock.type == socket.SOCK_STREAM and
sock.proto == socket.IPPROTO_TCP):
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
else:
def _set_nodelay(sock):
pass
def _check_ssl_socket(sock):
if ssl is not None and isinstance(sock, ssl.SSLSocket):
raise TypeError("Socket cannot be of type SSLSocket")
class _SendfileFallbackProtocol(protocols.Protocol):
def __init__(self, transp):
if not isinstance(transp, transports._FlowControlMixin):
@@ -279,8 +329,8 @@ class Server(events.AbstractServer):
@property
def sockets(self):
if self._sockets is None:
return []
return list(self._sockets)
return ()
return tuple(trsock.TransportSocket(s) for s in self._sockets)
def close(self):
sockets = self._sockets
@@ -305,7 +355,7 @@ class Server(events.AbstractServer):
self._start_serving()
# Skip one loop iteration so that all 'loop.add_reader'
# go through.
await tasks.sleep(0, loop=self._loop)
await tasks.sleep(0)
async def serve_forever(self):
if self._serving_forever_fut is not None:
@@ -319,7 +369,7 @@ class Server(events.AbstractServer):
try:
await self._serving_forever_fut
except futures.CancelledError:
except exceptions.CancelledError:
try:
self.close()
await self.wait_closed()
@@ -365,6 +415,8 @@ class BaseEventLoop(events.AbstractEventLoop):
self._asyncgens = weakref.WeakSet()
# Set to True when `loop.shutdown_asyncgens` is called.
self._asyncgens_shutdown_called = False
# Set to True when `loop.shutdown_default_executor` is called.
self._executor_shutdown_called = False
def __repr__(self):
return (
@@ -376,18 +428,20 @@ class BaseEventLoop(events.AbstractEventLoop):
"""Create a Future object attached to the loop."""
return futures.Future(loop=self)
def create_task(self, coro):
def create_task(self, coro, *, name=None):
"""Schedule a coroutine object.
Return a task object.
"""
self._check_closed()
if self._task_factory is None:
task = tasks.Task(coro, loop=self)
task = tasks.Task(coro, loop=self, name=name)
if task._source_traceback:
del task._source_traceback[-1]
else:
task = self._task_factory(self, coro)
tasks._set_task_name(task, name)
return task
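The name surfaces in the task repr and in Task.get_name(); via the module-level helper (illustrative coroutine):

    import asyncio

    async def main():
        task = asyncio.create_task(asyncio.sleep(0), name='heartbeat')
        print(task.get_name())   # 'heartbeat'
        await task

    asyncio.run(main())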
def set_task_factory(self, factory):
@@ -460,13 +514,14 @@ class BaseEventLoop(events.AbstractEventLoop):
if self._closed:
raise RuntimeError('Event loop is closed')
def _check_default_executor(self):
if self._executor_shutdown_called:
raise RuntimeError('Executor shutdown has been called')
def _asyncgen_finalizer_hook(self, agen):
self._asyncgens.discard(agen)
if not self.is_closed():
self.create_task(agen.aclose())
# Wake up the loop if the finalizer was called from
# a different thread.
self._write_to_self()
self.call_soon_threadsafe(self.create_task, agen.aclose())
def _asyncgen_firstiter_hook(self, agen):
if self._asyncgens_shutdown_called:
@@ -489,7 +544,7 @@ class BaseEventLoop(events.AbstractEventLoop):
closing_agens = list(self._asyncgens)
self._asyncgens.clear()
results = await tasks.gather(
results = await tasks._gather(
*[ag.aclose() for ag in closing_agens],
return_exceptions=True,
loop=self)
@@ -503,14 +558,37 @@ class BaseEventLoop(events.AbstractEventLoop):
'asyncgen': agen
})
def run_forever(self):
"""Run until stop() is called."""
self._check_closed()
async def shutdown_default_executor(self):
"""Schedule the shutdown of the default executor."""
self._executor_shutdown_called = True
if self._default_executor is None:
return
future = self.create_future()
thread = threading.Thread(target=self._do_shutdown, args=(future,))
thread.start()
try:
await future
finally:
thread.join()
def _do_shutdown(self, future):
try:
self._default_executor.shutdown(wait=True)
self.call_soon_threadsafe(future.set_result, None)
except Exception as ex:
self.call_soon_threadsafe(future.set_exception, ex)
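asyncio.run() invokes this hook automatically from Python 3.9 on; when driving a loop by hand, the shutdown sequence looks roughly like the following (assuming a main() coroutine):

    loop = asyncio.new_event_loop()
    try:
        loop.run_until_complete(main())
    finally:
        loop.run_until_complete(loop.shutdown_asyncgens())
        loop.run_until_complete(loop.shutdown_default_executor())
        loop.close()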
def _check_running(self):
if self.is_running():
raise RuntimeError('This event loop is already running')
if events._get_running_loop() is not None:
raise RuntimeError(
'Cannot run the event loop while another loop is running')
def run_forever(self):
"""Run until stop() is called."""
self._check_closed()
self._check_running()
self._set_coroutine_origin_tracking(self._debug)
self._thread_id = threading.get_ident()
@@ -542,6 +620,7 @@ class BaseEventLoop(events.AbstractEventLoop):
Return the Future's result, or raise its exception.
"""
self._check_closed()
self._check_running()
new_task = not futures.isfuture(future)
future = tasks.ensure_future(future, loop=self)
@@ -592,6 +671,7 @@ class BaseEventLoop(events.AbstractEventLoop):
self._closed = True
self._ready.clear()
self._scheduled.clear()
self._executor_shutdown_called = True
executor = self._default_executor
if executor is not None:
self._default_executor = None
@@ -601,10 +681,9 @@ class BaseEventLoop(events.AbstractEventLoop):
"""Returns True if the event loop was closed."""
return self._closed
def __del__(self):
def __del__(self, _warn=warnings.warn):
if not self.is_closed():
warnings.warn(f"unclosed event loop {self!r}", ResourceWarning,
source=self)
_warn(f"unclosed event loop {self!r}", ResourceWarning, source=self)
if not self.is_running():
self.close()
@@ -729,13 +808,23 @@ class BaseEventLoop(events.AbstractEventLoop):
self._check_callback(func, 'run_in_executor')
if executor is None:
executor = self._default_executor
# Only check when the default executor is being used
self._check_default_executor()
if executor is None:
executor = concurrent.futures.ThreadPoolExecutor()
executor = concurrent.futures.ThreadPoolExecutor(
thread_name_prefix='asyncio'
)
self._default_executor = executor
return futures.wrap_future(
executor.submit(func, *args), loop=self)
def set_default_executor(self, executor):
if not isinstance(executor, concurrent.futures.ThreadPoolExecutor):
warnings.warn(
'Using the default executor that is not an instance of '
'ThreadPoolExecutor is deprecated and will be prohibited '
'in Python 3.9',
DeprecationWarning, 2)
self._default_executor = executor
def _getaddrinfo_debug(self, host, port, family, type, proto, flags):
@@ -780,11 +869,12 @@ class BaseEventLoop(events.AbstractEventLoop):
*, fallback=True):
if self._debug and sock.gettimeout() != 0:
raise ValueError("the socket must be non-blocking")
_check_ssl_socket(sock)
self._check_sendfile_params(sock, file, offset, count)
try:
return await self._sock_sendfile_native(sock, file,
offset, count)
except events.SendfileNotAvailableError as exc:
except exceptions.SendfileNotAvailableError as exc:
if not fallback:
raise
return await self._sock_sendfile_fallback(sock, file,
@@ -793,9 +883,9 @@ class BaseEventLoop(events.AbstractEventLoop):
async def _sock_sendfile_native(self, sock, file, offset, count):
# NB: sendfile syscall is not supported for SSL sockets and
# non-mmap files even if sendfile is supported by OS
raise events.SendfileNotAvailableError(
raise exceptions.SendfileNotAvailableError(
f"syscall sendfile is not available for socket {sock!r} "
"and file {file!r} combination")
f"and file {file!r} combination")
async def _sock_sendfile_fallback(self, sock, file, offset, count):
if offset:
@@ -816,7 +906,7 @@ class BaseEventLoop(events.AbstractEventLoop):
read = await self.run_in_executor(None, file.readinto, view)
if not read:
break # EOF
await self.sock_sendall(sock, view)
await self.sock_sendall(sock, view[:read])
total_sent += read
return total_sent
finally:
@@ -844,12 +934,49 @@ class BaseEventLoop(events.AbstractEventLoop):
"offset must be a non-negative integer (got {!r})".format(
offset))
async def _connect_sock(self, exceptions, addr_info, local_addr_infos=None):
"""Create, bind and connect one socket."""
my_exceptions = []
exceptions.append(my_exceptions)
family, type_, proto, _, address = addr_info
sock = None
try:
sock = socket.socket(family=family, type=type_, proto=proto)
sock.setblocking(False)
if local_addr_infos is not None:
for _, _, _, _, laddr in local_addr_infos:
try:
sock.bind(laddr)
break
except OSError as exc:
msg = (
f'error while attempting to bind on '
f'address {laddr!r}: '
f'{exc.strerror.lower()}'
)
exc = OSError(exc.errno, msg)
my_exceptions.append(exc)
else: # all bind attempts failed
raise my_exceptions.pop()
await self.sock_connect(sock, address)
return sock
except OSError as exc:
my_exceptions.append(exc)
if sock is not None:
sock.close()
raise
except:
if sock is not None:
sock.close()
raise
async def create_connection(
self, protocol_factory, host=None, port=None,
*, ssl=None, family=0,
proto=0, flags=0, sock=None,
local_addr=None, server_hostname=None,
ssl_handshake_timeout=None):
ssl_handshake_timeout=None,
happy_eyeballs_delay=None, interleave=None):
"""Connect to a TCP server.
Create a streaming transport connection to a given Internet host and
@@ -884,6 +1011,13 @@ class BaseEventLoop(events.AbstractEventLoop):
raise ValueError(
'ssl_handshake_timeout is only meaningful with ssl')
if sock is not None:
_check_ssl_socket(sock)
if happy_eyeballs_delay is not None and interleave is None:
# If using happy eyeballs, default to interleave addresses by family
interleave = 1
if host is not None or port is not None:
if sock is not None:
raise ValueError(
@@ -902,43 +1036,31 @@ class BaseEventLoop(events.AbstractEventLoop):
flags=flags, loop=self)
if not laddr_infos:
raise OSError('getaddrinfo() returned empty list')
else:
laddr_infos = None
if interleave:
infos = _interleave_addrinfos(infos, interleave)
exceptions = []
for family, type, proto, cname, address in infos:
try:
sock = socket.socket(family=family, type=type, proto=proto)
sock.setblocking(False)
if local_addr is not None:
for _, _, _, _, laddr in laddr_infos:
try:
sock.bind(laddr)
break
except OSError as exc:
msg = (
f'error while attempting to bind on '
f'address {laddr!r}: '
f'{exc.strerror.lower()}'
)
exc = OSError(exc.errno, msg)
exceptions.append(exc)
else:
sock.close()
sock = None
continue
if self._debug:
logger.debug("connect %r to %r", sock, address)
await self.sock_connect(sock, address)
except OSError as exc:
if sock is not None:
sock.close()
exceptions.append(exc)
except:
if sock is not None:
sock.close()
raise
else:
break
else:
if happy_eyeballs_delay is None:
# not using happy eyeballs
for addrinfo in infos:
try:
sock = await self._connect_sock(
exceptions, addrinfo, laddr_infos)
break
except OSError:
continue
else: # using happy eyeballs
sock, _, _ = await staggered.staggered_race(
(functools.partial(self._connect_sock,
exceptions, addrinfo, laddr_infos)
for addrinfo in infos),
happy_eyeballs_delay, loop=self)
if sock is None:
exceptions = [exc for sub in exceptions for exc in sub]
if len(exceptions) == 1:
raise exceptions[0]
else:
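From the caller's side a single keyword enables the staggered (RFC 8305 "happy eyeballs") attempts; asyncio.open_connection() forwards it to create_connection() (host and delay are illustrative):

    async def fetch():
        reader, writer = await asyncio.open_connection(
            'example.com', 443, ssl=True, happy_eyeballs_delay=0.25)
        writer.close()
        await writer.wait_closed()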
@@ -1037,7 +1159,7 @@ class BaseEventLoop(events.AbstractEventLoop):
try:
return await self._sendfile_native(transport, file,
offset, count)
except events.SendfileNotAvailableError as exc:
except exceptions.SendfileNotAvailableError as exc:
if not fallback:
raise
@@ -1050,7 +1172,7 @@ class BaseEventLoop(events.AbstractEventLoop):
offset, count)
async def _sendfile_native(self, transp, file, offset, count):
raise events.SendfileNotAvailableError(
raise exceptions.SendfileNotAvailableError(
"sendfile syscall is not supported")
async def _sendfile_fallback(self, transp, file, offset, count):
@@ -1067,11 +1189,11 @@ class BaseEventLoop(events.AbstractEventLoop):
if blocksize <= 0:
return total_sent
view = memoryview(buf)[:blocksize]
read = file.readinto(view)
read = await self.run_in_executor(None, file.readinto, view)
if not read:
return total_sent # EOF
await proto.drain()
transp.write(view)
transp.write(view[:read])
total_sent += read
finally:
if total_sent > 0 and hasattr(file, 'seek'):
@@ -1116,7 +1238,7 @@ class BaseEventLoop(events.AbstractEventLoop):
try:
await waiter
except Exception:
except BaseException:
transport.close()
conmade_cb.cancel()
resume_cb.cancel()
@@ -1127,7 +1249,7 @@ class BaseEventLoop(events.AbstractEventLoop):
async def create_datagram_endpoint(self, protocol_factory,
local_addr=None, remote_addr=None, *,
family=0, proto=0, flags=0,
reuse_address=None, reuse_port=None,
reuse_address=_unset, reuse_port=None,
allow_broadcast=None, sock=None):
"""Create datagram connection."""
if sock is not None:
@@ -1136,7 +1258,7 @@ class BaseEventLoop(events.AbstractEventLoop):
f'A UDP Socket was expected, got {sock!r}')
if (local_addr or remote_addr or
family or proto or flags or
reuse_address or reuse_port or allow_broadcast):
reuse_port or allow_broadcast):
# show the problematic kwargs in exception msg
opts = dict(local_addr=local_addr, remote_addr=remote_addr,
family=family, proto=proto, flags=flags,
@@ -1157,15 +1279,28 @@ class BaseEventLoop(events.AbstractEventLoop):
for addr in (local_addr, remote_addr):
if addr is not None and not isinstance(addr, str):
raise TypeError('string is expected')
if local_addr and local_addr[0] not in (0, '\x00'):
try:
if stat.S_ISSOCK(os.stat(local_addr).st_mode):
os.remove(local_addr)
except FileNotFoundError:
pass
except OSError as err:
# Directory may have permissions only to create socket.
logger.error('Unable to check or remove stale UNIX '
'socket %r: %r',
local_addr, err)
addr_pairs_info = (((family, proto),
(local_addr, remote_addr)), )
else:
# join address by (family, protocol)
addr_infos = collections.OrderedDict()
addr_infos = {} # Using order preserving dict
for idx, addr in ((0, local_addr), (1, remote_addr)):
if addr is not None:
assert isinstance(addr, tuple) and len(addr) == 2, (
'2-tuple is expected')
if not (isinstance(addr, tuple) and len(addr) == 2):
raise TypeError('2-tuple is expected')
infos = await self._ensure_resolved(
addr, family=family, type=socket.SOCK_DGRAM,
@@ -1190,8 +1325,18 @@ class BaseEventLoop(events.AbstractEventLoop):
exceptions = []
if reuse_address is None:
reuse_address = os.name == 'posix' and sys.platform != 'cygwin'
# bpo-37228
if reuse_address is not _unset:
if reuse_address:
raise ValueError("Passing `reuse_address=True` is no "
"longer supported, as the usage of "
"SO_REUSEPORT in UDP poses a significant "
"security concern.")
else:
warnings.warn("The *reuse_address* parameter has been "
"deprecated as of 3.5.10 and is scheduled "
"for removal in 3.11.", DeprecationWarning,
stacklevel=2)
for ((family, proto),
(local_address, remote_address)) in addr_pairs_info:
@@ -1200,9 +1345,6 @@ class BaseEventLoop(events.AbstractEventLoop):
try:
sock = socket.socket(
family=family, type=socket.SOCK_DGRAM, proto=proto)
if reuse_address:
sock.setsockopt(
socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if reuse_port:
_set_reuseport(sock)
if allow_broadcast:
@@ -1213,7 +1355,8 @@ class BaseEventLoop(events.AbstractEventLoop):
if local_addr:
sock.bind(local_address)
if remote_addr:
await self.sock_connect(sock, remote_address)
if not allow_broadcast:
await self.sock_connect(sock, remote_address)
r_addr = remote_address
except OSError as exc:
if sock is not None:
@@ -1254,7 +1397,7 @@ class BaseEventLoop(events.AbstractEventLoop):
family=0, type=socket.SOCK_STREAM,
proto=0, flags=0, loop):
host, port = address[:2]
info = _ipaddr_info(host, port, family, type, proto)
info = _ipaddr_info(host, port, family, type, proto, *address[2:])
if info is not None:
# "host" is already a resolved IP.
return [info]
@@ -1304,12 +1447,14 @@ class BaseEventLoop(events.AbstractEventLoop):
raise ValueError(
'ssl_handshake_timeout is only meaningful with ssl')
if sock is not None:
_check_ssl_socket(sock)
if host is not None or port is not None:
if sock is not None:
raise ValueError(
'host/port and sock can not be specified at the same time')
AF_INET6 = getattr(socket, 'AF_INET6', 0)
if reuse_address is None:
reuse_address = os.name == 'posix' and sys.platform != 'cygwin'
sockets = []
@@ -1324,7 +1469,7 @@ class BaseEventLoop(events.AbstractEventLoop):
fs = [self._create_server_getaddrinfo(host, port, family=family,
flags=flags)
for host in hosts]
infos = await tasks.gather(*fs, loop=self)
infos = await tasks._gather(*fs, loop=self)
infos = set(itertools.chain.from_iterable(infos))
completed = False
@@ -1349,7 +1494,9 @@ class BaseEventLoop(events.AbstractEventLoop):
# Disable IPv4/IPv6 dual stack support (enabled by
# default on Linux) which makes a single socket
# listen on both address families.
if af == AF_INET6 and hasattr(socket, 'IPPROTO_IPV6'):
if (_HAS_IPv6 and
af == socket.AF_INET6 and
hasattr(socket, 'IPPROTO_IPV6')):
sock.setsockopt(socket.IPPROTO_IPV6,
socket.IPV6_V6ONLY,
True)
@@ -1380,7 +1527,7 @@ class BaseEventLoop(events.AbstractEventLoop):
server._start_serving()
# Skip one loop iteration so that all 'loop.add_reader'
# go through.
await tasks.sleep(0, loop=self)
await tasks.sleep(0)
if self._debug:
logger.info("%r is serving", server)
@@ -1405,6 +1552,9 @@ class BaseEventLoop(events.AbstractEventLoop):
raise ValueError(
'ssl_handshake_timeout is only meaningful with ssl')
if sock is not None:
_check_ssl_socket(sock)
transport, protocol = await self._create_connection_transport(
sock, protocol_factory, ssl, '', server_side=True,
ssl_handshake_timeout=ssl_handshake_timeout)
@@ -1466,6 +1616,7 @@ class BaseEventLoop(events.AbstractEventLoop):
stderr=subprocess.PIPE,
universal_newlines=False,
shell=True, bufsize=0,
encoding=None, errors=None, text=None,
**kwargs):
if not isinstance(cmd, (bytes, str)):
raise ValueError("cmd must be a string")
@@ -1475,6 +1626,13 @@ class BaseEventLoop(events.AbstractEventLoop):
raise ValueError("shell must be True")
if bufsize != 0:
raise ValueError("bufsize must be 0")
if text:
raise ValueError("text must be False")
if encoding is not None:
raise ValueError("encoding must be None")
if errors is not None:
raise ValueError("errors must be None")
protocol = protocol_factory()
debug_log = None
if self._debug:
@@ -1491,19 +1649,23 @@ class BaseEventLoop(events.AbstractEventLoop):
async def subprocess_exec(self, protocol_factory, program, *args,
stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, universal_newlines=False,
shell=False, bufsize=0, **kwargs):
shell=False, bufsize=0,
encoding=None, errors=None, text=None,
**kwargs):
if universal_newlines:
raise ValueError("universal_newlines must be False")
if shell:
raise ValueError("shell must be False")
if bufsize != 0:
raise ValueError("bufsize must be 0")
if text:
raise ValueError("text must be False")
if encoding is not None:
raise ValueError("encoding must be None")
if errors is not None:
raise ValueError("errors must be None")
popen_args = (program,) + args
for arg in popen_args:
if not isinstance(arg, (str, bytes)):
raise TypeError(
f"program arguments must be a bytes or text string, "
f"not {type(arg).__name__}")
protocol = protocol_factory()
debug_log = None
if self._debug:
@@ -1615,7 +1777,9 @@ class BaseEventLoop(events.AbstractEventLoop):
if self._exception_handler is None:
try:
self.default_exception_handler(context)
except Exception:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException:
# Second protection layer for unexpected errors
# in the default implementation, as well as for subclassed
# event loops with overloaded "default_exception_handler".
@@ -1624,7 +1788,9 @@ class BaseEventLoop(events.AbstractEventLoop):
else:
try:
self._exception_handler(self, context)
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
# Exception in the user set custom exception handler.
try:
# Let's try default handler.
@@ -1633,7 +1799,9 @@ class BaseEventLoop(events.AbstractEventLoop):
'exception': exc,
'context': context,
})
except Exception:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException:
# Guard 'default_exception_handler' in case it is
# overloaded.
logger.error('Exception in default exception handler '
@@ -1696,30 +1864,9 @@ class BaseEventLoop(events.AbstractEventLoop):
elif self._scheduled:
# Compute the desired timeout.
when = self._scheduled[0]._when
timeout = max(0, when - self.time())
timeout = min(max(0, when - self.time()), MAXIMUM_SELECT_TIMEOUT)
if self._debug and timeout != 0:
t0 = self.time()
event_list = self._selector.select(timeout)
dt = self.time() - t0
if dt >= 1.0:
level = logging.INFO
else:
level = logging.DEBUG
nevent = len(event_list)
if timeout is None:
logger.log(level, 'poll took %.3f ms: %s events',
dt * 1e3, nevent)
elif nevent:
logger.log(level,
'poll %.3f ms took %.3f ms: %s events',
timeout * 1e3, dt * 1e3, nevent)
elif dt >= 1.0:
logger.log(level,
'poll %.3f ms took %.3f ms: timeout',
timeout * 1e3, dt * 1e3)
else:
event_list = self._selector.select(timeout)
event_list = self._selector.select(timeout)
self._process_events(event_list)
# Handle 'later' callbacks that are ready.


@@ -1,19 +1,10 @@
__all__ = ()
import concurrent.futures._base
import reprlib
from _thread import get_ident
from . import format_helpers
Error = concurrent.futures._base.Error
CancelledError = concurrent.futures.CancelledError
TimeoutError = concurrent.futures.TimeoutError
class InvalidStateError(Error):
"""The operation is not allowed in this state."""
# States for Future.
_PENDING = 'PENDING'
_CANCELLED = 'CANCELLED'
@@ -51,6 +42,16 @@ def _format_callbacks(cb):
return f'cb=[{cb}]'
# bpo-42183: _repr_running is needed for repr protection
# when a Future or Task result contains itself directly or indirectly.
# The logic is borrowed from @reprlib.recursive_repr decorator.
# Unfortunately, the direct decorator usage is impossible because of
# AttributeError: '_asyncio.Task' object has no attribute '__module__' error.
#
# After fixing this thing we can return to the decorator based approach.
_repr_running = set()
def _future_repr_info(future):
# (Future) -> str
"""helper function for Future.__repr__"""
@@ -59,9 +60,17 @@ def _future_repr_info(future):
if future._exception is not None:
info.append(f'exception={future._exception!r}')
else:
# use reprlib to limit the length of the output, especially
# for very long strings
result = reprlib.repr(future._result)
key = id(future), get_ident()
if key in _repr_running:
result = '...'
else:
_repr_running.add(key)
try:
# use reprlib to limit the length of the output, especially
# for very long strings
result = reprlib.repr(future._result)
finally:
_repr_running.discard(key)
info.append(f'result={result}')
if future._callbacks:
info.append(_format_callbacks(future._callbacks))


@@ -120,10 +120,9 @@ class BaseSubprocessTransport(transports.SubprocessTransport):
# Don't clear the _proc reference yet: _post_init() may still run
def __del__(self):
def __del__(self, _warn=warnings.warn):
if not self._closed:
warnings.warn(f"unclosed transport {self!r}", ResourceWarning,
source=self)
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
self.close()
def get_pid(self):
@@ -183,7 +182,9 @@ class BaseSubprocessTransport(transports.SubprocessTransport):
for callback, data in self._pending_calls:
loop.call_soon(callback, *data)
self._pending_calls = None
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
if waiter is not None and not waiter.cancelled():
waiter.set_exception(exc)
else:


@@ -12,21 +12,30 @@ def _task_repr_info(task):
# replace status
info[0] = 'cancelling'
info.insert(1, 'name=%r' % task.get_name())
coro = coroutines._format_coroutine(task._coro)
info.insert(1, f'coro=<{coro}>')
info.insert(2, f'coro=<{coro}>')
if task._fut_waiter is not None:
info.insert(2, f'wait_for={task._fut_waiter!r}')
info.insert(3, f'wait_for={task._fut_waiter!r}')
return info
def _task_get_stack(task, limit):
frames = []
try:
# 'async def' coroutines
if hasattr(task._coro, 'cr_frame'):
# case 1: 'async def' coroutines
f = task._coro.cr_frame
except AttributeError:
elif hasattr(task._coro, 'gi_frame'):
# case 2: legacy coroutines
f = task._coro.gi_frame
elif hasattr(task._coro, 'ag_frame'):
# case 3: async generators
f = task._coro.ag_frame
else:
# case 4: unknown objects
f = None
if f is not None:
while f is not None:
if limit is not None:


@@ -7,6 +7,7 @@ import os
import sys
import traceback
import types
import warnings
from . import base_futures
from . import constants
@@ -107,6 +108,9 @@ def coroutine(func):
If the coroutine is not yielded from before it is destroyed,
an error message is logged.
"""
warnings.warn('"@coroutine" decorator is deprecated since Python 3.8, use "async def" instead',
DeprecationWarning,
stacklevel=2)
if inspect.iscoroutinefunction(func):
# In Python 3.5 that's all we need to do for coroutines
# defined with "async def".


@@ -3,7 +3,7 @@
__all__ = (
'AbstractEventLoopPolicy',
'AbstractEventLoop', 'AbstractServer',
'Handle', 'TimerHandle', 'SendfileNotAvailableError',
'Handle', 'TimerHandle',
'get_event_loop_policy', 'set_event_loop_policy',
'get_event_loop', 'set_event_loop', 'new_event_loop',
'get_child_watcher', 'set_child_watcher',
@@ -21,14 +21,6 @@ import threading
from . import format_helpers
class SendfileNotAvailableError(RuntimeError):
"""Sendfile syscall is not available.
Raised if OS does not support sendfile syscall for given socket or
file type.
"""
class Handle:
"""Object returned by callback registration methods."""
@@ -86,7 +78,9 @@ class Handle:
def _run(self):
try:
self._context.run(self._callback, *self._args)
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
cb = format_helpers._format_callback_source(
self._callback, self._args)
msg = f'Exception in callback {cb}'
@@ -124,20 +118,24 @@ class TimerHandle(Handle):
return hash(self._when)
def __lt__(self, other):
return self._when < other._when
if isinstance(other, TimerHandle):
return self._when < other._when
return NotImplemented
def __le__(self, other):
if self._when < other._when:
return True
return self.__eq__(other)
if isinstance(other, TimerHandle):
return self._when < other._when or self.__eq__(other)
return NotImplemented
def __gt__(self, other):
return self._when > other._when
if isinstance(other, TimerHandle):
return self._when > other._when
return NotImplemented
def __ge__(self, other):
if self._when > other._when:
return True
return self.__eq__(other)
if isinstance(other, TimerHandle):
return self._when > other._when or self.__eq__(other)
return NotImplemented
def __eq__(self, other):
if isinstance(other, TimerHandle):
@@ -147,10 +145,6 @@ class TimerHandle(Handle):
self._cancelled == other._cancelled)
return NotImplemented
def __ne__(self, other):
equal = self.__eq__(other)
return NotImplemented if equal is NotImplemented else not equal
def cancel(self):
if not self._cancelled:
self._loop._timer_handle_cancelled(self)
@@ -254,19 +248,23 @@ class AbstractEventLoop:
"""Shutdown all active asynchronous generators."""
raise NotImplementedError
async def shutdown_default_executor(self):
"""Schedule the shutdown of the default executor."""
raise NotImplementedError
# Methods scheduling callbacks. All these return Handles.
def _timer_handle_cancelled(self, handle):
"""Notification that a TimerHandle has been cancelled."""
raise NotImplementedError
def call_soon(self, callback, *args):
return self.call_later(0, callback, *args)
def call_soon(self, callback, *args, context=None):
return self.call_later(0, callback, *args, context=context)
def call_later(self, delay, callback, *args):
def call_later(self, delay, callback, *args, context=None):
raise NotImplementedError
def call_at(self, when, callback, *args):
def call_at(self, when, callback, *args, context=None):
raise NotImplementedError
def time(self):
@@ -277,15 +275,15 @@ class AbstractEventLoop:
# Method scheduling a coroutine object: create a task.
def create_task(self, coro):
def create_task(self, coro, *, name=None):
raise NotImplementedError
# Methods for interacting with threads.
def call_soon_threadsafe(self, callback, *args):
def call_soon_threadsafe(self, callback, *args, context=None):
raise NotImplementedError
async def run_in_executor(self, executor, func, *args):
def run_in_executor(self, executor, func, *args):
raise NotImplementedError
def set_default_executor(self, executor):
@@ -305,7 +303,8 @@ class AbstractEventLoop:
*, ssl=None, family=0, proto=0,
flags=0, sock=None, local_addr=None,
server_hostname=None,
ssl_handshake_timeout=None):
ssl_handshake_timeout=None,
happy_eyeballs_delay=None, interleave=None):
raise NotImplementedError
async def create_server(
@@ -397,7 +396,7 @@ class AbstractEventLoop:
The return value is a Server object, which can be used to stop
the service.
path is a str, representing a file systsem path to bind the
path is a str, representing a file system path to bind the
server socket to.
sock can optionally be specified in order to use a preexisting
@@ -466,7 +465,7 @@ class AbstractEventLoop:
# The reason to accept file-like object instead of just file descriptor
# is: we need to own pipe and close it at transport finishing
# Can got complicated errors if pass f.fileno(),
# close fd in pipe transport then close f and vise versa.
# close fd in pipe transport then close f and vice versa.
raise NotImplementedError
async def connect_write_pipe(self, protocol_factory, pipe):
@@ -479,7 +478,7 @@ class AbstractEventLoop:
# The reason to accept file-like object instead of just file descriptor
# is: we need to own pipe and close it at transport finishing
# Can got complicated errors if pass f.fileno(),
# close fd in pipe transport then close f and vise versa.
# close fd in pipe transport then close f and vice versa.
raise NotImplementedError
async def subprocess_shell(self, protocol_factory, cmd, *,
@@ -630,13 +629,13 @@ class BaseDefaultEventLoopPolicy(AbstractEventLoopPolicy):
self._local = self._Local()
def get_event_loop(self):
"""Get the event loop.
"""Get the event loop for the current context.
This may be None or an instance of EventLoop.
Returns an instance of EventLoop or raises an exception.
"""
if (self._local._loop is None and
not self._local._set_called and
isinstance(threading.current_thread(), threading._MainThread)):
threading.current_thread() is threading.main_thread()):
self.set_event_loop(self.new_event_loop())
if self._local._loop is None:


@@ -1,7 +1,6 @@
"""A Future class similar to the one in PEP 3148."""
__all__ = (
'CancelledError', 'TimeoutError', 'InvalidStateError',
'Future', 'wrap_future', 'isfuture',
)
@@ -9,15 +8,14 @@ import concurrent.futures
import contextvars
import logging
import sys
from types import GenericAlias
from . import base_futures
from . import events
from . import exceptions
from . import format_helpers
CancelledError = base_futures.CancelledError
InvalidStateError = base_futures.InvalidStateError
TimeoutError = base_futures.TimeoutError
isfuture = base_futures.isfuture
@@ -54,6 +52,9 @@ class Future:
_exception = None
_loop = None
_source_traceback = None
_cancel_message = None
# A saved CancelledError for later chaining as an exception context.
_cancelled_exc = None
# This field is used for a dual purpose:
# - Its presence is a marker to declare that a class implements
@@ -106,6 +107,8 @@ class Future:
context['source_traceback'] = self._source_traceback
self._loop.call_exception_handler(context)
__class_getitem__ = classmethod(GenericAlias)
@property
def _log_traceback(self):
return self.__log_traceback
@@ -118,9 +121,27 @@ class Future:
def get_loop(self):
"""Return the event loop the Future is bound to."""
return self._loop
loop = self._loop
if loop is None:
raise RuntimeError("Future object is not initialized.")
return loop
def cancel(self):
def _make_cancelled_error(self):
"""Create the CancelledError to raise if the Future is cancelled.
This should only be called once when handling a cancellation since
it erases the saved context exception value.
"""
if self._cancel_message is None:
exc = exceptions.CancelledError()
else:
exc = exceptions.CancelledError(self._cancel_message)
exc.__context__ = self._cancelled_exc
# Remove the reference since we don't need this anymore.
self._cancelled_exc = None
return exc
def cancel(self, msg=None):
"""Cancel the future and schedule callbacks.
If the future is already done or cancelled, return False. Otherwise,
@@ -131,6 +152,7 @@ class Future:
if self._state != _PENDING:
return False
self._state = _CANCELLED
self._cancel_message = msg
self.__schedule_callbacks()
return True
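The message is carried by the CancelledError raised when the cancelled future is later awaited, which helps pinpoint who requested the cancellation (the msg parameter dates from 3.9; an illustrative round trip):

    import asyncio

    async def demo():
        fut = asyncio.get_running_loop().create_future()
        fut.cancel(msg='timed out waiting for backend')
        try:
            await fut
        except asyncio.CancelledError as exc:
            print(exc)   # timed out waiting for backend

    asyncio.run(demo())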
@@ -170,9 +192,10 @@ class Future:
the future is done and has an exception set, this exception is raised.
"""
if self._state == _CANCELLED:
raise CancelledError
exc = self._make_cancelled_error()
raise exc
if self._state != _FINISHED:
raise InvalidStateError('Result is not ready.')
raise exceptions.InvalidStateError('Result is not ready.')
self.__log_traceback = False
if self._exception is not None:
raise self._exception
@@ -187,9 +210,10 @@ class Future:
InvalidStateError.
"""
if self._state == _CANCELLED:
raise CancelledError
exc = self._make_cancelled_error()
raise exc
if self._state != _FINISHED:
raise InvalidStateError('Exception is not set.')
raise exceptions.InvalidStateError('Exception is not set.')
self.__log_traceback = False
return self._exception
@@ -231,7 +255,7 @@ class Future:
InvalidStateError.
"""
if self._state != _PENDING:
raise InvalidStateError('{}: {!r}'.format(self._state, self))
raise exceptions.InvalidStateError(f'{self._state}: {self!r}')
self._result = result
self._state = _FINISHED
self.__schedule_callbacks()
@@ -243,7 +267,7 @@ class Future:
InvalidStateError.
"""
if self._state != _PENDING:
raise InvalidStateError('{}: {!r}'.format(self._state, self))
raise exceptions.InvalidStateError(f'{self._state}: {self!r}')
if isinstance(exception, type):
exception = exception()
if type(exception) is StopIteration:
@@ -288,6 +312,18 @@ def _set_result_unless_cancelled(fut, result):
fut.set_result(result)
def _convert_future_exc(exc):
exc_class = type(exc)
if exc_class is concurrent.futures.CancelledError:
return exceptions.CancelledError(*exc.args)
elif exc_class is concurrent.futures.TimeoutError:
return exceptions.TimeoutError(*exc.args)
elif exc_class is concurrent.futures.InvalidStateError:
return exceptions.InvalidStateError(*exc.args)
else:
return exc
def _set_concurrent_future_state(concurrent, source):
"""Copy state from a future to a concurrent.futures.Future."""
assert source.done()
@@ -297,7 +333,7 @@ def _set_concurrent_future_state(concurrent, source):
return
exception = source.exception()
if exception is not None:
concurrent.set_exception(exception)
concurrent.set_exception(_convert_future_exc(exception))
else:
result = source.result()
concurrent.set_result(result)
@@ -317,7 +353,7 @@ def _copy_future_state(source, dest):
else:
exception = source.exception()
if exception is not None:
dest.set_exception(exception)
dest.set_exception(_convert_future_exc(exception))
else:
result = source.result()
dest.set_result(result)


@@ -6,88 +6,10 @@ import collections
import warnings
from . import events
from . import futures
from .coroutines import coroutine
class _ContextManager:
"""Context manager.
This enables the following idiom for acquiring and releasing a
lock around a block:
with (yield from lock):
<block>
while failing loudly when accidentally using:
with lock:
<block>
Deprecated, use 'async with' statement:
async with lock:
<block>
"""
def __init__(self, lock):
self._lock = lock
def __enter__(self):
# We have no use for the "as ..." clause in the with
# statement for locks.
return None
def __exit__(self, *args):
try:
self._lock.release()
finally:
self._lock = None # Crudely prevent reuse.
from . import exceptions
class _ContextManagerMixin:
def __enter__(self):
raise RuntimeError(
'"yield from" should be used as context manager expression')
def __exit__(self, *args):
# This must exist because __enter__ exists, even though that
# always raises; that's how the with-statement works.
pass
@coroutine
def __iter__(self):
# This is not a coroutine. It is meant to enable the idiom:
#
# with (yield from lock):
# <block>
#
# as an alternative to:
#
# yield from lock.acquire()
# try:
# <block>
# finally:
# lock.release()
# Deprecated, use 'async with' statement:
# async with lock:
# <block>
warnings.warn("'with (yield from lock)' is deprecated "
"use 'async with lock' instead",
DeprecationWarning, stacklevel=2)
yield from self.acquire()
return _ContextManager(self)
async def __acquire_ctx(self):
await self.acquire()
return _ContextManager(self)
def __await__(self):
warnings.warn("'with await lock' is deprecated "
"use 'async with lock' instead",
DeprecationWarning, stacklevel=2)
# To make "with await lock" work.
return self.__acquire_ctx().__await__()
async def __aenter__(self):
await self.acquire()
# We have no use for the "as ..." clause in the with
@ -153,12 +75,15 @@ class Lock(_ContextManagerMixin):
"""
def __init__(self, *, loop=None):
self._waiters = collections.deque()
self._waiters = None
self._locked = False
if loop is not None:
self._loop = loop
else:
if loop is None:
self._loop = events.get_event_loop()
else:
self._loop = loop
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
def __repr__(self):
res = super().__repr__()
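With the _ContextManager shim removed, 'async with' is the only supported way to hold a lock; a minimal usage sketch (names illustrative):

import asyncio

async def worker(lock, n):
    async with lock:          # replaces the removed 'with (yield from lock)'
        print(f"worker {n} holds the lock")
        await asyncio.sleep(0)

async def main():
    lock = asyncio.Lock()     # create inside the running loop to bind it
    await asyncio.gather(worker(lock, 1), worker(lock, 2))

asyncio.run(main())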
@ -177,10 +102,13 @@ class Lock(_ContextManagerMixin):
This method blocks until the lock is unlocked, then sets it to
locked and returns True.
"""
if not self._locked and all(w.cancelled() for w in self._waiters):
if (not self._locked and (self._waiters is None or
all(w.cancelled() for w in self._waiters))):
self._locked = True
return True
if self._waiters is None:
self._waiters = collections.deque()
fut = self._loop.create_future()
self._waiters.append(fut)
@ -192,7 +120,7 @@ class Lock(_ContextManagerMixin):
await fut
finally:
self._waiters.remove(fut)
except futures.CancelledError:
except exceptions.CancelledError:
if not self._locked:
self._wake_up_first()
raise
@ -219,6 +147,8 @@ class Lock(_ContextManagerMixin):
def _wake_up_first(self):
"""Wake up the first waiter if it isn't done."""
if not self._waiters:
return
try:
fut = next(iter(self._waiters))
except StopIteration:
@ -243,10 +173,13 @@ class Event:
def __init__(self, *, loop=None):
self._waiters = collections.deque()
self._value = False
if loop is not None:
self._loop = loop
else:
if loop is None:
self._loop = events.get_event_loop()
else:
self._loop = loop
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
def __repr__(self):
res = super().__repr__()
@ -307,13 +240,16 @@ class Condition(_ContextManagerMixin):
"""
def __init__(self, lock=None, *, loop=None):
if loop is not None:
self._loop = loop
else:
if loop is None:
self._loop = events.get_event_loop()
else:
self._loop = loop
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
if lock is None:
lock = Lock(loop=self._loop)
lock = Lock(loop=loop)
elif lock._loop is not self._loop:
raise ValueError("loop argument must agree with lock")
@ -363,11 +299,11 @@ class Condition(_ContextManagerMixin):
try:
await self.acquire()
break
except futures.CancelledError:
except exceptions.CancelledError:
cancelled = True
if cancelled:
raise futures.CancelledError
raise exceptions.CancelledError
async def wait_for(self, predicate):
"""Wait until a predicate becomes true.
@ -435,10 +371,14 @@ class Semaphore(_ContextManagerMixin):
raise ValueError("Semaphore initial value must be >= 0")
self._value = value
self._waiters = collections.deque()
if loop is not None:
self._loop = loop
else:
if loop is None:
self._loop = events.get_event_loop()
else:
self._loop = loop
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
self._wakeup_scheduled = False
def __repr__(self):
res = super().__repr__()
@ -452,6 +392,7 @@ class Semaphore(_ContextManagerMixin):
waiter = self._waiters.popleft()
if not waiter.done():
waiter.set_result(None)
self._wakeup_scheduled = True
return
def locked(self):
@ -467,16 +408,17 @@ class Semaphore(_ContextManagerMixin):
called release() to make it larger than 0, and then return
True.
"""
while self._value <= 0:
# _wakeup_scheduled is set if *another* task is scheduled to wake up
# but its acquire() is not resumed yet
while self._wakeup_scheduled or self._value <= 0:
fut = self._loop.create_future()
self._waiters.append(fut)
try:
await fut
except:
# See the similar code in Queue.get.
fut.cancel()
if self._value > 0 and not fut.cancelled():
self._wake_up_next()
# reset _wakeup_scheduled *after* waiting for a future
self._wakeup_scheduled = False
except exceptions.CancelledError:
self._wake_up_next()
raise
self._value -= 1
return True
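For context, a small usage sketch of the semaphore these changes harden; the _wakeup_scheduled flag keeps a late-arriving acquire() from stealing the slot already promised to a woken waiter (names illustrative):

import asyncio

async def job(sem, i):
    async with sem:                     # at most 3 jobs run concurrently
        print(f"job {i} running")
        await asyncio.sleep(0.01)

async def main():
    sem = asyncio.Semaphore(3)
    await asyncio.gather(*(job(sem, i) for i in range(10)))

asyncio.run(main())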
@ -498,6 +440,11 @@ class BoundedSemaphore(Semaphore):
"""
def __init__(self, value=1, *, loop=None):
if loop:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
self._bound_value = value
super().__init__(value, loop=loop)

View File

@ -10,17 +10,39 @@ import io
import os
import socket
import warnings
import signal
import threading
import collections
from . import base_events
from . import constants
from . import events
from . import futures
from . import exceptions
from . import protocols
from . import sslproto
from . import transports
from . import trsock
from .log import logger
def _set_socket_extra(transport, sock):
transport._extra['socket'] = trsock.TransportSocket(sock)
try:
transport._extra['sockname'] = sock.getsockname()
except socket.error:
if transport._loop.get_debug():
logger.warning(
"getsockname() failed on %r", sock, exc_info=True)
if 'peername' not in transport._extra:
try:
transport._extra['peername'] = sock.getpeername()
except socket.error:
# UDP sockets may not have a peer name
transport._extra['peername'] = None
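The extras filled in by this helper surface through Transport.get_extra_info(); a runnable loopback sketch (illustrative):

import asyncio

class Probe(asyncio.Protocol):
    def connection_made(self, transport):
        # 'socket', 'sockname' and 'peername' are the keys populated by
        # _set_socket_extra(); 'peername' may be None for unconnected UDP.
        print("sockname:", transport.get_extra_info("sockname"))
        print("peername:", transport.get_extra_info("peername"))
        transport.close()

async def main():
    loop = asyncio.get_running_loop()
    server = await loop.create_server(Probe, "127.0.0.1", 0)
    host, port = server.sockets[0].getsockname()
    _, writer = await asyncio.open_connection(host, port)
    await asyncio.sleep(0.1)
    writer.close()
    server.close()
    await server.wait_closed()

asyncio.run(main())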
class _ProactorBasePipeTransport(transports._FlowControlMixin,
transports.BaseTransport):
"""Base class for pipe and socket transports."""
@ -88,15 +110,14 @@ class _ProactorBasePipeTransport(transports._FlowControlMixin,
self._read_fut.cancel()
self._read_fut = None
def __del__(self):
def __del__(self, _warn=warnings.warn):
if self._sock is not None:
warnings.warn(f"unclosed transport {self!r}", ResourceWarning,
source=self)
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
self.close()
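The new _warn=warnings.warn default is a shutdown-safety idiom: when __del__ runs during interpreter teardown, module globals such as 'warnings' may already have been cleared, so the function is captured as a default argument instead. The same pattern in isolation (illustrative class; note ResourceWarning is hidden by the default warning filters):

import warnings

class Resource:
    def __init__(self):
        self._open = True

    def close(self):
        self._open = False

    def __del__(self, _warn=warnings.warn):
        # The default argument keeps a live reference to warnings.warn
        # even if the 'warnings' global is gone at interpreter shutdown.
        if self._open:
            _warn(f"unclosed {self!r}", ResourceWarning, source=self)
            self.close()

r = Resource()
del r          # emits ResourceWarning via the captured _warn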
def _fatal_error(self, exc, message='Fatal error on pipe transport'):
try:
if isinstance(exc, base_events._FATAL_ERROR_IGNORE):
if isinstance(exc, OSError):
if self._loop.get_debug():
logger.debug("%r: %s", self, message, exc_info=True)
else:
@ -110,7 +131,7 @@ class _ProactorBasePipeTransport(transports._FlowControlMixin,
self._force_close(exc)
def _force_close(self, exc):
if self._empty_waiter is not None:
if self._empty_waiter is not None and not self._empty_waiter.done():
if exc is None:
self._empty_waiter.set_result(None)
else:
@ -137,7 +158,7 @@ class _ProactorBasePipeTransport(transports._FlowControlMixin,
# end then it may fail with ERROR_NETNAME_DELETED if we
# just close our end. First calling shutdown() seems to
# cure it, but maybe using DisconnectEx() would be better.
if hasattr(self._sock, 'shutdown'):
if hasattr(self._sock, 'shutdown') and self._sock.fileno() != -1:
self._sock.shutdown(socket.SHUT_RDWR)
self._sock.close()
self._sock = None
@ -212,7 +233,9 @@ class _ProactorReadPipeTransport(_ProactorBasePipeTransport,
try:
keep_open = self._protocol.eof_received()
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(
exc, 'Fatal error: protocol.eof_received() call failed.')
return
@ -235,7 +258,9 @@ class _ProactorReadPipeTransport(_ProactorBasePipeTransport,
if isinstance(self._protocol, protocols.BufferedProtocol):
try:
protocols._feed_data_to_buffered_proto(self._protocol, data)
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(exc,
'Fatal error: protocol.buffer_updated() '
'call failed.')
@ -282,7 +307,7 @@ class _ProactorReadPipeTransport(_ProactorBasePipeTransport,
self._force_close(exc)
except OSError as exc:
self._fatal_error(exc, 'Fatal read error on pipe transport')
except futures.CancelledError:
except exceptions.CancelledError:
if not self._closing:
raise
else:
@ -343,6 +368,10 @@ class _ProactorBaseWritePipeTransport(_ProactorBasePipeTransport,
def _loop_writing(self, f=None, data=None):
try:
if f is not None and self._write_fut is None and self._closing:
# XXX most likely self._force_close() has been called, and
# it has set self._write_fut to None.
return
assert f is self._write_fut
self._write_fut = None
self._pending_write = 0
@ -421,6 +450,135 @@ class _ProactorWritePipeTransport(_ProactorBaseWritePipeTransport):
self.close()
class _ProactorDatagramTransport(_ProactorBasePipeTransport,
transports.DatagramTransport):
max_size = 256 * 1024
def __init__(self, loop, sock, protocol, address=None,
waiter=None, extra=None):
self._address = address
self._empty_waiter = None
# We don't need to call _protocol.connection_made() since our base
# constructor does it for us.
super().__init__(loop, sock, protocol, waiter=waiter, extra=extra)
# The base constructor sets _buffer = None, so we set it here
self._buffer = collections.deque()
self._loop.call_soon(self._loop_reading)
def _set_extra(self, sock):
_set_socket_extra(self, sock)
def get_write_buffer_size(self):
return sum(len(data) for data, _ in self._buffer)
def abort(self):
self._force_close(None)
def sendto(self, data, addr=None):
if not isinstance(data, (bytes, bytearray, memoryview)):
raise TypeError(
    f'data argument must be a bytes-like object, '
    f'not {type(data).__name__!r}')
if not data:
return
if self._address is not None and addr not in (None, self._address):
raise ValueError(
f'Invalid address: must be None or {self._address}')
if self._conn_lost and self._address:
if self._conn_lost >= constants.LOG_THRESHOLD_FOR_CONNLOST_WRITES:
logger.warning('socket.sendto() raised exception.')
self._conn_lost += 1
return
# Ensure that what we buffer is immutable.
self._buffer.append((bytes(data), addr))
if self._write_fut is None:
# No current write operations are active, kick one off
self._loop_writing()
# else: A write operation is already kicked off
self._maybe_pause_protocol()
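This transport is what makes UDP usable on the proactor (Windows) event loop; a hedged client sketch, assuming something is listening on 127.0.0.1:9999:

import asyncio

class PingClient(asyncio.DatagramProtocol):
    def connection_made(self, transport):
        transport.sendto(b"ping")       # queued via the buffer logic above

    def datagram_received(self, data, addr):
        print("received", data, "from", addr)

async def main():
    loop = asyncio.get_running_loop()
    transport, _ = await loop.create_datagram_endpoint(
        PingClient, remote_addr=("127.0.0.1", 9999))
    await asyncio.sleep(0.1)
    transport.close()

asyncio.run(main())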
def _loop_writing(self, fut=None):
try:
if self._conn_lost:
return
assert fut is self._write_fut
self._write_fut = None
if fut:
# We are in a _loop_writing() done callback, get the result
fut.result()
if not self._buffer or (self._conn_lost and self._address):
# The connection has been closed
if self._closing:
self._loop.call_soon(self._call_connection_lost, None)
return
data, addr = self._buffer.popleft()
if self._address is not None:
self._write_fut = self._loop._proactor.send(self._sock,
data)
else:
self._write_fut = self._loop._proactor.sendto(self._sock,
data,
addr=addr)
except OSError as exc:
self._protocol.error_received(exc)
except Exception as exc:
self._fatal_error(exc, 'Fatal write error on datagram transport')
else:
self._write_fut.add_done_callback(self._loop_writing)
self._maybe_resume_protocol()
def _loop_reading(self, fut=None):
data = None
try:
if self._conn_lost:
return
assert self._read_fut is fut or (self._read_fut is None and
self._closing)
self._read_fut = None
if fut is not None:
res = fut.result()
if self._closing:
# since close() has been called we ignore any read data
data = None
return
if self._address is not None:
data, addr = res, self._address
else:
data, addr = res
if self._conn_lost:
return
if self._address is not None:
self._read_fut = self._loop._proactor.recv(self._sock,
self.max_size)
else:
self._read_fut = self._loop._proactor.recvfrom(self._sock,
self.max_size)
except OSError as exc:
self._protocol.error_received(exc)
except exceptions.CancelledError:
if not self._closing:
raise
else:
if self._read_fut is not None:
self._read_fut.add_done_callback(self._loop_reading)
finally:
if data:
self._protocol.datagram_received(data, addr)
class _ProactorDuplexPipeTransport(_ProactorReadPipeTransport,
_ProactorBaseWritePipeTransport,
transports.Transport):
@ -440,23 +598,13 @@ class _ProactorSocketTransport(_ProactorReadPipeTransport,
_sendfile_compatible = constants._SendfileMode.TRY_NATIVE
def __init__(self, loop, sock, protocol, waiter=None,
extra=None, server=None):
super().__init__(loop, sock, protocol, waiter, extra, server)
base_events._set_nodelay(sock)
def _set_extra(self, sock):
self._extra['socket'] = sock
try:
self._extra['sockname'] = sock.getsockname()
except (socket.error, AttributeError):
if self._loop.get_debug():
logger.warning(
"getsockname() failed on %r", sock, exc_info=True)
if 'peername' not in self._extra:
try:
self._extra['peername'] = sock.getpeername()
except (socket.error, AttributeError):
if self._loop.get_debug():
logger.warning("getpeername() failed on %r",
sock, exc_info=True)
_set_socket_extra(self, sock)
def can_write_eof(self):
return True
@ -480,6 +628,9 @@ class BaseProactorEventLoop(base_events.BaseEventLoop):
self._accept_futures = {} # socket file descriptor => Future
proactor.set_loop(self)
self._make_self_pipe()
if threading.current_thread() is threading.main_thread():
# The wakeup fd can only be installed from the main thread.
signal.set_wakeup_fd(self._csock.fileno())
def _make_socket_transport(self, sock, protocol, waiter=None,
extra=None, server=None):
@ -499,6 +650,11 @@ class BaseProactorEventLoop(base_events.BaseEventLoop):
extra=extra, server=server)
return ssl_protocol._app_transport
def _make_datagram_transport(self, sock, protocol,
address=None, waiter=None, extra=None):
return _ProactorDatagramTransport(self, sock, protocol, address,
waiter, extra)
def _make_duplex_pipe_transport(self, sock, protocol, waiter=None,
extra=None):
return _ProactorDuplexPipeTransport(self,
@ -520,6 +676,8 @@ class BaseProactorEventLoop(base_events.BaseEventLoop):
if self.is_closed():
return
if threading.current_thread() is threading.main_thread():
signal.set_wakeup_fd(-1)
# Call these methods before closing the event loop (before calling
# BaseEventLoop.close), because they can schedule callbacks with
# call_soon(), which is forbidden when the event loop is closed.
@ -551,11 +709,11 @@ class BaseProactorEventLoop(base_events.BaseEventLoop):
try:
fileno = file.fileno()
except (AttributeError, io.UnsupportedOperation) as err:
raise events.SendfileNotAvailableError("not a regular file")
raise exceptions.SendfileNotAvailableError("not a regular file")
try:
fsize = os.fstat(fileno).st_size
except OSError as err:
raise events.SendfileNotAvailableError("not a regular file")
except OSError:
raise exceptions.SendfileNotAvailableError("not a regular file")
blocksize = count if count else fsize
if not blocksize:
return 0 # empty file
@ -604,17 +762,26 @@ class BaseProactorEventLoop(base_events.BaseEventLoop):
self._ssock.setblocking(False)
self._csock.setblocking(False)
self._internal_fds += 1
self.call_soon(self._loop_self_reading)
def _loop_self_reading(self, f=None):
try:
if f is not None:
f.result() # may raise
if self._self_reading_future is not f:
# When we scheduled this Future, we assigned it to
# _self_reading_future. If it's not there now, something has
# tried to cancel the loop while this callback was still in the
# queue (see windows_events.ProactorEventLoop.run_forever). In
# that case stop here instead of continuing to schedule a new
# iteration.
return
f = self._proactor.recv(self._ssock, 4096)
except futures.CancelledError:
except exceptions.CancelledError:
# _close_self_pipe() has been called, stop waiting for data
return
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self.call_exception_handler({
'message': 'Error on reading from the event loop self pipe',
'exception': exc,
@ -625,7 +792,22 @@ class BaseProactorEventLoop(base_events.BaseEventLoop):
f.add_done_callback(self._loop_self_reading)
def _write_to_self(self):
self._csock.send(b'\0')
# This may be called from a different thread, possibly after
# _close_self_pipe() has been called or even while it is
# running. Guard for self._csock being None or closed. When
# a socket is closed, send() raises OSError (with errno set to
# EBADF, but let's not rely on the exact error code).
csock = self._csock
if csock is None:
return
try:
csock.send(b'\0')
except OSError:
if self._debug:
logger.debug("Failed to write a null byte into the "
             "self-pipe socket",
             exc_info=True)
def _start_serving(self, protocol_factory, sock,
sslcontext=None, server=None, backlog=100,
@ -656,13 +838,13 @@ class BaseProactorEventLoop(base_events.BaseEventLoop):
self.call_exception_handler({
'message': 'Accept failed on a socket',
'exception': exc,
'socket': sock,
'socket': trsock.TransportSocket(sock),
})
sock.close()
elif self._debug:
logger.debug("Accept failed on socket %r",
sock, exc_info=True)
except futures.CancelledError:
except exceptions.CancelledError:
sock.close()
else:
self._accept_futures[sock.fileno()] = f

View File

@ -16,6 +16,8 @@ class BaseProtocol:
write-only transport like write pipe
"""
__slots__ = ()
def connection_made(self, transport):
"""Called when a connection is made.
@ -87,6 +89,8 @@ class Protocol(BaseProtocol):
* CL: connection_lost()
"""
__slots__ = ()
def data_received(self, data):
"""Called when some data is received.
@ -105,10 +109,6 @@ class Protocol(BaseProtocol):
class BufferedProtocol(BaseProtocol):
"""Interface for stream protocol with manual buffer control.
Important: this has been added to asyncio in Python 3.7
*on a provisional basis*! Consider it as an experimental API that
might be changed or removed in Python 3.8.
Event methods, such as `create_server` and `create_connection`,
accept factories that return protocols that implement this interface.
@ -130,6 +130,8 @@ class BufferedProtocol(BaseProtocol):
* CL: connection_lost()
"""
__slots__ = ()
def get_buffer(self, sizehint):
"""Called to allocate a new receive buffer.
@ -160,6 +162,8 @@ class BufferedProtocol(BaseProtocol):
class DatagramProtocol(BaseProtocol):
"""Interface for datagram protocol."""
__slots__ = ()
def datagram_received(self, data, addr):
"""Called when some datagram is received."""
@ -173,6 +177,8 @@ class DatagramProtocol(BaseProtocol):
class SubprocessProtocol(BaseProtocol):
"""Interface for protocol for subprocess calls."""
__slots__ = ()
def pipe_data_received(self, fd, data):
"""Called when the subprocess writes data into stdout/stderr pipe.

View File

@ -2,6 +2,8 @@ __all__ = ('Queue', 'PriorityQueue', 'LifoQueue', 'QueueFull', 'QueueEmpty')
import collections
import heapq
import warnings
from types import GenericAlias
from . import events
from . import locks
@ -34,6 +36,9 @@ class Queue:
self._loop = events.get_event_loop()
else:
self._loop = loop
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
self._maxsize = maxsize
# Futures.
@ -41,7 +46,7 @@ class Queue:
# Futures.
self._putters = collections.deque()
self._unfinished_tasks = 0
self._finished = locks.Event(loop=self._loop)
self._finished = locks.Event(loop=loop)
self._finished.set()
self._init(maxsize)
@ -72,6 +77,8 @@ class Queue:
def __str__(self):
return f'<{type(self).__name__} {self._format()}>'
__class_getitem__ = classmethod(GenericAlias)
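__class_getitem__ = classmethod(GenericAlias) makes the class subscriptable for typing (PEP 585 style) without changing runtime behaviour; a quick sketch:

import asyncio

print(asyncio.Queue[int])      # works thanks to __class_getitem__

async def main():
    q: asyncio.Queue[int] = asyncio.Queue()
    await q.put(1)
    print(await q.get())

asyncio.run(main())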
def _format(self):
result = f'maxsize={self._maxsize!r}'
if getattr(self, '_queue', None):

View File

@ -5,8 +5,8 @@ from . import events
from . import tasks
def run(main, *, debug=False):
"""Run a coroutine.
def run(main, *, debug=None):
"""Execute the coroutine and return the result.
This function runs the passed coroutine, taking care of
managing the asyncio event loop and finalizing asynchronous
@ -39,12 +39,14 @@ def run(main, *, debug=False):
loop = events.new_event_loop()
try:
events.set_event_loop(loop)
loop.set_debug(debug)
if debug is not None:
loop.set_debug(debug)
return loop.run_until_complete(main)
finally:
try:
_cancel_all_tasks(loop)
loop.run_until_complete(loop.shutdown_asyncgens())
loop.run_until_complete(loop.shutdown_default_executor())
finally:
events.set_event_loop(None)
loop.close()
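The switch from debug=False to debug=None means asyncio.run() now leaves the loop's debug mode alone unless told otherwise, so -X dev and PYTHONASYNCIODEBUG keep working; usage sketch:

import asyncio

async def main():
    await asyncio.sleep(0)
    return "done"

print(asyncio.run(main()))              # inherits the global debug setting
print(asyncio.run(main(), debug=True))  # forces debug mode on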
@ -59,7 +61,7 @@ def _cancel_all_tasks(loop):
task.cancel()
loop.run_until_complete(
tasks.gather(*to_cancel, loop=loop, return_exceptions=True))
tasks._gather(*to_cancel, loop=loop, return_exceptions=True))
for task in to_cancel:
if task.cancelled():

View File

@ -25,6 +25,7 @@ from . import futures
from . import protocols
from . import sslproto
from . import transports
from . import trsock
from .log import logger
@ -39,17 +40,6 @@ def _test_selector_event(selector, fd, event):
return bool(key.events & event)
if hasattr(socket, 'TCP_NODELAY'):
def _set_nodelay(sock):
if (sock.family in {socket.AF_INET, socket.AF_INET6} and
sock.type == socket.SOCK_STREAM and
sock.proto == socket.IPPROTO_TCP):
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
else:
def _set_nodelay(sock):
pass
class BaseSelectorEventLoop(base_events.BaseEventLoop):
"""Selector event loop.
@ -138,14 +128,16 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
# a socket is closed, send() raises OSError (with errno set to
# EBADF, but let's not rely on the exact error code).
csock = self._csock
if csock is not None:
try:
csock.send(b'\0')
except OSError:
if self._debug:
logger.debug("Fail to write a null byte into the "
"self-pipe socket",
exc_info=True)
if csock is None:
return
try:
csock.send(b'\0')
except OSError:
if self._debug:
logger.debug("Failed to write a null byte into the "
             "self-pipe socket",
             exc_info=True)
def _start_serving(self, protocol_factory, sock,
sslcontext=None, server=None, backlog=100,
@ -182,7 +174,7 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
self.call_exception_handler({
'message': 'socket.accept() out of system resource',
'exception': exc,
'socket': sock,
'socket': trsock.TransportSocket(sock),
})
self._remove_reader(sock.fileno())
self.call_later(constants.ACCEPT_RETRY_DELAY,
@ -219,12 +211,14 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
try:
await waiter
except:
except BaseException:
transport.close()
raise
# It's now up to the protocol to handle the connection.
# It's now up to the protocol to handle the connection.
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
if self._debug:
context = {
'message':
@ -269,6 +263,7 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
(handle, writer))
if reader is not None:
reader.cancel()
return handle
def _remove_reader(self, fd):
if self.is_closed():
@ -305,6 +300,7 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
(reader, handle))
if writer is not None:
writer.cancel()
return handle
def _remove_writer(self, fd):
"""Remove a writer callback."""
@ -332,7 +328,7 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
def add_reader(self, fd, callback, *args):
"""Add a reader callback."""
self._ensure_fd_no_transport(fd)
return self._add_reader(fd, callback, *args)
self._add_reader(fd, callback, *args)
def remove_reader(self, fd):
"""Remove a reader callback."""
@ -342,7 +338,7 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
def add_writer(self, fd, callback, *args):
    """Add a writer callback."""
self._ensure_fd_no_transport(fd)
return self._add_writer(fd, callback, *args)
self._add_writer(fd, callback, *args)
def remove_writer(self, fd):
"""Remove a writer callback."""
@ -356,29 +352,37 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
The maximum amount of data to be received at once is specified by
nbytes.
"""
base_events._check_ssl_socket(sock)
if self._debug and sock.gettimeout() != 0:
raise ValueError("the socket must be non-blocking")
try:
return sock.recv(n)
except (BlockingIOError, InterruptedError):
pass
fut = self.create_future()
self._sock_recv(fut, None, sock, n)
fd = sock.fileno()
self._ensure_fd_no_transport(fd)
handle = self._add_reader(fd, self._sock_recv, fut, sock, n)
fut.add_done_callback(
functools.partial(self._sock_read_done, fd, handle=handle))
return await fut
def _sock_recv(self, fut, registered_fd, sock, n):
def _sock_read_done(self, fd, fut, handle=None):
if handle is None or not handle.cancelled():
self.remove_reader(fd)
def _sock_recv(self, fut, sock, n):
# _sock_recv() can add itself as an I/O callback if the operation can't
# be done immediately. Don't use it directly, call sock_recv().
if registered_fd is not None:
# Remove the callback early. It should be rare that the
# selector says the fd is ready but the call still returns
# EAGAIN, and I am willing to take a hit in that case in
# order to simplify the common case.
self.remove_reader(registered_fd)
if fut.cancelled():
if fut.done():
return
try:
data = sock.recv(n)
except (BlockingIOError, InterruptedError):
fd = sock.fileno()
self.add_reader(fd, self._sock_recv, fut, fd, sock, n)
except Exception as exc:
return # try again next time
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
fut.set_exception(exc)
else:
fut.set_result(data)
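The restructured sock_recv() first tries a non-blocking recv() and only registers a reader on EAGAIN; a runnable sketch with a socket pair (illustrative):

import asyncio
import socket

async def main():
    loop = asyncio.get_running_loop()
    a, b = socket.socketpair()
    a.setblocking(False)
    b.setblocking(False)
    await loop.sock_sendall(a, b"hello")
    print(await loop.sock_recv(b, 1024))   # fast path: data already queued
    a.close()
    b.close()

asyncio.run(main())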
@ -389,30 +393,34 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
The received data is written into *buf* (a writable buffer).
The return value is the number of bytes written.
"""
base_events._check_ssl_socket(sock)
if self._debug and sock.gettimeout() != 0:
raise ValueError("the socket must be non-blocking")
try:
return sock.recv_into(buf)
except (BlockingIOError, InterruptedError):
pass
fut = self.create_future()
self._sock_recv_into(fut, None, sock, buf)
fd = sock.fileno()
self._ensure_fd_no_transport(fd)
handle = self._add_reader(fd, self._sock_recv_into, fut, sock, buf)
fut.add_done_callback(
functools.partial(self._sock_read_done, fd, handle=handle))
return await fut
def _sock_recv_into(self, fut, registered_fd, sock, buf):
def _sock_recv_into(self, fut, sock, buf):
# _sock_recv_into() can add itself as an I/O callback if the operation
# can't be done immediately. Don't use it directly, call
# sock_recv_into().
if registered_fd is not None:
# Remove the callback early. It should be rare that the
# selector says the FD is ready but the call still returns
# EAGAIN, and I am willing to take a hit in that case in
# order to simplify the common case.
self.remove_reader(registered_fd)
if fut.cancelled():
if fut.done():
return
try:
nbytes = sock.recv_into(buf)
except (BlockingIOError, InterruptedError):
fd = sock.fileno()
self.add_reader(fd, self._sock_recv_into, fut, fd, sock, buf)
except Exception as exc:
return # try again next time
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
fut.set_exception(exc)
else:
fut.set_result(nbytes)
@ -426,48 +434,65 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
raised, and there is no way to determine how much data, if any, was
successfully processed by the receiving end of the connection.
"""
base_events._check_ssl_socket(sock)
if self._debug and sock.gettimeout() != 0:
raise ValueError("the socket must be non-blocking")
fut = self.create_future()
if data:
self._sock_sendall(fut, None, sock, data)
else:
fut.set_result(None)
return await fut
def _sock_sendall(self, fut, registered_fd, sock, data):
if registered_fd is not None:
self.remove_writer(registered_fd)
if fut.cancelled():
return
try:
n = sock.send(data)
except (BlockingIOError, InterruptedError):
n = 0
except Exception as exc:
if n == len(data):
# all data sent
return
fut = self.create_future()
fd = sock.fileno()
self._ensure_fd_no_transport(fd)
# use a trick with a list in closure to store a mutable state
handle = self._add_writer(fd, self._sock_sendall, fut, sock,
memoryview(data), [n])
fut.add_done_callback(
functools.partial(self._sock_write_done, fd, handle=handle))
return await fut
def _sock_sendall(self, fut, sock, view, pos):
if fut.done():
# Future cancellation can be scheduled on previous loop iteration
return
start = pos[0]
try:
n = sock.send(view[start:])
except (BlockingIOError, InterruptedError):
return
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
fut.set_exception(exc)
return
if n == len(data):
start += n
if start == len(view):
fut.set_result(None)
else:
if n:
data = data[n:]
fd = sock.fileno()
self.add_writer(fd, self._sock_sendall, fut, fd, sock, data)
pos[0] = start
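The single-element pos list is a mutable cell shared across retries of the callback, since a plain integer argument could not be updated in place; the same trick in isolation (illustrative, not asyncio API):

def schedule(cb, *args):
    # Stand-in for the event loop re-invoking a writer callback.
    cb(*args)

def send_some(view, pos):
    pos[0] += 2                     # progress survives into the next call
    if pos[0] < len(view):
        schedule(send_some, view, pos)

progress = [0]
send_some(memoryview(b"abcdef"), progress)
print(progress[0])                  # 6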
async def sock_connect(self, sock, address):
"""Connect to a remote socket at address.
This method is a coroutine.
"""
base_events._check_ssl_socket(sock)
if self._debug and sock.gettimeout() != 0:
raise ValueError("the socket must be non-blocking")
if not hasattr(socket, 'AF_UNIX') or sock.family != socket.AF_UNIX:
if sock.family == socket.AF_INET or (
base_events._HAS_IPv6 and sock.family == socket.AF_INET6):
resolved = await self._ensure_resolved(
address, family=sock.family, proto=sock.proto, loop=self)
address, family=sock.family, type=sock.type, proto=sock.proto,
loop=self,
)
_, _, _, _, address = resolved[0]
fut = self.create_future()
@ -483,19 +508,24 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
# connection runs in background. We have to wait until the socket
# becomes writable to be notified when the connection succeed or
# fails.
self._ensure_fd_no_transport(fd)
handle = self._add_writer(
fd, self._sock_connect_cb, fut, sock, address)
fut.add_done_callback(
functools.partial(self._sock_connect_done, fd))
self.add_writer(fd, self._sock_connect_cb, fut, sock, address)
except Exception as exc:
functools.partial(self._sock_write_done, fd, handle=handle))
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
fut.set_exception(exc)
else:
fut.set_result(None)
def _sock_connect_done(self, fd, fut):
self.remove_writer(fd)
def _sock_write_done(self, fd, fut, handle=None):
if handle is None or not handle.cancelled():
self.remove_writer(fd)
def _sock_connect_cb(self, fut, sock, address):
if fut.cancelled():
if fut.done():
return
try:
@ -506,7 +536,9 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
except (BlockingIOError, InterruptedError):
# socket is still registered, the callback will be retried later
pass
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
fut.set_exception(exc)
else:
fut.set_result(None)
@ -519,24 +551,26 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop):
object usable to send and receive data on the connection, and address
is the address bound to the socket on the other end of the connection.
"""
base_events._check_ssl_socket(sock)
if self._debug and sock.gettimeout() != 0:
raise ValueError("the socket must be non-blocking")
fut = self.create_future()
self._sock_accept(fut, False, sock)
self._sock_accept(fut, sock)
return await fut
def _sock_accept(self, fut, registered, sock):
def _sock_accept(self, fut, sock):
fd = sock.fileno()
if registered:
self.remove_reader(fd)
if fut.cancelled():
return
try:
conn, address = sock.accept()
conn.setblocking(False)
except (BlockingIOError, InterruptedError):
self.add_reader(fd, self._sock_accept, fut, True, sock)
except Exception as exc:
self._ensure_fd_no_transport(fd)
handle = self._add_reader(fd, self._sock_accept, fut, sock)
fut.add_done_callback(
functools.partial(self._sock_read_done, fd, handle=handle))
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
fut.set_exception(exc)
else:
fut.set_result((conn, address))
@ -588,8 +622,11 @@ class _SelectorTransport(transports._FlowControlMixin,
def __init__(self, loop, sock, protocol, extra=None, server=None):
super().__init__(extra, loop)
self._extra['socket'] = sock
self._extra['sockname'] = sock.getsockname()
self._extra['socket'] = trsock.TransportSocket(sock)
try:
self._extra['sockname'] = sock.getsockname()
except OSError:
self._extra['sockname'] = None
if 'peername' not in self._extra:
try:
self._extra['peername'] = sock.getpeername()
@ -660,15 +697,14 @@ class _SelectorTransport(transports._FlowControlMixin,
self._loop._remove_writer(self._sock_fd)
self._loop.call_soon(self._call_connection_lost, None)
def __del__(self):
def __del__(self, _warn=warnings.warn):
if self._sock is not None:
warnings.warn(f"unclosed transport {self!r}", ResourceWarning,
source=self)
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
self._sock.close()
def _fatal_error(self, exc, message='Fatal error on transport'):
# Should be called from exception handler only.
if isinstance(exc, base_events._FATAL_ERROR_IGNORE):
if isinstance(exc, OSError):
if self._loop.get_debug():
logger.debug("%r: %s", self, message, exc_info=True)
else:
@ -733,7 +769,7 @@ class _SelectorSocketTransport(_SelectorTransport):
# Disable the Nagle algorithm -- small writes will be
# sent without waiting for the TCP ACK. This generally
# decreases the latency (in some cases significantly.)
_set_nodelay(self._sock)
base_events._set_nodelay(self._sock)
self._loop.call_soon(self._protocol.connection_made, self)
# only start reading when connection_made() has been called
@ -782,7 +818,9 @@ class _SelectorSocketTransport(_SelectorTransport):
buf = self._protocol.get_buffer(-1)
if not len(buf):
raise RuntimeError('get_buffer() returned an empty buffer')
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(
exc, 'Fatal error: protocol.get_buffer() call failed.')
return
@ -791,7 +829,9 @@ class _SelectorSocketTransport(_SelectorTransport):
nbytes = self._sock.recv_into(buf)
except (BlockingIOError, InterruptedError):
return
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(exc, 'Fatal read error on socket transport')
return
@ -801,7 +841,9 @@ class _SelectorSocketTransport(_SelectorTransport):
try:
self._protocol.buffer_updated(nbytes)
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(
exc, 'Fatal error: protocol.buffer_updated() call failed.')
@ -812,7 +854,9 @@ class _SelectorSocketTransport(_SelectorTransport):
data = self._sock.recv(self.max_size)
except (BlockingIOError, InterruptedError):
return
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(exc, 'Fatal read error on socket transport')
return
@ -822,7 +866,9 @@ class _SelectorSocketTransport(_SelectorTransport):
try:
self._protocol.data_received(data)
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(
exc, 'Fatal error: protocol.data_received() call failed.')
@ -832,7 +878,9 @@ class _SelectorSocketTransport(_SelectorTransport):
try:
keep_open = self._protocol.eof_received()
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(
exc, 'Fatal error: protocol.eof_received() call failed.')
return
@ -868,7 +916,9 @@ class _SelectorSocketTransport(_SelectorTransport):
n = self._sock.send(data)
except (BlockingIOError, InterruptedError):
pass
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(exc, 'Fatal write error on socket transport')
return
else:
@ -891,7 +941,9 @@ class _SelectorSocketTransport(_SelectorTransport):
n = self._sock.send(self._buffer)
except (BlockingIOError, InterruptedError):
pass
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._loop._remove_writer(self._sock_fd)
self._buffer.clear()
self._fatal_error(exc, 'Fatal write error on socket transport')
@ -967,7 +1019,9 @@ class _SelectorDatagramTransport(_SelectorTransport):
pass
except OSError as exc:
self._protocol.error_received(exc)
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(exc, 'Fatal read error on datagram transport')
else:
self._protocol.datagram_received(data, addr)
@ -979,9 +1033,11 @@ class _SelectorDatagramTransport(_SelectorTransport):
if not data:
return
if self._address and addr not in (None, self._address):
raise ValueError(
f'Invalid address: must be None or {self._address}')
if self._address:
if addr not in (None, self._address):
raise ValueError(
f'Invalid address: must be None or {self._address}')
addr = self._address
if self._conn_lost and self._address:
if self._conn_lost >= constants.LOG_THRESHOLD_FOR_CONNLOST_WRITES:
@ -992,7 +1048,7 @@ class _SelectorDatagramTransport(_SelectorTransport):
if not self._buffer:
# Attempt to send it right away first.
try:
if self._address:
if self._extra['peername']:
self._sock.send(data)
else:
self._sock.sendto(data, addr)
@ -1002,7 +1058,9 @@ class _SelectorDatagramTransport(_SelectorTransport):
except OSError as exc:
self._protocol.error_received(exc)
return
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(
exc, 'Fatal write error on datagram transport')
return
@ -1015,7 +1073,7 @@ class _SelectorDatagramTransport(_SelectorTransport):
while self._buffer:
data, addr = self._buffer.popleft()
try:
if self._address:
if self._extra['peername']:
self._sock.send(data)
else:
self._sock.sendto(data, addr)
@ -1025,7 +1083,9 @@ class _SelectorDatagramTransport(_SelectorTransport):
except OSError as exc:
self._protocol.error_received(exc)
return
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._fatal_error(
exc, 'Fatal write error on datagram transport')
return

View File

@ -5,7 +5,6 @@ try:
except ImportError: # pragma: no cover
ssl = None
from . import base_events
from . import constants
from . import protocols
from . import transports
@ -316,10 +315,9 @@ class _SSLProtocolTransport(transports._FlowControlMixin,
self._closed = True
self._ssl_protocol._start_shutdown()
def __del__(self):
def __del__(self, _warn=warnings.warn):
if not self._closed:
warnings.warn(f"unclosed transport {self!r}", ResourceWarning,
source=self)
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
self.close()
def is_reading(self):
@ -369,6 +367,12 @@ class _SSLProtocolTransport(transports._FlowControlMixin,
"""Return the current size of the write buffer."""
return self._ssl_protocol._transport.get_write_buffer_size()
def get_write_buffer_limits(self):
"""Get the high and low watermarks for write flow control.
Return a tuple (low, high) where low and high are
positive number of bytes."""
return self._ssl_protocol._transport.get_write_buffer_limits()
@property
def _protocol_paused(self):
# Required for sendfile fallback pause_writing/resume_writing logic
@ -499,7 +503,11 @@ class SSLProtocol(protocols.Protocol):
self._app_transport._closed = True
self._transport = None
self._app_transport = None
if getattr(self, '_handshake_timeout_handle', None):
self._handshake_timeout_handle.cancel()
self._wakeup_waiter(exc)
self._app_protocol = None
self._sslpipe = None
def pause_writing(self):
"""Called when the low-level transport's buffer goes over
@ -524,7 +532,9 @@ class SSLProtocol(protocols.Protocol):
try:
ssldata, appdata = self._sslpipe.feed_ssldata(data)
except Exception as e:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as e:
self._fatal_error(e, 'SSL error in data received')
return
@ -539,7 +549,9 @@ class SSLProtocol(protocols.Protocol):
self._app_protocol, chunk)
else:
self._app_protocol.data_received(chunk)
except Exception as ex:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as ex:
self._fatal_error(
ex, 'application protocol failed to receive SSL data')
return
@ -625,7 +637,9 @@ class SSLProtocol(protocols.Protocol):
raise handshake_exc
peercert = sslobj.getpeercert()
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
if isinstance(exc, ssl.CertificateError):
msg = 'SSL handshake failed on verifying the certificate'
else:
@ -688,7 +702,9 @@ class SSLProtocol(protocols.Protocol):
# delete it and reduce the outstanding buffer size.
del self._write_backlog[0]
self._write_buffer_size -= len(data)
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
if self._in_handshake:
# Exceptions will be re-raised in _on_handshake_complete.
self._on_handshake_complete(exc)
@ -696,7 +712,7 @@ class SSLProtocol(protocols.Protocol):
self._fatal_error(exc, 'Fatal error on SSL transport')
def _fatal_error(self, exc, message='Fatal error on transport'):
if isinstance(exc, base_events._FATAL_ERROR_IGNORE):
if isinstance(exc, OSError):
if self._loop.get_debug():
logger.debug("%r: %s", self, message, exc_info=True)
else:

View File

@ -1,16 +1,19 @@
__all__ = (
'StreamReader', 'StreamWriter', 'StreamReaderProtocol',
'open_connection', 'start_server',
'IncompleteReadError', 'LimitOverrunError',
)
'open_connection', 'start_server')
import socket
import sys
import warnings
import weakref
if hasattr(socket, 'AF_UNIX'):
__all__ += ('open_unix_connection', 'start_unix_server')
from . import coroutines
from . import events
from . import exceptions
from . import format_helpers
from . import protocols
from .log import logger
from .tasks import sleep
@ -19,37 +22,6 @@ from .tasks import sleep
_DEFAULT_LIMIT = 2 ** 16 # 64 KiB
class IncompleteReadError(EOFError):
"""
Incomplete read error. Attributes:
- partial: read bytes string before the end of stream was reached
- expected: total number of expected bytes (or None if unknown)
"""
def __init__(self, partial, expected):
super().__init__(f'{len(partial)} bytes read on a total of '
f'{expected!r} expected bytes')
self.partial = partial
self.expected = expected
def __reduce__(self):
return type(self), (self.partial, self.expected)
class LimitOverrunError(Exception):
"""Reached the buffer limit while looking for a separator.
Attributes:
- consumed: total number of to be consumed bytes.
"""
def __init__(self, message, consumed):
super().__init__(message)
self.consumed = consumed
def __reduce__(self):
return type(self), (self.args[0], self.consumed)
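Both classes now live in asyncio.exceptions and stay re-exported from the asyncio package, so existing 'except asyncio.IncompleteReadError' code keeps working; sketch:

import asyncio

async def main():
    reader = asyncio.StreamReader()
    reader.feed_data(b"abc")
    reader.feed_eof()
    try:
        await reader.readexactly(10)
    except asyncio.IncompleteReadError as e:   # defined in asyncio.exceptions
        print(f"got {len(e.partial)} of {e.expected} bytes")

asyncio.run(main())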
async def open_connection(host=None, port=None, *,
loop=None, limit=_DEFAULT_LIMIT, **kwds):
"""A wrapper for create_connection() returning a (reader, writer) pair.
@ -71,6 +43,10 @@ async def open_connection(host=None, port=None, *,
"""
if loop is None:
loop = events.get_event_loop()
else:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
reader = StreamReader(limit=limit, loop=loop)
protocol = StreamReaderProtocol(reader, loop=loop)
transport, _ = await loop.create_connection(
@ -104,6 +80,10 @@ async def start_server(client_connected_cb, host=None, port=None, *,
"""
if loop is None:
loop = events.get_event_loop()
else:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
def factory():
reader = StreamReader(limit=limit, loop=loop)
@ -122,6 +102,10 @@ if hasattr(socket, 'AF_UNIX'):
"""Similar to `open_connection` but works with UNIX Domain Sockets."""
if loop is None:
loop = events.get_event_loop()
else:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
reader = StreamReader(limit=limit, loop=loop)
protocol = StreamReaderProtocol(reader, loop=loop)
transport, _ = await loop.create_unix_connection(
@ -134,6 +118,10 @@ if hasattr(socket, 'AF_UNIX'):
"""Similar to `start_server` but works with UNIX Domain Sockets."""
if loop is None:
loop = events.get_event_loop()
else:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
def factory():
reader = StreamReader(limit=limit, loop=loop)
@ -208,6 +196,9 @@ class FlowControlMixin(protocols.Protocol):
self._drain_waiter = waiter
await waiter
def _get_close_waiter(self, stream):
raise NotImplementedError
class StreamReaderProtocol(FlowControlMixin, protocols.Protocol):
"""Helper class to adapt between Protocol and StreamReader.
@ -218,46 +209,86 @@ class StreamReaderProtocol(FlowControlMixin, protocols.Protocol):
call inappropriate methods of the protocol.)
"""
_source_traceback = None
def __init__(self, stream_reader, client_connected_cb=None, loop=None):
super().__init__(loop=loop)
self._stream_reader = stream_reader
if stream_reader is not None:
self._stream_reader_wr = weakref.ref(stream_reader)
self._source_traceback = stream_reader._source_traceback
else:
self._stream_reader_wr = None
if client_connected_cb is not None:
# This is a stream created by the `create_server()` function.
# Keep a strong reference to the reader until a connection
# is established.
self._strong_reader = stream_reader
self._reject_connection = False
self._stream_writer = None
self._transport = None
self._client_connected_cb = client_connected_cb
self._over_ssl = False
self._closed = self._loop.create_future()
@property
def _stream_reader(self):
if self._stream_reader_wr is None:
return None
return self._stream_reader_wr()
def connection_made(self, transport):
self._stream_reader.set_transport(transport)
if self._reject_connection:
context = {
'message': ('An open stream was garbage collected prior to '
'establishing network connection; '
'call "stream.close()" explicitly.')
}
if self._source_traceback:
context['source_traceback'] = self._source_traceback
self._loop.call_exception_handler(context)
transport.abort()
return
self._transport = transport
reader = self._stream_reader
if reader is not None:
reader.set_transport(transport)
self._over_ssl = transport.get_extra_info('sslcontext') is not None
if self._client_connected_cb is not None:
self._stream_writer = StreamWriter(transport, self,
self._stream_reader,
reader,
self._loop)
res = self._client_connected_cb(self._stream_reader,
res = self._client_connected_cb(reader,
self._stream_writer)
if coroutines.iscoroutine(res):
self._loop.create_task(res)
self._strong_reader = None
def connection_lost(self, exc):
if self._stream_reader is not None:
reader = self._stream_reader
if reader is not None:
if exc is None:
self._stream_reader.feed_eof()
reader.feed_eof()
else:
self._stream_reader.set_exception(exc)
reader.set_exception(exc)
if not self._closed.done():
if exc is None:
self._closed.set_result(None)
else:
self._closed.set_exception(exc)
super().connection_lost(exc)
self._stream_reader = None
self._stream_reader_wr = None
self._stream_writer = None
self._transport = None
def data_received(self, data):
self._stream_reader.feed_data(data)
reader = self._stream_reader
if reader is not None:
reader.feed_data(data)
def eof_received(self):
self._stream_reader.feed_eof()
reader = self._stream_reader
if reader is not None:
reader.feed_eof()
if self._over_ssl:
# Prevent a warning in SSLProtocol.eof_received:
# "returning true from eof_received()
@ -265,6 +296,9 @@ class StreamReaderProtocol(FlowControlMixin, protocols.Protocol):
return False
return True
def _get_close_waiter(self, stream):
return self._closed
def __del__(self):
# Prevent reports about unhandled exceptions.
# Better than self._closed._log_traceback = False hack
@ -290,6 +324,8 @@ class StreamWriter:
assert reader is None or isinstance(reader, StreamReader)
self._reader = reader
self._loop = loop
self._complete_fut = self._loop.create_future()
self._complete_fut.set_result(None)
def __repr__(self):
info = [self.__class__.__name__, f'transport={self._transport!r}']
@ -320,7 +356,7 @@ class StreamWriter:
return self._transport.is_closing()
async def wait_closed(self):
await self._protocol._closed
await self._protocol._get_close_waiter(self)
def get_extra_info(self, name, default=None):
return self._transport.get_extra_info(name, default)
@ -338,18 +374,23 @@ class StreamWriter:
if exc is not None:
raise exc
if self._transport.is_closing():
# Wait for protocol.connection_lost() call
# Raise connection closing error if any,
# ConnectionResetError otherwise
# Yield to the event loop so connection_lost() may be
# called. Without this, _drain_helper() would return
# immediately, and code that calls
# write(...); await drain()
# in a loop would never call connection_lost(), so it
# would not see an error when the socket is closed.
await sleep(0, loop=self._loop)
await sleep(0)
await self._protocol._drain_helper()
class StreamReader:
_source_traceback = None
def __init__(self, limit=_DEFAULT_LIMIT, loop=None):
# The line length limit is a security feature;
# it also doubles as half the buffer limit.
@ -368,6 +409,9 @@ class StreamReader:
self._exception = None
self._transport = None
self._paused = False
if self._loop.get_debug():
self._source_traceback = format_helpers.extract_stack(
sys._getframe(1))
def __repr__(self):
info = ['StreamReader']
@ -494,9 +538,9 @@ class StreamReader:
seplen = len(sep)
try:
line = await self.readuntil(sep)
except IncompleteReadError as e:
except exceptions.IncompleteReadError as e:
return e.partial
except LimitOverrunError as e:
except exceptions.LimitOverrunError as e:
if self._buffer.startswith(sep, e.consumed):
del self._buffer[:e.consumed + seplen]
else:
@ -571,7 +615,7 @@ class StreamReader:
# see upper comment for explanation.
offset = buflen + 1 - seplen
if offset > self._limit:
raise LimitOverrunError(
raise exceptions.LimitOverrunError(
'Separator is not found, and chunk exceed the limit',
offset)
@ -582,13 +626,13 @@ class StreamReader:
if self._eof:
chunk = bytes(self._buffer)
self._buffer.clear()
raise IncompleteReadError(chunk, None)
raise exceptions.IncompleteReadError(chunk, None)
# _wait_for_data() will resume reading if stream was paused.
await self._wait_for_data('readuntil')
if isep > self._limit:
raise LimitOverrunError(
raise exceptions.LimitOverrunError(
'Separator is found, but chunk is longer than limit', isep)
chunk = self._buffer[:isep + seplen]
@ -674,7 +718,7 @@ class StreamReader:
if self._eof:
incomplete = bytes(self._buffer)
self._buffer.clear()
raise IncompleteReadError(incomplete, n)
raise exceptions.IncompleteReadError(incomplete, n)
await self._wait_for_data('readexactly')

View File

@ -1,6 +1,7 @@
__all__ = 'create_subprocess_exec', 'create_subprocess_shell'
import subprocess
import warnings
from . import events
from . import protocols
@ -25,6 +26,7 @@ class SubprocessStreamProtocol(streams.FlowControlMixin,
self._transport = None
self._process_exited = False
self._pipe_fds = []
self._stdin_closed = self._loop.create_future()
def __repr__(self):
info = [self.__class__.__name__]
@ -76,6 +78,10 @@ class SubprocessStreamProtocol(streams.FlowControlMixin,
if pipe is not None:
pipe.close()
self.connection_lost(exc)
if exc is None:
self._stdin_closed.set_result(None)
else:
self._stdin_closed.set_exception(exc)
return
if fd == 1:
reader = self.stdout
@ -102,6 +108,10 @@ class SubprocessStreamProtocol(streams.FlowControlMixin,
self._transport.close()
self._transport = None
def _get_close_waiter(self, stream):
if stream is self.stdin:
return self._stdin_closed
class Process:
def __init__(self, transport, protocol, loop):
@ -183,8 +193,8 @@ class Process:
stderr = self._read_stream(2)
else:
stderr = self._noop()
stdin, stdout, stderr = await tasks.gather(stdin, stdout, stderr,
loop=self._loop)
stdin, stdout, stderr = await tasks._gather(stdin, stdout, stderr,
loop=self._loop)
await self.wait()
return (stdout, stderr)
@ -194,6 +204,13 @@ async def create_subprocess_shell(cmd, stdin=None, stdout=None, stderr=None,
**kwds):
if loop is None:
loop = events.get_event_loop()
else:
warnings.warn("The loop argument is deprecated since Python 3.8 "
"and scheduled for removal in Python 3.10.",
DeprecationWarning,
stacklevel=2
)
protocol_factory = lambda: SubprocessStreamProtocol(limit=limit,
loop=loop)
transport, protocol = await loop.subprocess_shell(
@ -208,6 +225,12 @@ async def create_subprocess_exec(program, *args, stdin=None, stdout=None,
limit=streams._DEFAULT_LIMIT, **kwds):
if loop is None:
loop = events.get_event_loop()
else:
warnings.warn("The loop argument is deprecated since Python 3.8 "
"and scheduled for removal in Python 3.10.",
DeprecationWarning,
stacklevel=2
)
protocol_factory = lambda: SubprocessStreamProtocol(limit=limit,
loop=loop)
transport, protocol = await loop.subprocess_exec(

View File

@ -13,15 +13,23 @@ import concurrent.futures
import contextvars
import functools
import inspect
import itertools
import types
import warnings
import weakref
from types import GenericAlias
from . import base_tasks
from . import coroutines
from . import events
from . import exceptions
from . import futures
from .coroutines import coroutine
from .coroutines import _is_coroutine
# Helper to generate new task names
# This uses itertools.count() instead of a "+= 1" operation because the latter
# is not thread safe. See bpo-11866 for a longer explanation.
_task_name_counter = itertools.count(1).__next__
def current_task(loop=None):
@ -35,7 +43,22 @@ def all_tasks(loop=None):
"""Return a set of all tasks for the loop."""
if loop is None:
loop = events.get_running_loop()
return {t for t in _all_tasks
# Looping over a WeakSet (_all_tasks) isn't safe as it can be updated from another
# thread while we do so. Therefore we cast it to list prior to filtering. The list
# cast itself requires iteration, so we repeat it several times ignoring
# RuntimeErrors (which are not very likely to occur). See issues 34970 and 36607 for
# details.
i = 0
while True:
try:
tasks = list(_all_tasks)
except RuntimeError:
i += 1
if i >= 1000:
raise
else:
break
return {t for t in tasks
if futures._get_loop(t) is loop and not t.done()}
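Callers see no API change; all_tasks() still returns the pending tasks of the loop, just snapshotted safely. Quick sketch:

import asyncio

async def main():
    asyncio.create_task(asyncio.sleep(10))
    pending = asyncio.all_tasks()      # snapshot taken via the retry loop above
    print(len(pending), asyncio.current_task() in pending)   # 2 True

asyncio.run(main())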
@ -45,7 +68,32 @@ def _all_tasks_compat(loop=None):
# method.
if loop is None:
loop = events.get_event_loop()
return {t for t in _all_tasks if futures._get_loop(t) is loop}
# Looping over a WeakSet (_all_tasks) isn't safe as it can be updated from another
# thread while we do so. Therefore we cast it to list prior to filtering. The list
# cast itself requires iteration, so we repeat it several times ignoring
# RuntimeErrors (which are not very likely to occur). See issues 34970 and 36607 for
# details.
i = 0
while True:
try:
tasks = list(_all_tasks)
except RuntimeError:
i += 1
if i >= 1000:
raise
else:
break
return {t for t in tasks if futures._get_loop(t) is loop}
def _set_task_name(task, name):
if name is not None:
try:
set_name = task.set_name
except AttributeError:
pass
else:
set_name(name)
class Task(futures._PyFuture): # Inherit Python Task implementation
@ -66,35 +114,7 @@ class Task(futures._PyFuture): # Inherit Python Task implementation
# status is still pending
_log_destroy_pending = True
@classmethod
def current_task(cls, loop=None):
"""Return the currently running task in an event loop or None.
By default the current task for the current event loop is returned.
None is returned when called not in the context of a Task.
"""
warnings.warn("Task.current_task() is deprecated, "
"use asyncio.current_task() instead",
PendingDeprecationWarning,
stacklevel=2)
if loop is None:
loop = events.get_event_loop()
return current_task(loop)
@classmethod
def all_tasks(cls, loop=None):
"""Return a set of all tasks for an event loop.
By default all tasks for the current event loop are returned.
"""
warnings.warn("Task.all_tasks() is deprecated, "
"use asyncio.all_tasks() instead",
PendingDeprecationWarning,
stacklevel=2)
return _all_tasks_compat(loop)
def __init__(self, coro, *, loop=None):
def __init__(self, coro, *, loop=None, name=None):
super().__init__(loop=loop)
if self._source_traceback:
del self._source_traceback[-1]
@ -104,6 +124,11 @@ class Task(futures._PyFuture): # Inherit Python Task implementation
self._log_destroy_pending = False
raise TypeError(f"a coroutine was expected, got {coro!r}")
if name is None:
self._name = f'Task-{_task_name_counter()}'
else:
self._name = str(name)
self._must_cancel = False
self._fut_waiter = None
self._coro = coro
@ -123,9 +148,20 @@ class Task(futures._PyFuture): # Inherit Python Task implementation
self._loop.call_exception_handler(context)
super().__del__()
__class_getitem__ = classmethod(GenericAlias)
def _repr_info(self):
return base_tasks._task_repr_info(self)
def get_coro(self):
return self._coro
def get_name(self):
return self._name
def set_name(self, value):
self._name = str(value)
def set_result(self, result):
raise RuntimeError('Task does not support set_result operation')
@ -166,7 +202,7 @@ class Task(futures._PyFuture): # Inherit Python Task implementation
"""
return base_tasks._task_print_stack(self, limit, file)
def cancel(self):
def cancel(self, msg=None):
"""Request that this task cancel itself.
This arranges for a CancelledError to be thrown into the
@ -190,22 +226,23 @@ class Task(futures._PyFuture): # Inherit Python Task implementation
if self.done():
return False
if self._fut_waiter is not None:
if self._fut_waiter.cancel():
if self._fut_waiter.cancel(msg=msg):
# Leave self._fut_waiter; it may be a Task that
# catches and ignores the cancellation so we may have
# to cancel it again later.
return True
# It must be the case that self.__step is already scheduled.
self._must_cancel = True
self._cancel_message = msg
return True
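A minimal sketch of what the new `msg` plumbing enables, assuming the Python 3.9 behavior introduced by this hunk (the coroutine names are illustrative):

import asyncio

async def worker():
    try:
        await asyncio.sleep(3600)
    except asyncio.CancelledError as exc:
        print('cancelled with:', exc.args)   # ('shutting down',)
        raise

async def main():
    task = asyncio.ensure_future(worker())
    await asyncio.sleep(0)               # let worker() block on sleep()
    task.cancel(msg='shutting down')     # message travels with CancelledError
    try:
        await task
    except asyncio.CancelledError:
        pass

asyncio.run(main())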
def __step(self, exc=None):
if self.done():
raise futures.InvalidStateError(
raise exceptions.InvalidStateError(
f'_step(): already done: {self!r}, {exc!r}')
if self._must_cancel:
if not isinstance(exc, futures.CancelledError):
exc = futures.CancelledError()
if not isinstance(exc, exceptions.CancelledError):
exc = self._make_cancelled_error()
self._must_cancel = False
coro = self._coro
self._fut_waiter = None
@ -223,16 +260,18 @@ class Task(futures._PyFuture): # Inherit Python Task implementation
if self._must_cancel:
# Task is cancelled right before coro stops.
self._must_cancel = False
super().set_exception(futures.CancelledError())
super().cancel(msg=self._cancel_message)
else:
super().set_result(exc.value)
except futures.CancelledError:
except exceptions.CancelledError as exc:
# Save the original exception so we can chain it later.
self._cancelled_exc = exc
super().cancel() # I.e., Future.cancel(self).
except Exception as exc:
super().set_exception(exc)
except BaseException as exc:
except (KeyboardInterrupt, SystemExit) as exc:
super().set_exception(exc)
raise
except BaseException as exc:
super().set_exception(exc)
else:
blocking = getattr(result, '_asyncio_future_blocking', None)
if blocking is not None:
@ -255,7 +294,8 @@ class Task(futures._PyFuture): # Inherit Python Task implementation
self.__wakeup, context=self._context)
self._fut_waiter = result
if self._must_cancel:
if self._fut_waiter.cancel():
if self._fut_waiter.cancel(
msg=self._cancel_message):
self._must_cancel = False
else:
new_exc = RuntimeError(
@ -286,7 +326,7 @@ class Task(futures._PyFuture): # Inherit Python Task implementation
def __wakeup(self, future):
try:
future.result()
except Exception as exc:
except BaseException as exc:
# This may also be a cancellation.
self.__step(exc)
else:
@ -312,13 +352,15 @@ else:
Task = _CTask = _asyncio.Task
def create_task(coro):
def create_task(coro, *, name=None):
"""Schedule the execution of a coroutine object in a spawn task.
Return a Task object.
"""
loop = events.get_running_loop()
return loop.create_task(coro)
task = loop.create_task(coro)
_set_task_name(task, name)
return task
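Usage sketch for the new `name` parameter (available since Python 3.8):

import asyncio

async def main():
    task = asyncio.create_task(asyncio.sleep(0.1), name='heartbeat')
    print(task.get_name())       # 'heartbeat'
    task.set_name('heartbeat-1')
    await task

asyncio.run(main())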
# wait() and as_completed() similar to those in PEP 3148.
@ -331,7 +373,7 @@ ALL_COMPLETED = concurrent.futures.ALL_COMPLETED
async def wait(fs, *, loop=None, timeout=None, return_when=ALL_COMPLETED):
"""Wait for the Futures and coroutines given by fs to complete.
The sequence futures must not be empty.
The fs iterable must not be empty.
Coroutines will be wrapped in Tasks.
@ -352,9 +394,21 @@ async def wait(fs, *, loop=None, timeout=None, return_when=ALL_COMPLETED):
raise ValueError(f'Invalid return_when value: {return_when}')
if loop is None:
loop = events.get_event_loop()
loop = events.get_running_loop()
else:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
fs = {ensure_future(f, loop=loop) for f in set(fs)}
fs = set(fs)
if any(coroutines.iscoroutine(f) for f in fs):
warnings.warn("The explicit passing of coroutine objects to "
"asyncio.wait() is deprecated since Python 3.8, and "
"scheduled for removal in Python 3.11.",
DeprecationWarning, stacklevel=2)
fs = {ensure_future(f, loop=loop) for f in fs}
return await _wait(fs, timeout, return_when, loop)
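Since bare coroutines are now deprecated as wait() arguments, callers are expected to wrap them up front; a minimal sketch:

import asyncio

async def job(n):
    await asyncio.sleep(0.01 * n)
    return n

async def main():
    tasks = {asyncio.create_task(job(n)) for n in range(3)}   # wrap explicitly
    done, pending = await asyncio.wait(tasks, return_when=asyncio.ALL_COMPLETED)
    print(sorted(t.result() for t in done))   # [0, 1, 2]

asyncio.run(main())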
@ -378,7 +432,11 @@ async def wait_for(fut, timeout, *, loop=None):
This function is a coroutine.
"""
if loop is None:
loop = events.get_event_loop()
loop = events.get_running_loop()
else:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
if timeout is None:
return await fut
@ -389,8 +447,11 @@ async def wait_for(fut, timeout, *, loop=None):
if fut.done():
return fut.result()
fut.cancel()
raise futures.TimeoutError()
await _cancel_and_wait(fut, loop=loop)
try:
return fut.result()
except exceptions.CancelledError as exc:
raise exceptions.TimeoutError() from exc
waiter = loop.create_future()
timeout_handle = loop.call_later(timeout, _release_waiter, waiter)
@ -403,10 +464,16 @@ async def wait_for(fut, timeout, *, loop=None):
# wait until the future completes or the timeout
try:
await waiter
except futures.CancelledError:
fut.remove_done_callback(cb)
fut.cancel()
raise
except exceptions.CancelledError:
if fut.done():
return fut.result()
else:
fut.remove_done_callback(cb)
# We must ensure that the task is not running
# after wait_for() returns.
# See https://bugs.python.org/issue32751
await _cancel_and_wait(fut, loop=loop)
raise
if fut.done():
return fut.result()
@ -416,7 +483,13 @@ async def wait_for(fut, timeout, *, loop=None):
# after wait_for() returns.
# See https://bugs.python.org/issue32751
await _cancel_and_wait(fut, loop=loop)
raise futures.TimeoutError()
# In case task cancellation failed with some
# exception, we should re-raise it
# See https://bugs.python.org/issue40607
try:
return fut.result()
except exceptions.CancelledError as exc:
raise exceptions.TimeoutError() from exc
finally:
timeout_handle.cancel()
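With the change above, the TimeoutError raised by wait_for() chains the original CancelledError; a usage sketch:

import asyncio

async def slow():
    await asyncio.sleep(10)

async def main():
    try:
        await asyncio.wait_for(slow(), timeout=0.1)
    except asyncio.TimeoutError as exc:
        # The cancellation is chained via __cause__ (see the hunk above).
        print('timed out; cause:', type(exc.__cause__).__name__)

asyncio.run(main())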
@ -453,10 +526,11 @@ async def _wait(fs, timeout, return_when, loop):
finally:
if timeout_handle is not None:
timeout_handle.cancel()
for f in fs:
f.remove_done_callback(_on_completion)
done, pending = set(), set()
for f in fs:
f.remove_done_callback(_on_completion)
if f.done():
done.add(f)
else:
@ -500,11 +574,19 @@ def as_completed(fs, *, loop=None, timeout=None):
Note: The futures 'f' are not necessarily members of fs.
"""
if futures.isfuture(fs) or coroutines.iscoroutine(fs):
raise TypeError(f"expect a list of futures, not {type(fs).__name__}")
loop = loop if loop is not None else events.get_event_loop()
todo = {ensure_future(f, loop=loop) for f in set(fs)}
raise TypeError(f"expect an iterable of futures, not {type(fs).__name__}")
if loop is not None:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
from .queues import Queue # Import here to avoid circular import problem.
done = Queue(loop=loop)
if loop is None:
loop = events.get_event_loop()
todo = {ensure_future(f, loop=loop) for f in set(fs)}
timeout_handle = None
def _on_timeout():
@ -525,7 +607,7 @@ def as_completed(fs, *, loop=None, timeout=None):
f = await done.get()
if f is None:
# Dummy value from _on_timeout().
raise futures.TimeoutError
raise exceptions.TimeoutError
return f.result() # May raise f.exception().
for f in todo:
@ -550,12 +632,18 @@ def __sleep0():
async def sleep(delay, result=None, *, loop=None):
"""Coroutine that completes after a given time (in seconds)."""
if loop is not None:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
if delay <= 0:
await __sleep0()
return result
if loop is None:
loop = events.get_event_loop()
loop = events.get_running_loop()
future = loop.create_future()
h = loop.call_later(delay,
futures._set_result_unless_cancelled,
@ -580,7 +668,8 @@ def ensure_future(coro_or_future, *, loop=None):
return task
elif futures.isfuture(coro_or_future):
if loop is not None and loop is not futures._get_loop(coro_or_future):
raise ValueError('loop argument must agree with Future')
raise ValueError('The future belongs to a different loop than '
'the one specified as the loop argument')
return coro_or_future
elif inspect.isawaitable(coro_or_future):
return ensure_future(_wrap_awaitable(coro_or_future), loop=loop)
@ -589,7 +678,7 @@ def ensure_future(coro_or_future, *, loop=None):
'required')
@coroutine
@types.coroutine
def _wrap_awaitable(awaitable):
"""Helper for asyncio.ensure_future().
@ -598,6 +687,8 @@ def _wrap_awaitable(awaitable):
"""
return (yield from awaitable.__await__())
_wrap_awaitable._is_coroutine = _is_coroutine
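_wrap_awaitable() is what lets ensure_future() accept plain awaitables; a minimal sketch (the class name `Once` is illustrative):

import asyncio

class Once:
    # A plain awaitable: not a coroutine or Future, only __await__().
    def __await__(self):
        yield from asyncio.sleep(0).__await__()
        return 42

async def main():
    fut = asyncio.ensure_future(Once())   # wrapped via _wrap_awaitable()
    print(await fut)                      # 42

asyncio.run(main())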
class _GatheringFuture(futures.Future):
"""Helper for gather().
@ -612,12 +703,12 @@ class _GatheringFuture(futures.Future):
self._children = children
self._cancel_requested = False
def cancel(self):
def cancel(self, msg=None):
if self.done():
return False
ret = False
for child in self._children:
if child.cancel():
if child.cancel(msg=msg):
ret = True
if ret:
# If any child tasks were actually cancelled, we should
@ -649,7 +740,23 @@ def gather(*coros_or_futures, loop=None, return_exceptions=False):
the outer Future is *not* cancelled in this case. (This is to
prevent the cancellation of one child to cause other children to
be cancelled.)
If *return_exceptions* is False, cancelling gather() after it
has been marked done won't cancel any submitted awaitables.
For instance, gather can be marked done after propagating an
exception to the caller; therefore, calling ``gather.cancel()``
after catching an exception (raised by one of the awaitables) from
gather won't cancel any other awaitables.
"""
if loop is not None:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
return _gather(*coros_or_futures, loop=loop, return_exceptions=return_exceptions)
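Usage sketch for the return_exceptions behavior documented above:

import asyncio

async def ok():
    return 'ok'

async def boom():
    raise ValueError('boom')

async def main():
    # Exceptions are collected into the result list instead of propagating.
    results = await asyncio.gather(ok(), boom(), return_exceptions=True)
    print(results)   # ['ok', ValueError('boom')]

asyncio.run(main())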
def _gather(*coros_or_futures, loop=None, return_exceptions=False):
if not coros_or_futures:
if loop is None:
loop = events.get_event_loop()
@ -661,7 +768,7 @@ def gather(*coros_or_futures, loop=None, return_exceptions=False):
nonlocal nfinished
nfinished += 1
if outer.done():
if outer is None or outer.done():
if not fut.cancelled():
# Mark exception retrieved.
fut.exception()
@ -672,7 +779,7 @@ def gather(*coros_or_futures, loop=None, return_exceptions=False):
# Check if 'fut' is cancelled first, as
# 'fut.exception()' will *raise* a CancelledError
# instead of returning it.
exc = futures.CancelledError()
exc = fut._make_cancelled_error()
outer.set_exception(exc)
return
else:
@ -688,10 +795,15 @@ def gather(*coros_or_futures, loop=None, return_exceptions=False):
for fut in children:
if fut.cancelled():
# Check if 'fut' is cancelled first, as
# 'fut.exception()' will *raise* a CancelledError
# instead of returning it.
res = futures.CancelledError()
# Check if 'fut' is cancelled first, as 'fut.exception()'
# will *raise* a CancelledError instead of returning it.
# Also, since we're adding the exception return value
# to 'results' instead of raising it, don't bother
# setting __context__. This also lets us preserve
# calling '_make_cancelled_error()' at most once.
res = exceptions.CancelledError(
'' if fut._cancel_message is None else
fut._cancel_message)
else:
res = fut.exception()
if res is None:
@ -702,7 +814,8 @@ def gather(*coros_or_futures, loop=None, return_exceptions=False):
# If gather is being cancelled we must propagate the
# cancellation regardless of *return_exceptions* argument.
# See issue 32684.
outer.set_exception(futures.CancelledError())
exc = fut._make_cancelled_error()
outer.set_exception(exc)
else:
outer.set_result(results)
@ -710,6 +823,7 @@ def gather(*coros_or_futures, loop=None, return_exceptions=False):
children = []
nfuts = 0
nfinished = 0
outer = None # bpo-46672
for arg in coros_or_futures:
if arg not in arg_to_fut:
fut = ensure_future(arg, loop=loop)
@ -762,6 +876,10 @@ def shield(arg, *, loop=None):
except CancelledError:
res = None
"""
if loop is not None:
warnings.warn("The loop argument is deprecated since Python 3.8, "
"and scheduled for removal in Python 3.10.",
DeprecationWarning, stacklevel=2)
inner = ensure_future(arg, loop=loop)
if inner.done():
# Shortcut.
@ -769,7 +887,7 @@ def shield(arg, *, loop=None):
loop = futures._get_loop(inner)
outer = loop.create_future()
def _done_callback(inner):
def _inner_done_callback(inner):
if outer.cancelled():
if not inner.cancelled():
# Mark inner's result as retrieved.
@ -785,7 +903,13 @@ def shield(arg, *, loop=None):
else:
outer.set_result(inner.result())
inner.add_done_callback(_done_callback)
def _outer_done_callback(outer):
if not inner.done():
inner.remove_done_callback(_inner_done_callback)
inner.add_done_callback(_inner_done_callback)
outer.add_done_callback(_outer_done_callback)
return outer
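shield() usage sketch: cancelling the outer future leaves the inner task running (the names are illustrative):

import asyncio

async def commit():
    await asyncio.sleep(0.1)
    return 'committed'

async def main():
    inner = asyncio.ensure_future(commit())
    try:
        await asyncio.wait_for(asyncio.shield(inner), timeout=0.01)
    except asyncio.TimeoutError:
        print(await inner)   # 'committed' -- inner survived the timeout

asyncio.run(main())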
@ -801,7 +925,9 @@ def run_coroutine_threadsafe(coro, loop):
def callback():
try:
futures._chain_future(ensure_future(coro, loop=loop), future)
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
if future.set_running_or_notify_cancel():
future.set_exception(exc)
raise
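A minimal usage sketch for run_coroutine_threadsafe(), submitting work to a loop owned by another thread:

import asyncio
import threading

async def add(a, b):
    return a + b

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

fut = asyncio.run_coroutine_threadsafe(add(2, 3), loop)   # concurrent.futures.Future
print(fut.result(timeout=5))    # 5
loop.call_soon_threadsafe(loop.stop)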

View File

@ -9,6 +9,8 @@ __all__ = (
class BaseTransport:
"""Base class for transports."""
__slots__ = ('_extra',)
def __init__(self, extra=None):
if extra is None:
extra = {}
@ -27,8 +29,8 @@ class BaseTransport:
Buffered data will be flushed asynchronously. No more data
will be received. After all buffered data is flushed, the
protocol's connection_lost() method will (eventually) called
with None as its argument.
protocol's connection_lost() method will (eventually) be
called with None as its argument.
"""
raise NotImplementedError
@ -44,6 +46,8 @@ class BaseTransport:
class ReadTransport(BaseTransport):
"""Interface for read-only transports."""
__slots__ = ()
def is_reading(self):
"""Return True if the transport is receiving."""
raise NotImplementedError
@ -68,6 +72,8 @@ class ReadTransport(BaseTransport):
class WriteTransport(BaseTransport):
"""Interface for write-only transports."""
__slots__ = ()
def set_write_buffer_limits(self, high=None, low=None):
"""Set the high- and low-water limits for write flow control.
@ -93,6 +99,12 @@ class WriteTransport(BaseTransport):
"""Return the current size of the write buffer."""
raise NotImplementedError
def get_write_buffer_limits(self):
"""Get the high and low watermarks for write flow control.
Return a tuple (low, high) where low and high are
positive number of bytes."""
raise NotImplementedError
def write(self, data):
"""Write some data bytes to the transport.
@ -154,10 +166,14 @@ class Transport(ReadTransport, WriteTransport):
except writelines(), which calls write() in a loop.
"""
__slots__ = ()
class DatagramTransport(BaseTransport):
"""Interface for datagram (UDP) transports."""
__slots__ = ()
def sendto(self, data, addr=None):
"""Send data to the transport.
@ -180,6 +196,8 @@ class DatagramTransport(BaseTransport):
class SubprocessTransport(BaseTransport):
__slots__ = ()
def get_pid(self):
"""Get subprocess id."""
raise NotImplementedError
@ -247,6 +265,8 @@ class _FlowControlMixin(Transport):
resume_writing() may be called.
"""
__slots__ = ('_loop', '_protocol_paused', '_high_water', '_low_water')
def __init__(self, extra=None, loop=None):
super().__init__(extra)
assert loop is not None
@ -262,7 +282,9 @@ class _FlowControlMixin(Transport):
self._protocol_paused = True
try:
self._protocol.pause_writing()
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._loop.call_exception_handler({
'message': 'protocol.pause_writing() failed',
'exception': exc,
@ -276,7 +298,9 @@ class _FlowControlMixin(Transport):
self._protocol_paused = False
try:
self._protocol.resume_writing()
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._loop.call_exception_handler({
'message': 'protocol.resume_writing() failed',
'exception': exc,
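A protocol cooperating with this flow-control machinery implements pause_writing()/resume_writing(); a minimal sketch (the class name `Sender` is illustrative):

import asyncio

class Sender(asyncio.Protocol):
    def connection_made(self, transport):
        self.transport = transport
        self.paused = False
        transport.set_write_buffer_limits(high=64 * 1024)

    def pause_writing(self):
        self.paused = True    # buffer above high-water mark: stop producing

    def resume_writing(self):
        self.paused = False   # buffer drained below low-water mark: write again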

View File

@ -2,6 +2,7 @@
import errno
import io
import itertools
import os
import selectors
import signal
@ -12,12 +13,12 @@ import sys
import threading
import warnings
from . import base_events
from . import base_subprocess
from . import constants
from . import coroutines
from . import events
from . import exceptions
from . import futures
from . import selector_events
from . import tasks
@ -28,7 +29,9 @@ from .log import logger
__all__ = (
'SelectorEventLoop',
'AbstractChildWatcher', 'SafeChildWatcher',
'FastChildWatcher', 'DefaultEventLoopPolicy',
'FastChildWatcher', 'PidfdChildWatcher',
'MultiLoopChildWatcher', 'ThreadedChildWatcher',
'DefaultEventLoopPolicy',
)
@ -98,7 +101,7 @@ class _UnixSelectorEventLoop(selector_events.BaseSelectorEventLoop):
try:
# Register a dummy signal handler to ask Python to write the signal
# number in the wakup file descriptor. _process_self_data() will
# number in the wakeup file descriptor. _process_self_data() will
# read signal numbers from this file descriptor to handle signals.
signal.signal(sig, _sighandler_noop)
@ -168,8 +171,8 @@ class _UnixSelectorEventLoop(selector_events.BaseSelectorEventLoop):
if not isinstance(sig, int):
raise TypeError(f'sig must be an int, not {sig!r}')
if not (1 <= sig < signal.NSIG):
raise ValueError(f'sig {sig} out of range(1, {signal.NSIG})')
if sig not in signal.valid_signals():
raise ValueError(f'invalid signal number {sig}')
def _make_read_pipe_transport(self, pipe, protocol, waiter=None,
extra=None):
@ -183,6 +186,13 @@ class _UnixSelectorEventLoop(selector_events.BaseSelectorEventLoop):
stdin, stdout, stderr, bufsize,
extra=None, **kwargs):
with events.get_child_watcher() as watcher:
if not watcher.is_active():
# Check early.
# Raising exception before process creation
# prevents subprocess execution if the watcher
# is not ready to handle it.
raise RuntimeError("asyncio.get_child_watcher() is not activated, "
"subprocess support is not installed.")
waiter = self.create_future()
transp = _UnixSubprocessTransport(self, protocol, args, shell,
stdin, stdout, stderr, bufsize,
@ -193,7 +203,9 @@ class _UnixSelectorEventLoop(selector_events.BaseSelectorEventLoop):
self._child_watcher_callback, transp)
try:
await waiter
except Exception:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException:
transp.close()
await transp._wait()
raise
@ -311,24 +323,24 @@ class _UnixSelectorEventLoop(selector_events.BaseSelectorEventLoop):
server._start_serving()
# Skip one loop iteration so that all 'loop.add_reader'
# go through.
await tasks.sleep(0, loop=self)
await tasks.sleep(0)
return server
async def _sock_sendfile_native(self, sock, file, offset, count):
try:
os.sendfile
except AttributeError as exc:
raise events.SendfileNotAvailableError(
except AttributeError:
raise exceptions.SendfileNotAvailableError(
"os.sendfile() is not available")
try:
fileno = file.fileno()
except (AttributeError, io.UnsupportedOperation) as err:
raise events.SendfileNotAvailableError("not a regular file")
raise exceptions.SendfileNotAvailableError("not a regular file")
try:
fsize = os.fstat(fileno).st_size
except OSError as err:
raise events.SendfileNotAvailableError("not a regular file")
except OSError:
raise exceptions.SendfileNotAvailableError("not a regular file")
blocksize = count if count else fsize
if not blocksize:
return 0 # empty file
@ -382,14 +394,16 @@ class _UnixSelectorEventLoop(selector_events.BaseSelectorEventLoop):
# one being 'file' is not a regular mmap(2)-like
# file, in which case we'll fall back on using
# plain send().
err = events.SendfileNotAvailableError(
err = exceptions.SendfileNotAvailableError(
"os.sendfile call failed")
self._sock_sendfile_update_filepos(fileno, offset, total_sent)
fut.set_exception(err)
else:
self._sock_sendfile_update_filepos(fileno, offset, total_sent)
fut.set_exception(exc)
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._sock_sendfile_update_filepos(fileno, offset, total_sent)
fut.set_exception(exc)
else:
@ -431,6 +445,7 @@ class _UnixReadPipeTransport(transports.ReadTransport):
self._fileno = pipe.fileno()
self._protocol = protocol
self._closing = False
self._paused = False
mode = os.fstat(self._fileno).st_mode
if not (stat.S_ISFIFO(mode) or
@ -492,10 +507,20 @@ class _UnixReadPipeTransport(transports.ReadTransport):
self._loop.call_soon(self._call_connection_lost, None)
def pause_reading(self):
if self._closing or self._paused:
return
self._paused = True
self._loop._remove_reader(self._fileno)
if self._loop.get_debug():
logger.debug("%r pauses reading", self)
def resume_reading(self):
if self._closing or not self._paused:
return
self._paused = False
self._loop._add_reader(self._fileno, self._read_ready)
if self._loop.get_debug():
logger.debug("%r resumes reading", self)
def set_protocol(self, protocol):
self._protocol = protocol
@ -510,10 +535,9 @@ class _UnixReadPipeTransport(transports.ReadTransport):
if not self._closing:
self._close(None)
def __del__(self):
def __del__(self, _warn=warnings.warn):
if self._pipe is not None:
warnings.warn(f"unclosed transport {self!r}", ResourceWarning,
source=self)
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
self._pipe.close()
def _fatal_error(self, exc, message='Fatal error on pipe transport'):
@ -641,7 +665,9 @@ class _UnixWritePipeTransport(transports._FlowControlMixin,
n = os.write(self._fileno, data)
except (BlockingIOError, InterruptedError):
n = 0
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._conn_lost += 1
self._fatal_error(exc, 'Fatal write error on pipe transport')
return
@ -661,7 +687,9 @@ class _UnixWritePipeTransport(transports._FlowControlMixin,
n = os.write(self._fileno, self._buffer)
except (BlockingIOError, InterruptedError):
pass
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
self._buffer.clear()
self._conn_lost += 1
# Remove writer here; _fatal_error() doesn't do it.
@ -706,10 +734,9 @@ class _UnixWritePipeTransport(transports._FlowControlMixin,
# write_eof is all what we needed to close the write pipe
self.write_eof()
def __del__(self):
def __del__(self, _warn=warnings.warn):
if self._pipe is not None:
warnings.warn(f"unclosed transport {self!r}", ResourceWarning,
source=self)
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
self._pipe.close()
def abort(self):
@ -717,7 +744,7 @@ class _UnixWritePipeTransport(transports._FlowControlMixin,
def _fatal_error(self, exc, message='Fatal error on pipe transport'):
# should be called by exception handler only
if isinstance(exc, base_events._FATAL_ERROR_IGNORE):
if isinstance(exc, OSError):
if self._loop.get_debug():
logger.debug("%r: %s", self, message, exc_info=True)
else:
@ -758,12 +785,18 @@ class _UnixSubprocessTransport(base_subprocess.BaseSubprocessTransport):
# other end). Notably this is needed on AIX, and works
# just fine on other platforms.
stdin, stdin_w = socket.socketpair()
self._proc = subprocess.Popen(
args, shell=shell, stdin=stdin, stdout=stdout, stderr=stderr,
universal_newlines=False, bufsize=bufsize, **kwargs)
if stdin_w is not None:
stdin.close()
self._proc.stdin = open(stdin_w.detach(), 'wb', buffering=bufsize)
try:
self._proc = subprocess.Popen(
args, shell=shell, stdin=stdin, stdout=stdout, stderr=stderr,
universal_newlines=False, bufsize=bufsize, **kwargs)
if stdin_w is not None:
stdin.close()
self._proc.stdin = open(stdin_w.detach(), 'wb', buffering=bufsize)
stdin_w = None
finally:
if stdin_w is not None:
stdin.close()
stdin_w.close()
class AbstractChildWatcher:
@ -825,6 +858,15 @@ class AbstractChildWatcher:
"""
raise NotImplementedError()
def is_active(self):
"""Return ``True`` if the watcher is active and is used by the event loop.
Return True if the watcher is installed and ready to handle process exit
notifications.
"""
raise NotImplementedError()
def __enter__(self):
"""Enter the watcher's context and allow starting new processes
@ -836,6 +878,98 @@ class AbstractChildWatcher:
raise NotImplementedError()
class PidfdChildWatcher(AbstractChildWatcher):
"""Child watcher implementation using Linux's pid file descriptors.
This child watcher polls process file descriptors (pidfds) to await child
process termination. In some respects, PidfdChildWatcher is a "Goldilocks"
child watcher implementation. It doesn't require signals or threads, doesn't
interfere with any processes launched outside the event loop, and scales
linearly with the number of subprocesses launched by the event loop. The
main disadvantage is that pidfds are specific to Linux, and only work on
recent (5.3+) kernels.
"""
def __init__(self):
self._loop = None
self._callbacks = {}
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, exc_traceback):
pass
def is_active(self):
return self._loop is not None and self._loop.is_running()
def close(self):
self.attach_loop(None)
def attach_loop(self, loop):
if self._loop is not None and loop is None and self._callbacks:
warnings.warn(
'A loop is being detached '
'from a child watcher with pending handlers',
RuntimeWarning)
for pidfd, _, _ in self._callbacks.values():
self._loop._remove_reader(pidfd)
os.close(pidfd)
self._callbacks.clear()
self._loop = loop
def add_child_handler(self, pid, callback, *args):
existing = self._callbacks.get(pid)
if existing is not None:
self._callbacks[pid] = existing[0], callback, args
else:
pidfd = os.pidfd_open(pid)
self._loop._add_reader(pidfd, self._do_wait, pid)
self._callbacks[pid] = pidfd, callback, args
def _do_wait(self, pid):
pidfd, callback, args = self._callbacks.pop(pid)
self._loop._remove_reader(pidfd)
try:
_, status = os.waitpid(pid, 0)
except ChildProcessError:
# The child process is already reaped
# (may happen if waitpid() is called elsewhere).
returncode = 255
logger.warning(
"child process pid %d exit status already read: "
" will report returncode 255",
pid)
else:
returncode = _compute_returncode(status)
os.close(pidfd)
callback(pid, returncode, *args)
def remove_child_handler(self, pid):
try:
pidfd, _, _ = self._callbacks.pop(pid)
except KeyError:
return False
self._loop._remove_reader(pidfd)
os.close(pidfd)
return True
def _compute_returncode(status):
if os.WIFSIGNALED(status):
# The child process died because of a signal.
return -os.WTERMSIG(status)
elif os.WIFEXITED(status):
# The child process exited (e.g sys.exit()).
return os.WEXITSTATUS(status)
else:
# The child exited, but we don't understand its status.
# This shouldn't happen, but if it does, let's just
# return that status; perhaps that helps debug it.
return status
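A POSIX-only sketch of the status decoding above (the fork/kill pair is purely illustrative):

import os
import signal

pid = os.fork()
if pid == 0:              # child: sleep until killed
    signal.pause()
    os._exit(0)
os.kill(pid, signal.SIGTERM)
_, status = os.waitpid(pid, 0)
if os.WIFSIGNALED(status):
    print(-os.WTERMSIG(status))    # -15, as _compute_returncode() returns
elif os.WIFEXITED(status):
    print(os.WEXITSTATUS(status))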
class BaseChildWatcher(AbstractChildWatcher):
def __init__(self):
@ -845,6 +979,9 @@ class BaseChildWatcher(AbstractChildWatcher):
def close(self):
self.attach_loop(None)
def is_active(self):
return self._loop is not None and self._loop.is_running()
def _do_waitpid(self, expected_pid):
raise NotImplementedError()
@ -874,7 +1011,9 @@ class BaseChildWatcher(AbstractChildWatcher):
def _sig_chld(self):
try:
self._do_waitpid_all()
except Exception as exc:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException as exc:
# self._loop should always be available here
# as '_sig_chld' is added as a signal handler
# in 'attach_loop'
@ -883,19 +1022,6 @@ class BaseChildWatcher(AbstractChildWatcher):
'exception': exc,
})
def _compute_returncode(self, status):
if os.WIFSIGNALED(status):
# The child process died because of a signal.
return -os.WTERMSIG(status)
elif os.WIFEXITED(status):
# The child process exited (e.g sys.exit()).
return os.WEXITSTATUS(status)
else:
# The child exited, but we don't understand its status.
# This shouldn't happen, but if it does, let's just
# return that status; perhaps that helps debug it.
return status
class SafeChildWatcher(BaseChildWatcher):
"""'Safe' child watcher implementation.
@ -919,11 +1045,6 @@ class SafeChildWatcher(BaseChildWatcher):
pass
def add_child_handler(self, pid, callback, *args):
if self._loop is None:
raise RuntimeError(
"Cannot add child handler, "
"the child watcher does not have a loop attached")
self._callbacks[pid] = (callback, args)
# Prevent a race condition in case the child is already terminated.
@ -959,7 +1080,7 @@ class SafeChildWatcher(BaseChildWatcher):
# The child process is still alive.
return
returncode = self._compute_returncode(status)
returncode = _compute_returncode(status)
if self._loop.get_debug():
logger.debug('process %s exited with returncode %s',
expected_pid, returncode)
@ -1020,11 +1141,6 @@ class FastChildWatcher(BaseChildWatcher):
def add_child_handler(self, pid, callback, *args):
assert self._forks, "Must use the context manager"
if self._loop is None:
raise RuntimeError(
"Cannot add child handler, "
"the child watcher does not have a loop attached")
with self._lock:
try:
returncode = self._zombies.pop(pid)
@ -1057,7 +1173,7 @@ class FastChildWatcher(BaseChildWatcher):
# A child process is still alive.
return
returncode = self._compute_returncode(status)
returncode = _compute_returncode(status)
with self._lock:
try:
@ -1086,6 +1202,220 @@ class FastChildWatcher(BaseChildWatcher):
callback(pid, returncode, *args)
class MultiLoopChildWatcher(AbstractChildWatcher):
"""A watcher that doesn't require running loop in the main thread.
This implementation registers a SIGCHLD signal handler on
instantiation (which may conflict with other code that installs
its own handler for this signal).
The solution is safe, but it has significant overhead when
handling a large number of processes (*O(n)* each time a
SIGCHLD is received).
"""
# Implementation note:
# The class keeps compatibility with AbstractChildWatcher ABC
# To achieve this it has empty attach_loop() method
# and doesn't accept explicit loop argument
# for add_child_handler()/remove_child_handler()
# but retrieves the current loop by get_running_loop()
def __init__(self):
self._callbacks = {}
self._saved_sighandler = None
def is_active(self):
return self._saved_sighandler is not None
def close(self):
self._callbacks.clear()
if self._saved_sighandler is None:
return
handler = signal.getsignal(signal.SIGCHLD)
if handler != self._sig_chld:
logger.warning("SIGCHLD handler was changed by outside code")
else:
signal.signal(signal.SIGCHLD, self._saved_sighandler)
self._saved_sighandler = None
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
pass
def add_child_handler(self, pid, callback, *args):
loop = events.get_running_loop()
self._callbacks[pid] = (loop, callback, args)
# Prevent a race condition in case the child is already terminated.
self._do_waitpid(pid)
def remove_child_handler(self, pid):
try:
del self._callbacks[pid]
return True
except KeyError:
return False
def attach_loop(self, loop):
# Don't save the loop; just initialize itself on the first call.
# The reason to do it here is that attach_loop() is called from
# the unix policy only for the main thread, and the main thread
# is required to subscribe to the SIGCHLD signal.
if self._saved_sighandler is not None:
return
self._saved_sighandler = signal.signal(signal.SIGCHLD, self._sig_chld)
if self._saved_sighandler is None:
logger.warning("Previous SIGCHLD handler was set by non-Python code, "
"restore to default handler on watcher close.")
self._saved_sighandler = signal.SIG_DFL
# Set SA_RESTART to limit EINTR occurrences.
signal.siginterrupt(signal.SIGCHLD, False)
def _do_waitpid_all(self):
for pid in list(self._callbacks):
self._do_waitpid(pid)
def _do_waitpid(self, expected_pid):
assert expected_pid > 0
try:
pid, status = os.waitpid(expected_pid, os.WNOHANG)
except ChildProcessError:
# The child process is already reaped
# (may happen if waitpid() is called elsewhere).
pid = expected_pid
returncode = 255
logger.warning(
"Unknown child process pid %d, will report returncode 255",
pid)
debug_log = False
else:
if pid == 0:
# The child process is still alive.
return
returncode = _compute_returncode(status)
debug_log = True
try:
loop, callback, args = self._callbacks.pop(pid)
except KeyError: # pragma: no cover
# May happen if .remove_child_handler() is called
# after os.waitpid() returns.
logger.warning("Child watcher got an unexpected pid: %r",
pid, exc_info=True)
else:
if loop.is_closed():
logger.warning("Loop %r that handles pid %r is closed", loop, pid)
else:
if debug_log and loop.get_debug():
logger.debug('process %s exited with returncode %s',
expected_pid, returncode)
loop.call_soon_threadsafe(callback, pid, returncode, *args)
def _sig_chld(self, signum, frame):
try:
self._do_waitpid_all()
except (SystemExit, KeyboardInterrupt):
raise
except BaseException:
logger.warning('Unknown exception in SIGCHLD handler', exc_info=True)
class ThreadedChildWatcher(AbstractChildWatcher):
"""Threaded child watcher implementation.
The watcher uses one thread per process
to wait for the process to finish.
It doesn't require a POSIX signal subscription,
but thread creation is not free.
The watcher has O(1) complexity; its performance doesn't depend
on the number of spawned processes.
"""
def __init__(self):
self._pid_counter = itertools.count(0)
self._threads = {}
def is_active(self):
return True
def close(self):
self._join_threads()
def _join_threads(self):
"""Internal: Join all non-daemon threads"""
threads = [thread for thread in list(self._threads.values())
if thread.is_alive() and not thread.daemon]
for thread in threads:
thread.join()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
pass
def __del__(self, _warn=warnings.warn):
threads = [thread for thread in list(self._threads.values())
if thread.is_alive()]
if threads:
_warn(f"{self.__class__} has registered but not finished child processes",
ResourceWarning,
source=self)
def add_child_handler(self, pid, callback, *args):
loop = events.get_running_loop()
thread = threading.Thread(target=self._do_waitpid,
name=f"waitpid-{next(self._pid_counter)}",
args=(loop, pid, callback, args),
daemon=True)
self._threads[pid] = thread
thread.start()
def remove_child_handler(self, pid):
# asyncio never calls remove_child_handler() !!!
# The method is no-op but is implemented because
# abstract base classes require it.
return True
def attach_loop(self, loop):
pass
def _do_waitpid(self, loop, expected_pid, callback, args):
assert expected_pid > 0
try:
pid, status = os.waitpid(expected_pid, 0)
except ChildProcessError:
# The child process is already reaped
# (may happen if waitpid() is called elsewhere).
pid = expected_pid
returncode = 255
logger.warning(
"Unknown child process pid %d, will report returncode 255",
pid)
else:
returncode = _compute_returncode(status)
if loop.get_debug():
logger.debug('process %s exited with returncode %s',
expected_pid, returncode)
if loop.is_closed():
logger.warning("Loop %r that handles pid %r is closed", loop, pid)
else:
loop.call_soon_threadsafe(callback, pid, returncode, *args)
self._threads.pop(expected_pid)
class _UnixDefaultEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
"""UNIX event loop policy with a watcher for child processes."""
_loop_factory = _UnixSelectorEventLoop
@ -1097,9 +1427,8 @@ class _UnixDefaultEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
def _init_watcher(self):
with events._lock:
if self._watcher is None: # pragma: no branch
self._watcher = SafeChildWatcher()
if isinstance(threading.current_thread(),
threading._MainThread):
self._watcher = ThreadedChildWatcher()
if threading.current_thread() is threading.main_thread():
self._watcher.attach_loop(self._local._loop)
def set_event_loop(self, loop):
@ -1113,13 +1442,13 @@ class _UnixDefaultEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
super().set_event_loop(loop)
if (self._watcher is not None and
isinstance(threading.current_thread(), threading._MainThread)):
threading.current_thread() is threading.main_thread()):
self._watcher.attach_loop(loop)
def get_child_watcher(self):
"""Get the watcher for child processes.
If not yet set, a SafeChildWatcher object is automatically created.
If not yet set, a ThreadedChildWatcher object is automatically created.
"""
if self._watcher is None:
self._init_watcher()
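Code that still needs the old signal-based behavior can install it explicitly; a sketch (Unix, main thread only):

import asyncio

watcher = asyncio.SafeChildWatcher()
watcher.attach_loop(asyncio.get_event_loop())
asyncio.set_child_watcher(watcher)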

View File

@ -1,5 +1,10 @@
"""Selector and proactor event loops for Windows."""
import sys
if sys.platform != 'win32': # pragma: no cover
raise ImportError('win32 only')
import _overlapped
import _winapi
import errno
@ -7,11 +12,13 @@ import math
import msvcrt
import socket
import struct
import time
import weakref
from . import events
from . import base_subprocess
from . import futures
from . import exceptions
from . import proactor_events
from . import selector_events
from . import tasks
@ -73,9 +80,9 @@ class _OverlappedFuture(futures.Future):
self._loop.call_exception_handler(context)
self._ov = None
def cancel(self):
def cancel(self, msg=None):
self._cancel_overlapped()
return super().cancel()
return super().cancel(msg=msg)
def set_exception(self, exception):
super().set_exception(exception)
@ -147,9 +154,9 @@ class _BaseWaitHandleFuture(futures.Future):
self._unregister_wait_cb(None)
def cancel(self):
def cancel(self, msg=None):
self._unregister_wait()
return super().cancel()
return super().cancel(msg=msg)
def set_exception(self, exception):
self._unregister_wait()
@ -307,6 +314,25 @@ class ProactorEventLoop(proactor_events.BaseProactorEventLoop):
proactor = IocpProactor()
super().__init__(proactor)
def run_forever(self):
try:
assert self._self_reading_future is None
self.call_soon(self._loop_self_reading)
super().run_forever()
finally:
if self._self_reading_future is not None:
ov = self._self_reading_future._ov
self._self_reading_future.cancel()
# self_reading_future was just cancelled so if it hasn't been
# finished yet, it never will be (it's possible that it has
# already finished and its callback is waiting in the queue,
# where it could still happen if the event loop is restarted).
# Unregister it otherwise IocpProactor.close will wait for it
# forever
if ov is not None:
self._proactor._unregister(ov)
self._self_reading_future = None
async def create_pipe_connection(self, protocol_factory, address):
f = self._proactor.connect_pipe(address)
pipe = await f
@ -351,7 +377,7 @@ class ProactorEventLoop(proactor_events.BaseProactorEventLoop):
elif self._debug:
logger.warning("Accept pipe failed on pipe %r",
pipe, exc_info=True)
except futures.CancelledError:
except exceptions.CancelledError:
if pipe:
pipe.close()
else:
@ -371,7 +397,9 @@ class ProactorEventLoop(proactor_events.BaseProactorEventLoop):
**kwargs)
try:
await waiter
except Exception:
except (SystemExit, KeyboardInterrupt):
raise
except BaseException:
transp.close()
await transp._wait()
raise
@ -392,10 +420,16 @@ class IocpProactor:
self._unregistered = []
self._stopped_serving = weakref.WeakSet()
def _check_closed(self):
if self._iocp is None:
raise RuntimeError('IocpProactor is closed')
def __repr__(self):
return ('<%s overlapped#=%s result#=%s>'
% (self.__class__.__name__, len(self._cache),
len(self._results)))
info = ['overlapped#=%s' % len(self._cache),
'result#=%s' % len(self._results)]
if self._iocp is None:
info.append('closed')
return '<%s %s>' % (self.__class__.__name__, " ".join(info))
def set_loop(self, loop):
self._loop = loop
@ -444,7 +478,7 @@ class IocpProactor:
else:
ov.ReadFileInto(conn.fileno(), buf)
except BrokenPipeError:
return self._result(b'')
return self._result(0)
def finish_recv(trans, key, ov):
try:
@ -458,6 +492,44 @@ class IocpProactor:
return self._register(ov, conn, finish_recv)
def recvfrom(self, conn, nbytes, flags=0):
self._register_with_iocp(conn)
ov = _overlapped.Overlapped(NULL)
try:
ov.WSARecvFrom(conn.fileno(), nbytes, flags)
except BrokenPipeError:
return self._result((b'', None))
def finish_recv(trans, key, ov):
try:
return ov.getresult()
except OSError as exc:
if exc.winerror in (_overlapped.ERROR_NETNAME_DELETED,
_overlapped.ERROR_OPERATION_ABORTED):
raise ConnectionResetError(*exc.args)
else:
raise
return self._register(ov, conn, finish_recv)
def sendto(self, conn, buf, flags=0, addr=None):
self._register_with_iocp(conn)
ov = _overlapped.Overlapped(NULL)
ov.WSASendTo(conn.fileno(), buf, flags, addr)
def finish_send(trans, key, ov):
try:
return ov.getresult()
except OSError as exc:
if exc.winerror in (_overlapped.ERROR_NETNAME_DELETED,
_overlapped.ERROR_OPERATION_ABORTED):
raise ConnectionResetError(*exc.args)
else:
raise
return self._register(ov, conn, finish_send)
def send(self, conn, buf, flags=0):
self._register_with_iocp(conn)
ov = _overlapped.Overlapped(NULL)
@ -497,7 +569,7 @@ class IocpProactor:
# Coroutine closing the accept socket if the future is cancelled
try:
await future
except futures.CancelledError:
except exceptions.CancelledError:
conn.close()
raise
@ -507,6 +579,14 @@ class IocpProactor:
return future
def connect(self, conn, address):
if conn.type == socket.SOCK_DGRAM:
# WSAConnect will complete immediately for UDP sockets so we don't
# need to register any IOCP operation
_overlapped.WSAConnect(conn.fileno(), address)
fut = self._loop.create_future()
fut.set_result(None)
return fut
self._register_with_iocp(conn)
# The socket needs to be locally bound before we call ConnectEx().
try:
@ -582,7 +662,7 @@ class IocpProactor:
# ConnectPipe() failed with ERROR_PIPE_BUSY: retry later
delay = min(delay * 2, CONNECT_PIPE_MAX_DELAY)
await tasks.sleep(delay, loop=self._loop)
await tasks.sleep(delay)
return windows_utils.PipeHandle(handle)
@ -602,6 +682,8 @@ class IocpProactor:
return fut
def _wait_for_handle(self, handle, timeout, _is_cancel):
self._check_closed()
if timeout is None:
ms = _winapi.INFINITE
else:
@ -644,6 +726,8 @@ class IocpProactor:
# that succeed immediately.
def _register(self, ov, obj, callback):
self._check_closed()
# Return a future which will be set with the result of the
# operation when it completes. The future's value is actually
# the value returned by callback().
@ -680,6 +764,7 @@ class IocpProactor:
already be signalled (pending in the proactor event queue). It is also
safe if the event is never signalled (because it was cancelled).
"""
self._check_closed()
self._unregistered.append(ov)
def _get_accept_socket(self, family):
@ -749,6 +834,10 @@ class IocpProactor:
self._stopped_serving.add(obj)
def close(self):
if self._iocp is None:
# already closed
return
# Cancel remaining registered operations.
for address, (fut, ov, obj, callback) in list(self._cache.items()):
if fut.cancelled():
@ -771,14 +860,25 @@ class IocpProactor:
context['source_traceback'] = fut._source_traceback
self._loop.call_exception_handler(context)
# Wait until all cancelled overlapped complete: don't exit with running
# overlapped to prevent a crash. Display progress every second if the
# loop is still running.
msg_update = 1.0
start_time = time.monotonic()
next_msg = start_time + msg_update
while self._cache:
if not self._poll(1):
logger.debug('taking long time to close proactor')
if next_msg <= time.monotonic():
logger.debug('%r is running after closing for %.1f seconds',
self, time.monotonic() - start_time)
next_msg = time.monotonic() + msg_update
# handle a few events, or timeout
self._poll(msg_update)
self._results = []
if self._iocp is not None:
_winapi.CloseHandle(self._iocp)
self._iocp = None
_winapi.CloseHandle(self._iocp)
self._iocp = None
def __del__(self):
self.close()
@ -810,4 +910,4 @@ class WindowsProactorEventLoopPolicy(events.BaseDefaultEventLoopPolicy):
_loop_factory = ProactorEventLoop
DefaultEventLoopPolicy = WindowsSelectorEventLoopPolicy
DefaultEventLoopPolicy = WindowsProactorEventLoopPolicy
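With the default flipped to the proactor loop, code that depends on selector-only APIs can opt back in; a sketch:

import asyncio
import sys

if sys.platform == 'win32':
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())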

View File

@ -107,10 +107,9 @@ class PipeHandle:
CloseHandle(self._handle)
self._handle = None
def __del__(self):
def __del__(self, _warn=warnings.warn):
if self._handle is not None:
warnings.warn(f"unclosed {self!r}", ResourceWarning,
source=self)
_warn(f"unclosed {self!r}", ResourceWarning, source=self)
self.close()
def __enter__(self):

View File

@ -228,7 +228,7 @@ class dispatcher:
if sock:
# Set to nonblocking just to make sure for cases where we
# get a socket from a blocking source.
sock.setblocking(0)
sock.setblocking(False)
self.set_socket(sock, map)
self.connected = True
# The constructor no longer requires that the socket
@ -262,8 +262,6 @@ class dispatcher:
status.append(repr(self.addr))
return '<%s at %#x>' % (' '.join(status), id(self))
__str__ = __repr__
def add_channel(self, map=None):
#self.log_info('adding channel %s' % self)
if map is None:
@ -282,7 +280,7 @@ class dispatcher:
def create_socket(self, family=socket.AF_INET, type=socket.SOCK_STREAM):
self.family_and_type = family, type
sock = socket.socket(family, type)
sock.setblocking(0)
sock.setblocking(False)
self.set_socket(sock)
def set_socket(self, sock, map=None):

View File

@ -82,7 +82,7 @@ def b64decode(s, altchars=None, validate=False):
altchars = _bytes_from_decode_data(altchars)
assert len(altchars) == 2, repr(altchars)
s = s.translate(bytes.maketrans(altchars, b'+/'))
if validate and not re.match(b'^[A-Za-z0-9+/]*={0,2}$', s):
if validate and not re.fullmatch(b'[A-Za-z0-9+/]*={0,2}', s):
raise binascii.Error('Non-base64 digit found')
return binascii.a2b_base64(s)
@ -231,23 +231,16 @@ def b32decode(s, casefold=False, map01=None):
raise binascii.Error('Non-base32 digit found') from None
decoded += acc.to_bytes(5, 'big')
# Process the last, partial quanta
if padchars:
if l % 8 or padchars not in {0, 1, 3, 4, 6}:
raise binascii.Error('Incorrect padding')
if padchars and decoded:
acc <<= 5 * padchars
last = acc.to_bytes(5, 'big')
if padchars == 1:
decoded[-5:] = last[:-1]
elif padchars == 3:
decoded[-5:] = last[:-2]
elif padchars == 4:
decoded[-5:] = last[:-3]
elif padchars == 6:
decoded[-5:] = last[:-4]
else:
raise binascii.Error('Incorrect padding')
leftover = (43 - 5 * padchars) // 8 # 1: 4, 3: 3, 4: 2, 6: 1
decoded[-5:] = last[:leftover]
return bytes(decoded)
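The leftover computation above maps pad counts 1/3/4/6 to 4/3/2/1 trailing data bytes; a quick check:

import base64
import binascii

print(base64.b32decode(b'MZXW6==='))   # b'foo' (3 pad chars -> 3 data bytes)
try:
    base64.b32decode(b'MZXW6==')       # invalid pad count/length
except binascii.Error as exc:
    print(exc)                         # Incorrect padding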
# RFC 3548, Base 16 Alphabet specifies uppercase, but hexlify() returns
# lowercase. The RFC also recommends against accepting input case
# insensitively.
@ -327,7 +320,7 @@ def a85encode(b, *, foldspaces=False, wrapcol=0, pad=False, adobe=False):
global _a85chars, _a85chars2
# Delay the initialization of tables to not waste memory
# if the function is never called
if _a85chars is None:
if _a85chars2 is None:
_a85chars = [bytes((i,)) for i in range(33, 118)]
_a85chars2 = [(a + b) for a in _a85chars for b in _a85chars]
@ -435,7 +428,7 @@ def b85encode(b, pad=False):
global _b85chars, _b85chars2
# Delay the initialization of tables to not waste memory
# if the function is never called
if _b85chars is None:
if _b85chars2 is None:
_b85chars = [bytes((i,)) for i in _b85alphabet]
_b85chars2 = [(a + b) for a in _b85chars for b in _b85chars]
return _85encode(b, _b85chars, _b85chars2, pad)
@ -538,28 +531,12 @@ def encodebytes(s):
pieces.append(binascii.b2a_base64(chunk))
return b"".join(pieces)
def encodestring(s):
"""Legacy alias of encodebytes()."""
import warnings
warnings.warn("encodestring() is a deprecated alias since 3.1, "
"use encodebytes()",
DeprecationWarning, 2)
return encodebytes(s)
def decodebytes(s):
"""Decode a bytestring of base-64 data into a bytes object."""
_input_type_check(s)
return binascii.a2b_base64(s)
def decodestring(s):
"""Legacy alias of decodebytes()."""
import warnings
warnings.warn("decodestring() is a deprecated alias since Python 3.1, "
"use decodebytes()",
DeprecationWarning, 2)
return decodebytes(s)
# Usable as a script...
def main():

View File

@ -38,7 +38,7 @@ class Bdb:
"""Return canonical form of filename.
For real filenames, the canonical form is a case-normalized (on
case insenstive filesystems) absolute path. 'Filenames' with
case insensitive filesystems) absolute path. 'Filenames' with
angle brackets, such as "<stdin>", generated in interactive
mode, are returned unchanged.
"""
@ -74,7 +74,7 @@ class Bdb:
return: A function or other code block is about to return.
exception: An exception has occurred.
c_call: A C function is about to be called.
c_return: A C functon has returned.
c_return: A C function has returned.
c_exception: A C function has raised an exception.
For the Python events, specialized functions (see the dispatch_*()
@ -117,7 +117,7 @@ class Bdb:
"""Invoke user function and return trace function for call event.
If the debugger stops on this function call, invoke
self.user_call(). Raise BbdQuit if self.quitting is set.
self.user_call(). Raise BdbQuit if self.quitting is set.
Return self.trace_dispatch to continue tracing in this scope.
"""
# XXX 'arg' is no longer used
@ -190,6 +190,8 @@ class Bdb:
def is_skipped_module(self, module_name):
"Return True if module_name matches any skip pattern."
if module_name is None: # some modules do not have names
return False
for pattern in self.skip:
if fnmatch.fnmatch(module_name, pattern):
return True
@ -382,7 +384,7 @@ class Bdb:
return None
def _prune_breaks(self, filename, lineno):
"""Prune breakpoints for filname:lineno.
"""Prune breakpoints for filename:lineno.
A list of breakpoints is maintained in the Bdb instance and in
the Breakpoint class. If a breakpoint in the Bdb instance no
@ -546,14 +548,7 @@ class Bdb:
s += frame.f_code.co_name
else:
s += "<lambda>"
if '__args__' in frame.f_locals:
args = frame.f_locals['__args__']
else:
args = None
if args:
s += reprlib.repr(args)
else:
s += '()'
s += '()'
if '__return__' in frame.f_locals:
rv = frame.f_locals['__return__']
s += '->'
@ -616,7 +611,7 @@ class Bdb:
# This method is more useful to debug a single function call.
def runcall(self, func, *args, **kwds):
def runcall(self, func, /, *args, **kwds):
"""Debug a single function call.
Return the result of the function call.

View File

@ -21,10 +21,16 @@ hexbin(inputfilename, outputfilename)
# input. The resulting code (xx 90 90) would appear to be interpreted as an
# escaped *value* of 0x90. All coders I've seen appear to ignore this nicety...
#
import binascii
import contextlib
import io
import os
import struct
import binascii
import warnings
warnings.warn('the binhex module is deprecated', DeprecationWarning,
stacklevel=2)
__all__ = ["binhex","hexbin","Error"]
@ -76,6 +82,16 @@ class openrsrc:
def close(self):
pass
# DeprecationWarning is already emitted on "import binhex". There is no need
# to repeat the warning at each call to deprecated binascii functions.
@contextlib.contextmanager
def _ignore_deprecation_warning():
with warnings.catch_warnings():
warnings.filterwarnings('ignore', '', DeprecationWarning)
yield
class _Hqxcoderengine:
"""Write data to the coder in 3-byte chunks"""
@ -93,23 +109,25 @@ class _Hqxcoderengine:
self.data = self.data[todo:]
if not data:
return
self.hqxdata = self.hqxdata + binascii.b2a_hqx(data)
with _ignore_deprecation_warning():
self.hqxdata = self.hqxdata + binascii.b2a_hqx(data)
self._flush(0)
def _flush(self, force):
first = 0
while first <= len(self.hqxdata) - self.linelen:
last = first + self.linelen
self.ofp.write(self.hqxdata[first:last] + b'\n')
self.ofp.write(self.hqxdata[first:last] + b'\r')
self.linelen = LINELEN
first = last
self.hqxdata = self.hqxdata[first:]
if force:
self.ofp.write(self.hqxdata + b':\n')
self.ofp.write(self.hqxdata + b':\r')
def close(self):
if self.data:
self.hqxdata = self.hqxdata + binascii.b2a_hqx(self.data)
with _ignore_deprecation_warning():
self.hqxdata = self.hqxdata + binascii.b2a_hqx(self.data)
self._flush(1)
self.ofp.close()
del self.ofp
@ -125,13 +143,15 @@ class _Rlecoderengine:
self.data = self.data + data
if len(self.data) < REASONABLY_LARGE:
return
rledata = binascii.rlecode_hqx(self.data)
with _ignore_deprecation_warning():
rledata = binascii.rlecode_hqx(self.data)
self.ofp.write(rledata)
self.data = b''
def close(self):
if self.data:
rledata = binascii.rlecode_hqx(self.data)
with _ignore_deprecation_warning():
rledata = binascii.rlecode_hqx(self.data)
self.ofp.write(rledata)
self.ofp.close()
del self.ofp
@ -276,7 +296,8 @@ class _Hqxdecoderengine:
#
while True:
try:
decdatacur, self.eof = binascii.a2b_hqx(data)
with _ignore_deprecation_warning():
decdatacur, self.eof = binascii.a2b_hqx(data)
break
except binascii.Incomplete:
pass
@ -312,8 +333,9 @@ class _Rledecoderengine:
def _fill(self, wtd):
self.pre_buffer = self.pre_buffer + self.ifp.read(wtd + 4)
if self.ifp.eof:
self.post_buffer = self.post_buffer + \
binascii.rledecode_hqx(self.pre_buffer)
with _ignore_deprecation_warning():
self.post_buffer = self.post_buffer + \
binascii.rledecode_hqx(self.pre_buffer)
self.pre_buffer = b''
return
@ -340,8 +362,9 @@ class _Rledecoderengine:
else:
mark = mark - 1
self.post_buffer = self.post_buffer + \
binascii.rledecode_hqx(self.pre_buffer[:mark])
with _ignore_deprecation_warning():
self.post_buffer = self.post_buffer + \
binascii.rledecode_hqx(self.pre_buffer[:mark])
self.pre_buffer = self.pre_buffer[mark:]
def close(self):

View File

@ -9,14 +9,7 @@ def insort_right(a, x, lo=0, hi=None):
slice of a to be searched.
"""
if lo < 0:
raise ValueError('lo must be non-negative')
if hi is None:
hi = len(a)
while lo < hi:
mid = (lo+hi)//2
if x < a[mid]: hi = mid
else: lo = mid+1
lo = bisect_right(a, x, lo, hi)
a.insert(lo, x)
def bisect_right(a, x, lo=0, hi=None):
@ -36,6 +29,7 @@ def bisect_right(a, x, lo=0, hi=None):
hi = len(a)
while lo < hi:
mid = (lo+hi)//2
# Use __lt__ to match the logic in list.sort() and in heapq
if x < a[mid]: hi = mid
else: lo = mid+1
return lo
@ -49,14 +43,7 @@ def insort_left(a, x, lo=0, hi=None):
slice of a to be searched.
"""
if lo < 0:
raise ValueError('lo must be non-negative')
if hi is None:
hi = len(a)
while lo < hi:
mid = (lo+hi)//2
if a[mid] < x: lo = mid+1
else: hi = mid
lo = bisect_left(a, x, lo, hi)
a.insert(lo, x)
@ -77,6 +64,7 @@ def bisect_left(a, x, lo=0, hi=None):
hi = len(a)
while lo < hi:
mid = (lo+hi)//2
# Use __lt__ to match the logic in list.sort() and in heapq
if a[mid] < x: lo = mid+1
else: hi = mid
return lo
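insort_right()/insort_left() now delegate the search to the bisect functions; a usage sketch:

import bisect

data = [1, 3, 3, 5]
bisect.insort_right(data, 3)          # search via bisect_right(), then insert
print(data)                           # [1, 3, 3, 3, 5]
print(bisect.bisect_left(data, 3))    # 1
print(bisect.bisect_right(data, 3))   # 4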

View File

@ -12,7 +12,6 @@ __author__ = "Nadeem Vawda <nadeem.vawda@gmail.com>"
from builtins import open as _builtin_open
import io
import os
import warnings
import _compression
from threading import RLock
@ -36,7 +35,7 @@ class BZ2File(_compression.BaseStream):
returned as bytes, and data to be written should be given as bytes.
"""
def __init__(self, filename, mode="r", buffering=None, compresslevel=9):
def __init__(self, filename, mode="r", *, compresslevel=9):
"""Open a bzip2-compressed file.
If filename is a str, bytes, or PathLike object, it gives the
@ -47,8 +46,6 @@ class BZ2File(_compression.BaseStream):
'x' for creating exclusively, or 'a' for appending. These can
equivalently be given as 'rb', 'wb', 'xb', and 'ab'.
buffering is ignored. Its use is deprecated.
If mode is 'w', 'x' or 'a', compresslevel can be a number between 1
and 9 specifying the level of compression: 1 produces the least
compression, and 9 (default) produces the most compression.
@ -63,10 +60,6 @@ class BZ2File(_compression.BaseStream):
self._closefp = False
self._mode = _MODE_CLOSED
if buffering is not None:
warnings.warn("Use of 'buffering' argument is deprecated",
DeprecationWarning)
if not (1 <= compresslevel <= 9):
raise ValueError("compresslevel must be between 1 and 9")
@ -233,15 +226,23 @@ class BZ2File(_compression.BaseStream):
"""Write a byte string to the file.
Returns the number of uncompressed bytes written, which is
always len(data). Note that due to buffering, the file on disk
may not reflect the data written until close() is called.
always the length of data in bytes. Note that due to buffering,
the file on disk may not reflect the data written until close()
is called.
"""
with self._lock:
self._check_can_write()
if isinstance(data, (bytes, bytearray)):
length = len(data)
else:
# accept any data that supports the buffer protocol
data = memoryview(data)
length = data.nbytes
compressed = self._compressor.compress(data)
self._fp.write(compressed)
self._pos += len(data)
return len(data)
self._pos += length
return length
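With the memoryview change above, write() accepts any buffer-protocol object and reports its size in bytes; a sketch (the filename is illustrative):

import array
import bz2

buf = array.array('i', range(4))      # 16 bytes on typical platforms
with bz2.BZ2File('demo.bz2', 'wb') as f:
    print(f.write(buf))               # 16, not len(buf) == 4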
def writelines(self, seq):
"""Write a sequence of byte strings to the file.

View File

@ -7,6 +7,7 @@
__all__ = ["run", "runctx", "Profile"]
import _lsprof
import io
import profile as _pyprofile
# ____________________________________________________________
@ -25,11 +26,11 @@ runctx.__doc__ = _pyprofile.runctx.__doc__
# ____________________________________________________________
class Profile(_lsprof.Profiler):
"""Profile(custom_timer=None, time_unit=None, subcalls=True, builtins=True)
"""Profile(timer=None, timeunit=None, subcalls=True, builtins=True)
Builds a profiler object using the specified timer function.
The default timer is a fast built-in one based on real time.
For custom timer functions returning integers, time_unit can
For custom timer functions returning integers, timeunit can
be a float specifying a scale (i.e. how long each integer unit
is, in seconds).
"""
@ -103,13 +104,20 @@ class Profile(_lsprof.Profiler):
return self
# This method is more useful to profile a single function call.
def runcall(self, func, *args, **kw):
def runcall(self, func, /, *args, **kw):
self.enable()
try:
return func(*args, **kw)
finally:
self.disable()
def __enter__(self):
self.enable()
return self
def __exit__(self, *exc_info):
self.disable()
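The new __enter__/__exit__ pair allows profiling a block with a with-statement; a usage sketch:

import cProfile
import pstats

with cProfile.Profile() as prof:      # enable() on entry, disable() on exit
    sum(i * i for i in range(10000))
pstats.Stats(prof).sort_stats('cumulative').print_stats(5)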
# ____________________________________________________________
def label(code):
@ -124,6 +132,7 @@ def main():
import os
import sys
import runpy
import pstats
from optparse import OptionParser
usage = "cProfile.py [-o output_file_path] [-s sort] [-m module | scriptfile] [arg] ..."
parser = OptionParser(usage=usage)
@ -132,7 +141,8 @@ def main():
help="Save stats to <outfile>", default=None)
parser.add_option('-s', '--sort', dest="sort",
help="Sort order when printing to stdout, based on pstats.Stats class",
default=-1)
default=-1,
choices=sorted(pstats.Stats.sort_arg_dict_default))
parser.add_option('-m', dest="module", action="store_true",
help="Profile a library module", default=False)
@ -143,6 +153,11 @@ def main():
(options, args) = parser.parse_args()
sys.argv[:] = args
# The script that we're profiling may chdir, so capture the absolute path
# to the output file at startup.
if options.outfile is not None:
options.outfile = os.path.abspath(options.outfile)
if len(args) > 0:
if options.module:
code = "run_module(modname, run_name='__main__')"
@ -153,7 +168,7 @@ def main():
else:
progname = args[0]
sys.path.insert(0, os.path.dirname(progname))
with open(progname, 'rb') as fp:
with io.open_code(progname) as fp:
code = compile(fp.read(), progname, 'exec')
globs = {
'__file__': progname,
@ -161,7 +176,12 @@ def main():
'__package__': None,
'__cached__': None,
}
runctx(code, globs, None, options.outfile, options.sort)
try:
runctx(code, globs, None, options.outfile, options.sort)
except BrokenPipeError as exc:
# Prevent "Exception ignored" during interpreter shutdown.
sys.stdout = None
sys.exit(exc.errno)
else:
parser.print_usage()
return parser
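The new __enter__/__exit__ pair makes Profile usable as a context manager; a minimal sketch:

import cProfile
import pstats

# enable() runs on entry, disable() on exit; the Profile instance
# itself is bound to the as-target.
with cProfile.Profile() as pr:
    sum(i * i for i in range(10_000))
pstats.Stats(pr).sort_stats("cumulative").print_stats(5)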

View File

@ -127,18 +127,18 @@ def monthrange(year, month):
return day1, ndays
def monthlen(year, month):
def _monthlen(year, month):
return mdays[month] + (month == February and isleap(year))
def prevmonth(year, month):
def _prevmonth(year, month):
if month == 1:
return year-1, 12
else:
return year, month-1
def nextmonth(year, month):
def _nextmonth(year, month):
if month == 12:
return year+1, 1
else:
@ -207,13 +207,13 @@ class Calendar(object):
day1, ndays = monthrange(year, month)
days_before = (day1 - self.firstweekday) % 7
days_after = (self.firstweekday - day1 - ndays) % 7
y, m = prevmonth(year, month)
end = monthlen(y, m) + 1
y, m = _prevmonth(year, month)
end = _monthlen(y, m) + 1
for d in range(end-days_before, end):
yield y, m, d
for d in range(1, ndays + 1):
yield year, month, d
y, m = nextmonth(year, month)
y, m = _nextmonth(year, month)
for d in range(1, days_after + 1):
yield y, m, d
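The now-private helpers are still exercised through Calendar's iterators; a quick check of the month-boundary logic above:

import calendar

cal = calendar.Calendar(firstweekday=0)  # weeks start on Monday
# January 1, 2021 is a Friday, so the first week pulls in four trailing
# December days computed via _prevmonth()/_monthlen().
print(list(cal.itermonthdays3(2021, 1))[:4])
# [(2020, 12, 28), (2020, 12, 29), (2020, 12, 30), (2020, 12, 31)]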

View File

@ -38,16 +38,14 @@ import os
import urllib.parse
from email.parser import FeedParser
from email.message import Message
from warnings import warn
import html
import locale
import tempfile
__all__ = ["MiniFieldStorage", "FieldStorage",
"parse", "parse_qs", "parse_qsl", "parse_multipart",
__all__ = ["MiniFieldStorage", "FieldStorage", "parse", "parse_multipart",
"parse_header", "test", "print_exception", "print_environ",
"print_form", "print_directory", "print_arguments",
"print_environ_usage", "escape"]
"print_environ_usage"]
# Logging support
# ===============
@ -117,7 +115,8 @@ log = initlog # The current logging function
# 0 ==> unlimited input
maxlen = 0
def parse(fp=None, environ=os.environ, keep_blank_values=0, strict_parsing=0):
def parse(fp=None, environ=os.environ, keep_blank_values=0,
strict_parsing=0, separator='&'):
"""Parse a query in the environment or from a file (default stdin)
Arguments, all optional:
@ -136,6 +135,9 @@ def parse(fp=None, environ=os.environ, keep_blank_values=0, strict_parsing=0):
strict_parsing: flag indicating what to do with parsing errors.
If false (the default), errors are silently ignored.
If true, errors raise a ValueError exception.
separator: str. The symbol to use for separating the query arguments.
Defaults to &.
"""
if fp is None:
fp = sys.stdin
@ -156,7 +158,7 @@ def parse(fp=None, environ=os.environ, keep_blank_values=0, strict_parsing=0):
if environ['REQUEST_METHOD'] == 'POST':
ctype, pdict = parse_header(environ['CONTENT_TYPE'])
if ctype == 'multipart/form-data':
return parse_multipart(fp, pdict)
return parse_multipart(fp, pdict, separator=separator)
elif ctype == 'application/x-www-form-urlencoded':
clength = int(environ['CONTENT_LENGTH'])
if maxlen and clength > maxlen:
@ -180,25 +182,10 @@ def parse(fp=None, environ=os.environ, keep_blank_values=0, strict_parsing=0):
qs = ""
environ['QUERY_STRING'] = qs # XXX Shouldn't, really
return urllib.parse.parse_qs(qs, keep_blank_values, strict_parsing,
encoding=encoding)
encoding=encoding, separator=separator)
# parse query string function called from urlparse,
# this is done in order to maintain backward compatibility.
def parse_qs(qs, keep_blank_values=0, strict_parsing=0):
"""Parse a query given as a string argument."""
warn("cgi.parse_qs is deprecated, use urllib.parse.parse_qs instead",
DeprecationWarning, 2)
return urllib.parse.parse_qs(qs, keep_blank_values, strict_parsing)
def parse_qsl(qs, keep_blank_values=0, strict_parsing=0):
"""Parse a query given as a string argument."""
warn("cgi.parse_qsl is deprecated, use urllib.parse.parse_qsl instead",
DeprecationWarning, 2)
return urllib.parse.parse_qsl(qs, keep_blank_values, strict_parsing)
def parse_multipart(fp, pdict, encoding="utf-8", errors="replace"):
def parse_multipart(fp, pdict, encoding="utf-8", errors="replace", separator='&'):
"""Parse multipart input.
Arguments:
@ -217,9 +204,12 @@ def parse_multipart(fp, pdict, encoding="utf-8", errors="replace"):
ctype = "multipart/form-data; boundary={}".format(boundary)
headers = Message()
headers.set_type(ctype)
headers['Content-Length'] = pdict['CONTENT-LENGTH']
try:
headers['Content-Length'] = pdict['CONTENT-LENGTH']
except KeyError:
pass
fs = FieldStorage(fp, headers=headers, encoding=encoding, errors=errors,
environ={'REQUEST_METHOD': 'POST'})
environ={'REQUEST_METHOD': 'POST'}, separator=separator)
return {k: fs.getlist(k) for k in fs}
def _parseparam(s):
@ -328,7 +318,8 @@ class FieldStorage:
"""
def __init__(self, fp=None, headers=None, outerboundary=b'',
environ=os.environ, keep_blank_values=0, strict_parsing=0,
limit=None, encoding='utf-8', errors='replace'):
limit=None, encoding='utf-8', errors='replace',
max_num_fields=None, separator='&'):
"""Constructor. Read multipart/* until last part.
Arguments, all optional:
@ -368,10 +359,15 @@ class FieldStorage:
for the page sending the form (content-type : meta http-equiv or
header)
max_num_fields: int. If set, then __init__ throws a ValueError
if there are more than n fields read by parse_qsl().
"""
method = 'GET'
self.keep_blank_values = keep_blank_values
self.strict_parsing = strict_parsing
self.max_num_fields = max_num_fields
self.separator = separator
if 'REQUEST_METHOD' in environ:
method = environ['REQUEST_METHOD'].upper()
self.qs_on_post = None
@ -473,7 +469,7 @@ class FieldStorage:
if maxlen and clen > maxlen:
raise ValueError('Maximum content length exceeded')
self.length = clen
if self.limit is None and clen:
if self.limit is None and clen >= 0:
self.limit = clen
self.list = self.file = None
@ -595,12 +591,11 @@ class FieldStorage:
qs = qs.decode(self.encoding, self.errors)
if self.qs_on_post:
qs += '&' + self.qs_on_post
self.list = []
query = urllib.parse.parse_qsl(
qs, self.keep_blank_values, self.strict_parsing,
encoding=self.encoding, errors=self.errors)
for key, value in query:
self.list.append(MiniFieldStorage(key, value))
encoding=self.encoding, errors=self.errors,
max_num_fields=self.max_num_fields, separator=self.separator)
self.list = [MiniFieldStorage(key, value) for key, value in query]
self.skip_lines()
FieldStorageClass = None
@ -614,9 +609,9 @@ class FieldStorage:
if self.qs_on_post:
query = urllib.parse.parse_qsl(
self.qs_on_post, self.keep_blank_values, self.strict_parsing,
encoding=self.encoding, errors=self.errors)
for key, value in query:
self.list.append(MiniFieldStorage(key, value))
encoding=self.encoding, errors=self.errors,
max_num_fields=self.max_num_fields, separator=self.separator)
self.list.extend(MiniFieldStorage(key, value) for key, value in query)
klass = self.FieldStorageClass or self.__class__
first_line = self.fp.readline() # bytes
@ -631,6 +626,11 @@ class FieldStorage:
first_line = self.fp.readline()
self.bytes_read += len(first_line)
# Propagate max_num_fields into the subclass appropriately
max_num_fields = self.max_num_fields
if max_num_fields is not None:
max_num_fields -= len(self.list)
while True:
parser = FeedParser()
hdr_text = b""
@ -650,9 +650,19 @@ class FieldStorage:
if 'content-length' in headers:
del headers['content-length']
limit = None if self.limit is None \
else self.limit - self.bytes_read
part = klass(self.fp, headers, ib, environ, keep_blank_values,
strict_parsing,self.limit-self.bytes_read,
self.encoding, self.errors)
strict_parsing, limit,
self.encoding, self.errors, max_num_fields, self.separator)
if max_num_fields is not None:
max_num_fields -= 1
if part.list:
max_num_fields -= len(part.list)
if max_num_fields < 0:
raise ValueError('Max number of fields exceeded')
self.bytes_read += part.bytes_read
self.list.append(part)
if part.done or self.bytes_read >= self.length > 0:
@ -734,7 +744,8 @@ class FieldStorage:
last_line_lfend = True
_read = 0
while 1:
if _read >= self.limit:
if self.limit is not None and 0 <= self.limit <= _read:
break
line = self.fp.readline(1<<16) # bytes
self.bytes_read += len(line)
@ -974,18 +985,6 @@ environment as well. Here are some common variable names:
# Utilities
# =========
def escape(s, quote=None):
"""Deprecated API."""
warn("cgi.escape is deprecated, use html.escape instead",
DeprecationWarning, stacklevel=2)
s = s.replace("&", "&amp;") # Must be done first!
s = s.replace("<", "&lt;")
s = s.replace(">", "&gt;")
if quote:
s = s.replace('"', "&quot;")
return s
def valid_boundary(s):
import re
if isinstance(s, bytes):
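A hedged sketch of the new separator parameter threaded through parse(), parse_multipart() and FieldStorage above; it stops '&' and ';' from being treated as interchangeable:

import cgi

environ = {"REQUEST_METHOD": "GET", "QUERY_STRING": "a=1;b=2&c=3"}
# Default separator '&': the ';' stays inside the value.
print(cgi.parse(environ=environ))                 # {'a': ['1;b=2'], 'c': ['3']}
# Explicit separator ';': now '&' is the literal character.
print(cgi.parse(environ=environ, separator=";"))  # {'a': ['1'], 'b': ['2&c=3']}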

View File

@ -124,8 +124,9 @@ function calls leading up to the error, in the order they occurred.</p>'''
args, varargs, varkw, locals = inspect.getargvalues(frame)
call = ''
if func != '?':
call = 'in ' + strong(pydoc.html.escape(func)) + \
inspect.formatargvalues(args, varargs, varkw, locals,
call = 'in ' + strong(pydoc.html.escape(func))
if func != "<module>":
call += inspect.formatargvalues(args, varargs, varkw, locals,
formatvalue=lambda value: '=' + pydoc.html.repr(value))
highlight = {}
@ -207,8 +208,9 @@ function calls leading up to the error, in the order they occurred.
args, varargs, varkw, locals = inspect.getargvalues(frame)
call = ''
if func != '?':
call = 'in ' + func + \
inspect.formatargvalues(args, varargs, varkw, locals,
call = 'in ' + func
if func != "<module>":
call += inspect.formatargvalues(args, varargs, varkw, locals,
formatvalue=lambda value: '=' + pydoc.text.repr(value))
highlight = {}

View File

@ -40,7 +40,7 @@ class InteractiveInterpreter:
Arguments are as for compile_command().
One several things can happen:
One of several things can happen:
1) The input is incorrect; compile_command() raised an
exception (SyntaxError or OverflowError). A syntax traceback

View File

@ -386,7 +386,7 @@ class StreamWriter(Codec):
def reset(self):
""" Flushes and resets the codec buffers used for keeping state.
""" Resets the codec buffers used for keeping internal state.
Calling this method should ensure that the data on the
output is put into a clean state, that allows appending
@ -620,7 +620,7 @@ class StreamReader(Codec):
def reset(self):
""" Resets the codec buffers used for keeping state.
""" Resets the codec buffers used for keeping internal state.
Note that no stream repositioning should take place.
This method is primarily intended to be able to recover
@ -838,7 +838,7 @@ class StreamRecoder:
def writelines(self, list):
data = ''.join(list)
data = b''.join(list)
data, bytesdecoded = self.decode(data, self.errors)
return self.writer.write(data)
@ -847,6 +847,12 @@ class StreamRecoder:
self.reader.reset()
self.writer.reset()
def seek(self, offset, whence=0):
# Seeks must be propagated to both the readers and writers
# as they might need to reset their internal buffers.
self.reader.seek(offset, whence)
self.writer.seek(offset, whence)
def __getattr__(self, name,
getattr=getattr):
@ -862,7 +868,7 @@ class StreamRecoder:
### Shortcuts
def open(filename, mode='r', encoding=None, errors='strict', buffering=1):
def open(filename, mode='r', encoding=None, errors='strict', buffering=-1):
""" Open an encoded file using the given mode and return
a wrapped version providing transparent encoding/decoding.
@ -883,7 +889,8 @@ def open(filename, mode='r', encoding=None, errors='strict', buffering=1):
encoding error occurs.
buffering has the same meaning as for the builtin open() API.
It defaults to line buffered.
It defaults to -1 which means that the default buffer size will
be used.
The returned wrapped file object provides an extra attribute
.encoding which allows querying the used encoding. This
@ -898,11 +905,16 @@ def open(filename, mode='r', encoding=None, errors='strict', buffering=1):
file = builtins.open(filename, mode, buffering)
if encoding is None:
return file
info = lookup(encoding)
srw = StreamReaderWriter(file, info.streamreader, info.streamwriter, errors)
# Add attributes to simplify introspection
srw.encoding = encoding
return srw
try:
info = lookup(encoding)
srw = StreamReaderWriter(file, info.streamreader, info.streamwriter, errors)
# Add attributes to simplify introspection
srw.encoding = encoding
return srw
except:
file.close()
raise
def EncodedFile(file, data_encoding, file_encoding=None, errors='strict'):
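Two behavioral consequences of the open() changes above, as a hedged sketch (file name illustrative):

import codecs

# An unknown encoding no longer leaks the freshly opened file object:
try:
    codecs.open("notes.txt", "w", encoding="no-such-codec")
except LookupError:
    pass  # the underlying file was closed before the error propagated

# buffering defaults to -1 (normal stdlib buffering, not line buffering):
with codecs.open("notes.txt", "w", encoding="utf-8") as f:
    f.write("héllo")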

View File

@ -57,6 +57,7 @@ Compile():
"""
import __future__
import warnings
_features = [getattr(__future__, fname)
for fname in __future__.all_feature_names]
@ -80,23 +81,31 @@ def _maybe_compile(compiler, source, filename, symbol):
try:
code = compiler(source, filename, symbol)
except SyntaxError as err:
except SyntaxError:
pass
try:
code1 = compiler(source + "\n", filename, symbol)
except SyntaxError as e:
err1 = e
# Catch syntax warnings after the first compile
# to emit warnings (SyntaxWarning, DeprecationWarning) at most once.
with warnings.catch_warnings():
warnings.simplefilter("error")
try:
code1 = compiler(source + "\n", filename, symbol)
except SyntaxError as e:
err1 = e
try:
code2 = compiler(source + "\n\n", filename, symbol)
except SyntaxError as e:
err2 = e
try:
code2 = compiler(source + "\n\n", filename, symbol)
except SyntaxError as e:
err2 = e
if code:
return code
if not code1 and repr(err1) == repr(err2):
raise err1
if code:
return code
if not code1 and repr(err1) == repr(err2):
raise err1
finally:
err1 = err2 = None
def _compile(source, filename, symbol):
return compile(source, filename, symbol, PyCF_DONT_IMPLY_DEDENT)
@ -109,7 +118,8 @@ def compile_command(source, filename="<input>", symbol="single"):
source -- the source string; may contain \n characters
filename -- optional filename from which source was read; default
"<input>"
symbol -- optional grammar start symbol; "single" (default) or "eval"
symbol -- optional grammar start symbol; "single" (default), "exec"
or "eval"
Return value / exceptions raised:
@ -130,7 +140,7 @@ class Compile:
self.flags = PyCF_DONT_IMPLY_DEDENT
def __call__(self, source, filename, symbol):
codeob = compile(source, filename, symbol, self.flags, 1)
codeob = compile(source, filename, symbol, self.flags, True)
for feature in _features:
if codeob.co_flags & feature.compiler_flag:
self.flags |= feature.compiler_flag
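A short sketch of compile_command() showing the incomplete-input protocol and the newly documented "exec" start symbol:

from codeop import compile_command

print(compile_command("def f():"))  # None: more input is expected
code = compile_command("def f():\n    return 42\n", symbol="exec")
exec(code)
print(f())  # 42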

View File

@ -14,17 +14,30 @@ list, set, and tuple.
'''
__all__ = ['deque', 'defaultdict', 'namedtuple', 'UserDict', 'UserList',
'UserString', 'Counter', 'OrderedDict', 'ChainMap']
__all__ = [
'ChainMap',
'Counter',
'OrderedDict',
'UserDict',
'UserList',
'UserString',
'defaultdict',
'deque',
'namedtuple',
]
import _collections_abc
from operator import itemgetter as _itemgetter, eq as _eq
from keyword import iskeyword as _iskeyword
import sys as _sys
import heapq as _heapq
from _weakref import proxy as _proxy
from itertools import repeat as _repeat, chain as _chain, starmap as _starmap
import sys as _sys
from itertools import chain as _chain
from itertools import repeat as _repeat
from itertools import starmap as _starmap
from keyword import iskeyword as _iskeyword
from operator import eq as _eq
from operator import itemgetter as _itemgetter
from reprlib import recursive_repr as _recursive_repr
from _weakref import proxy as _proxy
try:
from _collections import deque
@ -47,13 +60,14 @@ def __getattr__(name):
obj = getattr(_collections_abc, name)
import warnings
warnings.warn("Using or importing the ABCs from 'collections' instead "
"of from 'collections.abc' is deprecated, "
"and in 3.8 it will stop working",
"of from 'collections.abc' is deprecated since Python 3.3, "
"and in 3.10 it will stop working",
DeprecationWarning, stacklevel=2)
globals()[name] = obj
return obj
raise AttributeError(f'module {__name__!r} has no attribute {name!r}')
################################################################################
### OrderedDict
################################################################################
@ -93,16 +107,10 @@ class OrderedDict(dict):
# Individual links are kept alive by the hard reference in self.__map.
# Those hard references disappear when a key is deleted from an OrderedDict.
def __init__(*args, **kwds):
def __init__(self, other=(), /, **kwds):
'''Initialize an ordered dictionary. The signature is the same as
regular dictionaries. Keyword argument order is preserved.
'''
if not args:
raise TypeError("descriptor '__init__' of 'OrderedDict' object "
"needs an argument")
self, *args = args
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__root
except AttributeError:
@ -110,7 +118,7 @@ class OrderedDict(dict):
self.__root = root = _proxy(self.__hardroot)
root.prev = root.next = root
self.__map = {}
self.__update(*args, **kwds)
self.__update(other, **kwds)
def __setitem__(self, key, value,
dict_setitem=dict.__setitem__, proxy=_proxy, Link=_Link):
@ -299,6 +307,24 @@ class OrderedDict(dict):
return dict.__eq__(self, other) and all(map(_eq, self, other))
return dict.__eq__(self, other)
def __ior__(self, other):
self.update(other)
return self
def __or__(self, other):
if not isinstance(other, dict):
return NotImplemented
new = self.__class__(self)
new.update(other)
return new
def __ror__(self, other):
if not isinstance(other, dict):
return NotImplemented
new = self.__class__(other)
new.update(self)
return new
try:
from _collections import OrderedDict
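The three new methods give OrderedDict the PEP 584 merge operators while keeping insertion order; a minimal sketch:

from collections import OrderedDict

a = OrderedDict(x=1, y=2)
a |= {"y": 20, "z": 30}  # __ior__: in-place update, 'y' keeps its slot
print(a)                 # OrderedDict([('x', 1), ('y', 20), ('z', 30)])
print(a | {"w": 0})      # __or__: returns a new OrderedDict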
@ -311,7 +337,10 @@ except ImportError:
### namedtuple
################################################################################
_nt_itemgetters = {}
try:
from _collections import _tuplegetter
except ImportError:
_tuplegetter = lambda index, doc: property(_itemgetter(index), doc=doc)
def namedtuple(typename, field_names, *, rename=False, defaults=None, module=None):
"""Returns a new subclass of tuple with named fields.
@ -384,18 +413,23 @@ def namedtuple(typename, field_names, *, rename=False, defaults=None, module=Non
# Variables used in the methods and docstrings
field_names = tuple(map(_sys.intern, field_names))
num_fields = len(field_names)
arg_list = repr(field_names).replace("'", "")[1:-1]
arg_list = ', '.join(field_names)
if num_fields == 1:
arg_list += ','
repr_fmt = '(' + ', '.join(f'{name}=%r' for name in field_names) + ')'
tuple_new = tuple.__new__
_len = len
_dict, _tuple, _len, _map, _zip = dict, tuple, len, map, zip
# Create all the named tuple methods to be added to the class namespace
s = f'def __new__(_cls, {arg_list}): return _tuple_new(_cls, ({arg_list}))'
namespace = {'_tuple_new': tuple_new, '__name__': f'namedtuple_{typename}'}
# Note: exec() has the side-effect of interning the field names
exec(s, namespace)
__new__ = namespace['__new__']
namespace = {
'_tuple_new': tuple_new,
'__builtins__': {},
'__name__': f'namedtuple_{typename}',
}
code = f'lambda _cls, {arg_list}: _tuple_new(_cls, ({arg_list}))'
__new__ = eval(code, namespace)
__new__.__name__ = '__new__'
__new__.__doc__ = f'Create new instance of {typename}({arg_list})'
if defaults is not None:
__new__.__defaults__ = defaults
@ -410,8 +444,8 @@ def namedtuple(typename, field_names, *, rename=False, defaults=None, module=Non
_make.__func__.__doc__ = (f'Make a new {typename} object from a sequence '
'or iterable')
def _replace(_self, **kwds):
result = _self._make(map(kwds.pop, field_names, _self))
def _replace(self, /, **kwds):
result = self._make(_map(kwds.pop, field_names, self))
if kwds:
raise ValueError(f'Got unexpected field names: {list(kwds)!r}')
return result
@ -424,17 +458,22 @@ def namedtuple(typename, field_names, *, rename=False, defaults=None, module=Non
return self.__class__.__name__ + repr_fmt % self
def _asdict(self):
'Return a new OrderedDict which maps field names to their values.'
return OrderedDict(zip(self._fields, self))
'Return a new dict which maps field names to their values.'
return _dict(_zip(self._fields, self))
def __getnewargs__(self):
'Return self as a plain tuple. Used by copy and pickle.'
return tuple(self)
return _tuple(self)
# Modify function metadata to help with introspection and debugging
for method in (__new__, _make.__func__, _replace,
__repr__, _asdict, __getnewargs__):
for method in (
__new__,
_make.__func__,
_replace,
__repr__,
_asdict,
__getnewargs__,
):
method.__qualname__ = f'{typename}.{method.__name__}'
# Build-up the class namespace dictionary
@ -443,7 +482,7 @@ def namedtuple(typename, field_names, *, rename=False, defaults=None, module=Non
'__doc__': f'{typename}({arg_list})',
'__slots__': (),
'_fields': field_names,
'_fields_defaults': field_defaults,
'_field_defaults': field_defaults,
'__new__': __new__,
'_make': _make,
'_replace': _replace,
@ -451,15 +490,9 @@ def namedtuple(typename, field_names, *, rename=False, defaults=None, module=Non
'_asdict': _asdict,
'__getnewargs__': __getnewargs__,
}
cache = _nt_itemgetters
for index, name in enumerate(field_names):
try:
itemgetter_object, doc = cache[index]
except KeyError:
itemgetter_object = _itemgetter(index)
doc = f'Alias for field number {index}'
cache[index] = itemgetter_object, doc
class_namespace[name] = property(itemgetter_object, doc=doc)
doc = _sys.intern(f'Alias for field number {index}')
class_namespace[name] = _tuplegetter(index, doc)
result = type(typename, (tuple,), class_namespace)
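A minimal sketch of the reworked namedtuple: field accessors are now _tuplegetter descriptors and the defaults mapping was renamed to _field_defaults:

from collections import namedtuple

Point = namedtuple("Point", ["x", "y"], defaults=(0,))
print(Point(1))               # Point(x=1, y=0)
print(Point._field_defaults)  # {'y': 0}  (was _fields_defaults)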
@ -545,7 +578,7 @@ class Counter(dict):
# http://code.activestate.com/recipes/259174/
# Knuth, TAOCP Vol. II section 4.6.3
def __init__(*args, **kwds):
def __init__(self, iterable=None, /, **kwds):
'''Create a new, empty Counter object. And if given, count elements
from an input iterable. Or, initialize the count from another mapping
of elements to their counts.
@ -556,14 +589,8 @@ class Counter(dict):
>>> c = Counter(a=4, b=2) # a new counter from keyword args
'''
if not args:
raise TypeError("descriptor '__init__' of 'Counter' object "
"needs an argument")
self, *args = args
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
super(Counter, self).__init__()
self.update(*args, **kwds)
super().__init__()
self.update(iterable, **kwds)
def __missing__(self, key):
'The count of elements not in the Counter is zero.'
@ -574,8 +601,8 @@ class Counter(dict):
'''List the n most common elements and their counts from the most
common to the least. If n is None, then list all element counts.
>>> Counter('abcdeabcdabcaba').most_common(3)
[('a', 5), ('b', 4), ('c', 3)]
>>> Counter('abracadabra').most_common(3)
[('a', 5), ('b', 2), ('r', 2)]
'''
# Emulate Bag.sortedByCount from Smalltalk
@ -609,12 +636,17 @@ class Counter(dict):
@classmethod
def fromkeys(cls, iterable, v=None):
# There is no equivalent method for counters because setting v=1
# means that no element can have a count greater than one.
# There is no equivalent method for counters because the semantics
# would be ambiguous in cases such as Counter.fromkeys('aaabbc', v=2).
# Initializing counters to zero values isn't necessary because zero
# is already the default value for counter lookups. Initializing
# to one is easily accomplished with Counter(set(iterable)). For
# more exotic cases, create a dictionary first using a dictionary
# comprehension or dict.fromkeys().
raise NotImplementedError(
'Counter.fromkeys() is undefined. Use Counter(iterable) instead.')
def update(*args, **kwds):
def update(self, iterable=None, /, **kwds):
'''Like dict.update() but add counts instead of replacing them.
Source can be an iterable, a dictionary, or another Counter instance.
@ -634,13 +666,6 @@ class Counter(dict):
# contexts. Instead, we implement straight-addition. Both the inputs
# and outputs are allowed to contain zero and negative counts.
if not args:
raise TypeError("descriptor 'update' of 'Counter' object "
"needs an argument")
self, *args = args
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
iterable = args[0] if args else None
if iterable is not None:
if isinstance(iterable, _collections_abc.Mapping):
if self:
@ -648,13 +673,14 @@ class Counter(dict):
for elem, count in iterable.items():
self[elem] = count + self_get(elem, 0)
else:
super(Counter, self).update(iterable) # fast path when counter is empty
# fast path when counter is empty
super().update(iterable)
else:
_count_elements(self, iterable)
if kwds:
self.update(kwds)
def subtract(*args, **kwds):
def subtract(self, iterable=None, /, **kwds):
'''Like dict.update() but subtracts counts instead of replacing them.
Counts can be reduced below zero. Both the inputs and outputs are
allowed to contain zero and negative counts.
@ -670,13 +696,6 @@ class Counter(dict):
-1
'''
if not args:
raise TypeError("descriptor 'subtract' of 'Counter' object "
"needs an argument")
self, *args = args
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
iterable = args[0] if args else None
if iterable is not None:
self_get = self.get
if isinstance(iterable, _collections_abc.Mapping):
@ -702,13 +721,14 @@ class Counter(dict):
def __repr__(self):
if not self:
return '%s()' % self.__class__.__name__
return f'{self.__class__.__name__}()'
try:
items = ', '.join(map('%r: %r'.__mod__, self.most_common()))
return '%s({%s})' % (self.__class__.__name__, items)
# dict() preserves the ordering returned by most_common()
d = dict(self.most_common())
except TypeError:
# handle case where values are not orderable
return '{0}({1!r})'.format(self.__class__.__name__, dict(self))
d = dict(self)
return f'{self.__class__.__name__}({d!r})'
# Multiset-style mathematical operations discussed in:
# Knuth TAOCP Volume II section 4.6.3 exercise 19
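With update() and subtract() now taking their source as a positional-only parameter, names like 'self' and 'iterable' become usable as keyword counts; a minimal sketch matching the corrected most_common() docstring:

from collections import Counter

c = Counter("abracadabra")
print(c.most_common(3))          # [('a', 5), ('b', 2), ('r', 2)]
c.update(self=1, iterable=2)     # keywords land in **kwds, not the parameters
print(c["self"], c["iterable"])  # 1 2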
@ -718,6 +738,13 @@ class Counter(dict):
#
# To strip negative and zero counts, add-in an empty counter:
# c += Counter()
#
# Rich comparison operators for multiset subset and superset tests
# are deliberately omitted due to semantic conflicts with the
# existing inherited dict equality method. Subset and superset
# semantics ignore zero counts and require that p≤q ∧ p≥q → p=q;
# however, that would not be the case for p=Counter(a=1, b=0)
# and q=Counter(a=1) where the dictionaries are not equal.
def __add__(self, other):
'''Add counts from two counters.
@ -922,7 +949,7 @@ class ChainMap(_collections_abc.MutableMapping):
def __iter__(self):
d = {}
for mapping in reversed(self.maps):
d.update(mapping) # reuses stored hash values if possible
d.update(dict.fromkeys(mapping)) # reuses stored hash values if possible
return iter(d)
def __contains__(self, key):
@ -933,8 +960,7 @@ class ChainMap(_collections_abc.MutableMapping):
@_recursive_repr()
def __repr__(self):
return '{0.__class__.__name__}({1})'.format(
self, ', '.join(map(repr, self.maps)))
return f'{self.__class__.__name__}({", ".join(map(repr, self.maps))})'
@classmethod
def fromkeys(cls, iterable, *args):
@ -967,7 +993,7 @@ class ChainMap(_collections_abc.MutableMapping):
try:
del self.maps[0][key]
except KeyError:
raise KeyError('Key not found in the first mapping: {!r}'.format(key))
raise KeyError(f'Key not found in the first mapping: {key!r}')
def popitem(self):
'Remove and return an item pair from maps[0]. Raise KeyError if maps[0] is empty.'
@ -981,12 +1007,31 @@ class ChainMap(_collections_abc.MutableMapping):
try:
return self.maps[0].pop(key, *args)
except KeyError:
raise KeyError('Key not found in the first mapping: {!r}'.format(key))
raise KeyError(f'Key not found in the first mapping: {key!r}')
def clear(self):
'Clear maps[0], leaving maps[1:] intact.'
self.maps[0].clear()
def __ior__(self, other):
self.maps[0].update(other)
return self
def __or__(self, other):
if not isinstance(other, _collections_abc.Mapping):
return NotImplemented
m = self.copy()
m.maps[0].update(other)
return m
def __ror__(self, other):
if not isinstance(other, _collections_abc.Mapping):
return NotImplemented
m = dict(other)
for child in reversed(self.maps):
m.update(child)
return self.__class__(m)
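A minimal sketch of the new ChainMap union operators; both write only to maps[0]:

from collections import ChainMap

cm = ChainMap({"a": 1}, {"b": 2})
cm |= {"a": 10, "c": 3}     # __ior__ updates the first mapping in place
print(cm.maps)              # [{'a': 10, 'c': 3}, {'b': 2}]
print(dict(cm | {"d": 4}))  # {'b': 2, 'a': 10, 'c': 3, 'd': 4}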
################################################################################
### UserDict
@ -995,36 +1040,29 @@ class ChainMap(_collections_abc.MutableMapping):
class UserDict(_collections_abc.MutableMapping):
# Start by filling-out the abstract methods
def __init__(*args, **kwargs):
if not args:
raise TypeError("descriptor '__init__' of 'UserDict' object "
"needs an argument")
self, *args = args
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
if args:
dict = args[0]
elif 'dict' in kwargs:
dict = kwargs.pop('dict')
import warnings
warnings.warn("Passing 'dict' as keyword argument is deprecated",
DeprecationWarning, stacklevel=2)
else:
dict = None
def __init__(self, dict=None, /, **kwargs):
self.data = {}
if dict is not None:
self.update(dict)
if len(kwargs):
if kwargs:
self.update(kwargs)
def __len__(self): return len(self.data)
def __len__(self):
return len(self.data)
def __getitem__(self, key):
if key in self.data:
return self.data[key]
if hasattr(self.__class__, "__missing__"):
return self.__class__.__missing__(self, key)
raise KeyError(key)
def __setitem__(self, key, item): self.data[key] = item
def __delitem__(self, key): del self.data[key]
def __setitem__(self, key, item):
self.data[key] = item
def __delitem__(self, key):
del self.data[key]
def __iter__(self):
return iter(self.data)
@ -1033,7 +1071,37 @@ class UserDict(_collections_abc.MutableMapping):
return key in self.data
# Now, add the methods in dicts but not in MutableMapping
def __repr__(self): return repr(self.data)
def __repr__(self):
return repr(self.data)
def __or__(self, other):
if isinstance(other, UserDict):
return self.__class__(self.data | other.data)
if isinstance(other, dict):
return self.__class__(self.data | other)
return NotImplemented
def __ror__(self, other):
if isinstance(other, UserDict):
return self.__class__(other.data | self.data)
if isinstance(other, dict):
return self.__class__(other | self.data)
return NotImplemented
def __ior__(self, other):
if isinstance(other, UserDict):
self.data |= other.data
else:
self.data |= other
return self
def __copy__(self):
inst = self.__class__.__new__(self.__class__)
inst.__dict__.update(self.__dict__)
# Create a copy and avoid triggering descriptors
inst.__dict__["data"] = self.__dict__["data"].copy()
return inst
def copy(self):
if self.__class__ is UserDict:
return UserDict(self.data.copy())
@ -1046,6 +1114,7 @@ class UserDict(_collections_abc.MutableMapping):
self.data = data
c.update(self)
return c
@classmethod
def fromkeys(cls, iterable, value=None):
d = cls()
@ -1054,13 +1123,13 @@ class UserDict(_collections_abc.MutableMapping):
return d
################################################################################
### UserList
################################################################################
class UserList(_collections_abc.MutableSequence):
"""A more or less complete user-defined wrapper around list objects."""
def __init__(self, initlist=None):
self.data = []
if initlist is not None:
@ -1071,31 +1140,60 @@ class UserList(_collections_abc.MutableSequence):
self.data[:] = initlist.data[:]
else:
self.data = list(initlist)
def __repr__(self): return repr(self.data)
def __lt__(self, other): return self.data < self.__cast(other)
def __le__(self, other): return self.data <= self.__cast(other)
def __eq__(self, other): return self.data == self.__cast(other)
def __gt__(self, other): return self.data > self.__cast(other)
def __ge__(self, other): return self.data >= self.__cast(other)
def __repr__(self):
return repr(self.data)
def __lt__(self, other):
return self.data < self.__cast(other)
def __le__(self, other):
return self.data <= self.__cast(other)
def __eq__(self, other):
return self.data == self.__cast(other)
def __gt__(self, other):
return self.data > self.__cast(other)
def __ge__(self, other):
return self.data >= self.__cast(other)
def __cast(self, other):
return other.data if isinstance(other, UserList) else other
def __contains__(self, item): return item in self.data
def __len__(self): return len(self.data)
def __getitem__(self, i): return self.data[i]
def __setitem__(self, i, item): self.data[i] = item
def __delitem__(self, i): del self.data[i]
def __contains__(self, item):
return item in self.data
def __len__(self):
return len(self.data)
def __getitem__(self, i):
if isinstance(i, slice):
return self.__class__(self.data[i])
else:
return self.data[i]
def __setitem__(self, i, item):
self.data[i] = item
def __delitem__(self, i):
del self.data[i]
def __add__(self, other):
if isinstance(other, UserList):
return self.__class__(self.data + other.data)
elif isinstance(other, type(self.data)):
return self.__class__(self.data + other)
return self.__class__(self.data + list(other))
def __radd__(self, other):
if isinstance(other, UserList):
return self.__class__(other.data + self.data)
elif isinstance(other, type(self.data)):
return self.__class__(other + self.data)
return self.__class__(list(other) + self.data)
def __iadd__(self, other):
if isinstance(other, UserList):
self.data += other.data
@ -1104,22 +1202,53 @@ class UserList(_collections_abc.MutableSequence):
else:
self.data += list(other)
return self
def __mul__(self, n):
return self.__class__(self.data*n)
return self.__class__(self.data * n)
__rmul__ = __mul__
def __imul__(self, n):
self.data *= n
return self
def append(self, item): self.data.append(item)
def insert(self, i, item): self.data.insert(i, item)
def pop(self, i=-1): return self.data.pop(i)
def remove(self, item): self.data.remove(item)
def clear(self): self.data.clear()
def copy(self): return self.__class__(self)
def count(self, item): return self.data.count(item)
def index(self, item, *args): return self.data.index(item, *args)
def reverse(self): self.data.reverse()
def sort(self, *args, **kwds): self.data.sort(*args, **kwds)
def __copy__(self):
inst = self.__class__.__new__(self.__class__)
inst.__dict__.update(self.__dict__)
# Create a copy and avoid triggering descriptors
inst.__dict__["data"] = self.__dict__["data"][:]
return inst
def append(self, item):
self.data.append(item)
def insert(self, i, item):
self.data.insert(i, item)
def pop(self, i=-1):
return self.data.pop(i)
def remove(self, item):
self.data.remove(item)
def clear(self):
self.data.clear()
def copy(self):
return self.__class__(self)
def count(self, item):
return self.data.count(item)
def index(self, item, *args):
return self.data.index(item, *args)
def reverse(self):
self.data.reverse()
def sort(self, /, *args, **kwds):
self.data.sort(*args, **kwds)
def extend(self, other):
if isinstance(other, UserList):
self.data.extend(other.data)
@ -1127,12 +1256,12 @@ class UserList(_collections_abc.MutableSequence):
self.data.extend(other)
################################################################################
### UserString
################################################################################
class UserString(_collections_abc.Sequence):
def __init__(self, seq):
if isinstance(seq, str):
self.data = seq
@ -1140,12 +1269,25 @@ class UserString(_collections_abc.Sequence):
self.data = seq.data[:]
else:
self.data = str(seq)
def __str__(self): return str(self.data)
def __repr__(self): return repr(self.data)
def __int__(self): return int(self.data)
def __float__(self): return float(self.data)
def __complex__(self): return complex(self.data)
def __hash__(self): return hash(self.data)
def __str__(self):
return str(self.data)
def __repr__(self):
return repr(self.data)
def __int__(self):
return int(self.data)
def __float__(self):
return float(self.data)
def __complex__(self):
return complex(self.data)
def __hash__(self):
return hash(self.data)
def __getnewargs__(self):
return (self.data[:],)
@ -1153,18 +1295,22 @@ class UserString(_collections_abc.Sequence):
if isinstance(string, UserString):
return self.data == string.data
return self.data == string
def __lt__(self, string):
if isinstance(string, UserString):
return self.data < string.data
return self.data < string
def __le__(self, string):
if isinstance(string, UserString):
return self.data <= string.data
return self.data <= string
def __gt__(self, string):
if isinstance(string, UserString):
return self.data > string.data
return self.data > string
def __ge__(self, string):
if isinstance(string, UserString):
return self.data >= string.data
@ -1175,105 +1321,188 @@ class UserString(_collections_abc.Sequence):
char = char.data
return char in self.data
def __len__(self): return len(self.data)
def __getitem__(self, index): return self.__class__(self.data[index])
def __len__(self):
return len(self.data)
def __getitem__(self, index):
return self.__class__(self.data[index])
def __add__(self, other):
if isinstance(other, UserString):
return self.__class__(self.data + other.data)
elif isinstance(other, str):
return self.__class__(self.data + other)
return self.__class__(self.data + str(other))
def __radd__(self, other):
if isinstance(other, str):
return self.__class__(other + self.data)
return self.__class__(str(other) + self.data)
def __mul__(self, n):
return self.__class__(self.data*n)
return self.__class__(self.data * n)
__rmul__ = __mul__
def __mod__(self, args):
return self.__class__(self.data % args)
def __rmod__(self, format):
return self.__class__(format % args)
def __rmod__(self, template):
return self.__class__(str(template) % self)
# the following methods are defined in alphabetical order:
def capitalize(self): return self.__class__(self.data.capitalize())
def capitalize(self):
return self.__class__(self.data.capitalize())
def casefold(self):
return self.__class__(self.data.casefold())
def center(self, width, *args):
return self.__class__(self.data.center(width, *args))
def count(self, sub, start=0, end=_sys.maxsize):
if isinstance(sub, UserString):
sub = sub.data
return self.data.count(sub, start, end)
def encode(self, encoding=None, errors=None): # XXX improve this?
if encoding:
if errors:
return self.__class__(self.data.encode(encoding, errors))
return self.__class__(self.data.encode(encoding))
return self.__class__(self.data.encode())
def removeprefix(self, prefix, /):
if isinstance(prefix, UserString):
prefix = prefix.data
return self.__class__(self.data.removeprefix(prefix))
def removesuffix(self, suffix, /):
if isinstance(suffix, UserString):
suffix = suffix.data
return self.__class__(self.data.removesuffix(suffix))
def encode(self, encoding='utf-8', errors='strict'):
encoding = 'utf-8' if encoding is None else encoding
errors = 'strict' if errors is None else errors
return self.data.encode(encoding, errors)
def endswith(self, suffix, start=0, end=_sys.maxsize):
return self.data.endswith(suffix, start, end)
def expandtabs(self, tabsize=8):
return self.__class__(self.data.expandtabs(tabsize))
def find(self, sub, start=0, end=_sys.maxsize):
if isinstance(sub, UserString):
sub = sub.data
return self.data.find(sub, start, end)
def format(self, *args, **kwds):
def format(self, /, *args, **kwds):
return self.data.format(*args, **kwds)
def format_map(self, mapping):
return self.data.format_map(mapping)
def index(self, sub, start=0, end=_sys.maxsize):
return self.data.index(sub, start, end)
def isalpha(self): return self.data.isalpha()
def isalnum(self): return self.data.isalnum()
def isascii(self): return self.data.isascii()
def isdecimal(self): return self.data.isdecimal()
def isdigit(self): return self.data.isdigit()
def isidentifier(self): return self.data.isidentifier()
def islower(self): return self.data.islower()
def isnumeric(self): return self.data.isnumeric()
def isprintable(self): return self.data.isprintable()
def isspace(self): return self.data.isspace()
def istitle(self): return self.data.istitle()
def isupper(self): return self.data.isupper()
def join(self, seq): return self.data.join(seq)
def isalpha(self):
return self.data.isalpha()
def isalnum(self):
return self.data.isalnum()
def isascii(self):
return self.data.isascii()
def isdecimal(self):
return self.data.isdecimal()
def isdigit(self):
return self.data.isdigit()
def isidentifier(self):
return self.data.isidentifier()
def islower(self):
return self.data.islower()
def isnumeric(self):
return self.data.isnumeric()
def isprintable(self):
return self.data.isprintable()
def isspace(self):
return self.data.isspace()
def istitle(self):
return self.data.istitle()
def isupper(self):
return self.data.isupper()
def join(self, seq):
return self.data.join(seq)
def ljust(self, width, *args):
return self.__class__(self.data.ljust(width, *args))
def lower(self): return self.__class__(self.data.lower())
def lstrip(self, chars=None): return self.__class__(self.data.lstrip(chars))
def lower(self):
return self.__class__(self.data.lower())
def lstrip(self, chars=None):
return self.__class__(self.data.lstrip(chars))
maketrans = str.maketrans
def partition(self, sep):
return self.data.partition(sep)
def replace(self, old, new, maxsplit=-1):
if isinstance(old, UserString):
old = old.data
if isinstance(new, UserString):
new = new.data
return self.__class__(self.data.replace(old, new, maxsplit))
def rfind(self, sub, start=0, end=_sys.maxsize):
if isinstance(sub, UserString):
sub = sub.data
return self.data.rfind(sub, start, end)
def rindex(self, sub, start=0, end=_sys.maxsize):
return self.data.rindex(sub, start, end)
def rjust(self, width, *args):
return self.__class__(self.data.rjust(width, *args))
def rpartition(self, sep):
return self.data.rpartition(sep)
def rstrip(self, chars=None):
return self.__class__(self.data.rstrip(chars))
def split(self, sep=None, maxsplit=-1):
return self.data.split(sep, maxsplit)
def rsplit(self, sep=None, maxsplit=-1):
return self.data.rsplit(sep, maxsplit)
def splitlines(self, keepends=False): return self.data.splitlines(keepends)
def splitlines(self, keepends=False):
return self.data.splitlines(keepends)
def startswith(self, prefix, start=0, end=_sys.maxsize):
return self.data.startswith(prefix, start, end)
def strip(self, chars=None): return self.__class__(self.data.strip(chars))
def swapcase(self): return self.__class__(self.data.swapcase())
def title(self): return self.__class__(self.data.title())
def strip(self, chars=None):
return self.__class__(self.data.strip(chars))
def swapcase(self):
return self.__class__(self.data.swapcase())
def title(self):
return self.__class__(self.data.title())
def translate(self, *args):
return self.__class__(self.data.translate(*args))
def upper(self): return self.__class__(self.data.upper())
def zfill(self, width): return self.__class__(self.data.zfill(width))
def upper(self):
return self.__class__(self.data.upper())
def zfill(self, width):
return self.__class__(self.data.zfill(width))

View File

@ -1,2 +1,3 @@
from _collections_abc import *
from _collections_abc import __all__
from _collections_abc import _CallableGenericAlias

View File

@ -15,16 +15,14 @@ import sys
import importlib.util
import py_compile
import struct
import filecmp
try:
from concurrent.futures import ProcessPoolExecutor
except ImportError:
ProcessPoolExecutor = None
from functools import partial
from pathlib import Path
__all__ = ["compile_dir","compile_file","compile_path"]
def _walk_dir(dir, ddir=None, maxlevels=10, quiet=0):
def _walk_dir(dir, maxlevels, quiet=0):
if quiet < 2 and isinstance(dir, os.PathLike):
dir = os.fspath(dir)
if not quiet:
@ -40,43 +38,64 @@ def _walk_dir(dir, ddir=None, maxlevels=10, quiet=0):
if name == '__pycache__':
continue
fullname = os.path.join(dir, name)
if ddir is not None:
dfile = os.path.join(ddir, name)
else:
dfile = None
if not os.path.isdir(fullname):
yield fullname
elif (maxlevels > 0 and name != os.curdir and name != os.pardir and
os.path.isdir(fullname) and not os.path.islink(fullname)):
yield from _walk_dir(fullname, ddir=dfile,
maxlevels=maxlevels - 1, quiet=quiet)
yield from _walk_dir(fullname, maxlevels=maxlevels - 1,
quiet=quiet)
def compile_dir(dir, maxlevels=10, ddir=None, force=False, rx=None,
quiet=0, legacy=False, optimize=-1, workers=1,
invalidation_mode=py_compile.PycInvalidationMode.TIMESTAMP):
def compile_dir(dir, maxlevels=None, ddir=None, force=False,
rx=None, quiet=0, legacy=False, optimize=-1, workers=1,
invalidation_mode=None, *, stripdir=None,
prependdir=None, limit_sl_dest=None, hardlink_dupes=False):
"""Byte-compile all modules in the given directory tree.
Arguments (only dir is required):
dir: the directory to byte-compile
maxlevels: maximum recursion level (default 10)
maxlevels: maximum recursion level (default `sys.getrecursionlimit()`)
ddir: the directory that will be prepended to the path to the
file as it is compiled into each byte-code file.
force: if True, force compilation, even if timestamps are up-to-date
quiet: full output with False or 0, errors only with 1,
no output with 2
legacy: if True, produce legacy pyc paths instead of PEP 3147 paths
optimize: optimization level or -1 for level of the interpreter
optimize: int or list of optimization levels or -1 for level of
the interpreter. Multiple levels lead to multiple compiled
files, each with one optimization level.
workers: maximum number of parallel workers
invalidation_mode: how the up-to-dateness of the pyc will be checked
stripdir: part of path to left-strip from source file path
prependdir: path to prepend to beginning of original file path, applied
after stripdir
limit_sl_dest: ignore symlinks if they are pointing outside of
the defined path
hardlink_dupes: hardlink duplicated pyc files
"""
if workers is not None and workers < 0:
ProcessPoolExecutor = None
if ddir is not None and (stripdir is not None or prependdir is not None):
raise ValueError(("Destination dir (ddir) cannot be used "
"in combination with stripdir or prependdir"))
if ddir is not None:
stripdir = dir
prependdir = ddir
ddir = None
if workers < 0:
raise ValueError('workers must be greater or equal to 0')
files = _walk_dir(dir, quiet=quiet, maxlevels=maxlevels,
ddir=ddir)
if workers != 1:
try:
# Only import when needed, as low resource platforms may
# fail to import it
from concurrent.futures import ProcessPoolExecutor
except ImportError:
workers = 1
if maxlevels is None:
maxlevels = sys.getrecursionlimit()
files = _walk_dir(dir, quiet=quiet, maxlevels=maxlevels)
success = True
if workers is not None and workers != 1 and ProcessPoolExecutor is not None:
if workers != 1 and ProcessPoolExecutor is not None:
# If workers == 0, let ProcessPoolExecutor choose
workers = workers or None
with ProcessPoolExecutor(max_workers=workers) as executor:
results = executor.map(partial(compile_file,
@ -84,19 +103,27 @@ def compile_dir(dir, maxlevels=10, ddir=None, force=False, rx=None,
rx=rx, quiet=quiet,
legacy=legacy,
optimize=optimize,
invalidation_mode=invalidation_mode),
invalidation_mode=invalidation_mode,
stripdir=stripdir,
prependdir=prependdir,
limit_sl_dest=limit_sl_dest,
hardlink_dupes=hardlink_dupes),
files)
success = min(results, default=True)
else:
for file in files:
if not compile_file(file, ddir, force, rx, quiet,
legacy, optimize, invalidation_mode):
legacy, optimize, invalidation_mode,
stripdir=stripdir, prependdir=prependdir,
limit_sl_dest=limit_sl_dest,
hardlink_dupes=hardlink_dupes):
success = False
return success
def compile_file(fullname, ddir=None, force=False, rx=None, quiet=0,
legacy=False, optimize=-1,
invalidation_mode=py_compile.PycInvalidationMode.TIMESTAMP):
invalidation_mode=None, *, stripdir=None, prependdir=None,
limit_sl_dest=None, hardlink_dupes=False):
"""Byte-compile one file.
Arguments (only fullname is required):
@ -108,51 +135,114 @@ def compile_file(fullname, ddir=None, force=False, rx=None, quiet=0,
quiet: full output with False or 0, errors only with 1,
no output with 2
legacy: if True, produce legacy pyc paths instead of PEP 3147 paths
optimize: optimization level or -1 for level of the interpreter
optimize: int or list of optimization levels or -1 for level of
the interpreter. Multiple levels lead to multiple compiled
files, each with one optimization level.
invalidation_mode: how the up-to-dateness of the pyc will be checked
stripdir: part of path to left-strip from source file path
prependdir: path to prepend to beginning of original file path, applied
after stripdir
limit_sl_dest: ignore symlinks if they are pointing outside of
the defined path.
hardlink_dupes: hardlink duplicated pyc files
"""
if ddir is not None and (stripdir is not None or prependdir is not None):
raise ValueError(("Destination dir (ddir) cannot be used "
"in combination with stripdir or prependdir"))
success = True
if quiet < 2 and isinstance(fullname, os.PathLike):
fullname = os.fspath(fullname)
name = os.path.basename(fullname)
dfile = None
if ddir is not None:
dfile = os.path.join(ddir, name)
else:
dfile = None
if stripdir is not None:
fullname_parts = fullname.split(os.path.sep)
stripdir_parts = stripdir.split(os.path.sep)
ddir_parts = list(fullname_parts)
for spart, opart in zip(stripdir_parts, fullname_parts):
if spart == opart:
ddir_parts.remove(spart)
dfile = os.path.join(*ddir_parts)
if prependdir is not None:
if dfile is None:
dfile = os.path.join(prependdir, fullname)
else:
dfile = os.path.join(prependdir, dfile)
if isinstance(optimize, int):
optimize = [optimize]
# Use set() to remove duplicates.
# Use sorted() to create pyc files in a deterministic order.
optimize = sorted(set(optimize))
if hardlink_dupes and len(optimize) < 2:
raise ValueError("Hardlinking of duplicated bytecode makes sense "
"only for more than one optimization level")
if rx is not None:
mo = rx.search(fullname)
if mo:
return success
if limit_sl_dest is not None and os.path.islink(fullname):
if Path(limit_sl_dest).resolve() not in Path(fullname).resolve().parents:
return success
opt_cfiles = {}
if os.path.isfile(fullname):
if legacy:
cfile = fullname + 'c'
else:
if optimize >= 0:
opt = optimize if optimize >= 1 else ''
cfile = importlib.util.cache_from_source(
fullname, optimization=opt)
for opt_level in optimize:
if legacy:
opt_cfiles[opt_level] = fullname + 'c'
else:
cfile = importlib.util.cache_from_source(fullname)
cache_dir = os.path.dirname(cfile)
if opt_level >= 0:
opt = opt_level if opt_level >= 1 else ''
cfile = (importlib.util.cache_from_source(
fullname, optimization=opt))
opt_cfiles[opt_level] = cfile
else:
cfile = importlib.util.cache_from_source(fullname)
opt_cfiles[opt_level] = cfile
head, tail = name[:-3], name[-3:]
if tail == '.py':
if not force:
try:
mtime = int(os.stat(fullname).st_mtime)
expect = struct.pack('<4sll', importlib.util.MAGIC_NUMBER,
0, mtime)
with open(cfile, 'rb') as chandle:
actual = chandle.read(12)
if expect == actual:
expect = struct.pack('<4sLL', importlib.util.MAGIC_NUMBER,
0, mtime & 0xFFFF_FFFF)
for cfile in opt_cfiles.values():
with open(cfile, 'rb') as chandle:
actual = chandle.read(12)
if expect != actual:
break
else:
return success
except OSError:
pass
if not quiet:
print('Compiling {!r}...'.format(fullname))
try:
ok = py_compile.compile(fullname, cfile, dfile, True,
optimize=optimize,
invalidation_mode=invalidation_mode)
for index, opt_level in enumerate(optimize):
cfile = opt_cfiles[opt_level]
ok = py_compile.compile(fullname, cfile, dfile, True,
optimize=opt_level,
invalidation_mode=invalidation_mode)
if index > 0 and hardlink_dupes:
previous_cfile = opt_cfiles[optimize[index - 1]]
if filecmp.cmp(cfile, previous_cfile, shallow=False):
os.unlink(cfile)
os.link(previous_cfile, cfile)
except py_compile.PyCompileError as err:
success = False
if quiet >= 2:
@ -162,9 +252,8 @@ def compile_file(fullname, ddir=None, force=False, rx=None, quiet=0,
else:
print('*** ', end='')
# escape non-printable characters in msg
msg = err.msg.encode(sys.stdout.encoding,
errors='backslashreplace')
msg = msg.decode(sys.stdout.encoding)
encoding = sys.stdout.encoding or sys.getdefaultencoding()
msg = err.msg.encode(encoding, errors='backslashreplace').decode(encoding)
print(msg)
except (SyntaxError, UnicodeError, OSError) as e:
success = False
@ -182,7 +271,7 @@ def compile_file(fullname, ddir=None, force=False, rx=None, quiet=0,
def compile_path(skip_curdir=1, maxlevels=0, force=False, quiet=0,
legacy=False, optimize=-1,
invalidation_mode=py_compile.PycInvalidationMode.TIMESTAMP):
invalidation_mode=None):
"""Byte-compile all module on sys.path.
Arguments (all optional):
@ -221,7 +310,7 @@ def main():
parser = argparse.ArgumentParser(
description='Utilities to support installing Python libraries.')
parser.add_argument('-l', action='store_const', const=0,
default=10, dest='maxlevels',
default=None, dest='maxlevels',
help="don't recurse into subdirectories")
parser.add_argument('-r', type=int, dest='recursion',
help=('control the maximum recursion level. '
@ -239,6 +328,20 @@ def main():
'compile-time tracebacks and in runtime '
'tracebacks in cases where the source file is '
'unavailable'))
parser.add_argument('-s', metavar='STRIPDIR', dest='stripdir',
default=None,
help=('part of path to left-strip from path '
'to source file - for example buildroot. '
'`-d` and `-s` options cannot be '
'specified together.'))
parser.add_argument('-p', metavar='PREPENDDIR', dest='prependdir',
default=None,
help=('path to add as prefix to path '
'to source file - for example / to make '
'it absolute when some part is removed '
'by `-s` option. '
'`-d` and `-p` options cannot be '
'specified together.'))
parser.add_argument('-x', metavar='REGEXP', dest='rx', default=None,
help=('skip files matching the regular expression; '
'the regexp is searched for in the full path '
@ -255,9 +358,21 @@ def main():
type=int, help='Run compileall concurrently')
invalidation_modes = [mode.name.lower().replace('_', '-')
for mode in py_compile.PycInvalidationMode]
parser.add_argument('--invalidation-mode', default='timestamp',
parser.add_argument('--invalidation-mode',
choices=sorted(invalidation_modes),
help='How the pycs will be invalidated at runtime')
help=('set .pyc invalidation mode; defaults to '
'"checked-hash" if the SOURCE_DATE_EPOCH '
'environment variable is set, and '
'"timestamp" otherwise.'))
parser.add_argument('-o', action='append', type=int, dest='opt_levels',
help=('Optimization levels to run compilation with. '
'Default is -1 which uses the optimization level '
'of the Python interpreter itself (see -O).'))
parser.add_argument('-e', metavar='DIR', dest='limit_sl_dest',
help='Ignore symlinks pointing outside of the DIR')
parser.add_argument('--hardlink-dupes', action='store_true',
dest='hardlink_dupes',
help='Hardlink duplicated pyc files')
args = parser.parse_args()
compile_dests = args.compile_dest
@ -266,12 +381,26 @@ def main():
import re
args.rx = re.compile(args.rx)
if args.limit_sl_dest == "":
args.limit_sl_dest = None
if args.recursion is not None:
maxlevels = args.recursion
else:
maxlevels = args.maxlevels
if args.opt_levels is None:
args.opt_levels = [-1]
if len(args.opt_levels) == 1 and args.hardlink_dupes:
parser.error(("Hardlinking of duplicated bytecode makes sense "
"only for more than one optimization level."))
if args.ddir is not None and (
args.stripdir is not None or args.prependdir is not None
):
parser.error("-d cannot be used in combination with -s or -p")
# if flist is provided then load it
if args.flist:
try:
@ -283,11 +412,11 @@ def main():
print("Error reading file list {}".format(args.flist))
return False
if args.workers is not None:
args.workers = args.workers or None
ivl_mode = args.invalidation_mode.replace('-', '_').upper()
invalidation_mode = py_compile.PycInvalidationMode[ivl_mode]
if args.invalidation_mode:
ivl_mode = args.invalidation_mode.replace('-', '_').upper()
invalidation_mode = py_compile.PycInvalidationMode[ivl_mode]
else:
invalidation_mode = None
success = True
try:
@ -296,13 +425,23 @@ def main():
if os.path.isfile(dest):
if not compile_file(dest, args.ddir, args.force, args.rx,
args.quiet, args.legacy,
invalidation_mode=invalidation_mode):
invalidation_mode=invalidation_mode,
stripdir=args.stripdir,
prependdir=args.prependdir,
optimize=args.opt_levels,
limit_sl_dest=args.limit_sl_dest,
hardlink_dupes=args.hardlink_dupes):
success = False
else:
if not compile_dir(dest, maxlevels, args.ddir,
args.force, args.rx, args.quiet,
args.legacy, workers=args.workers,
invalidation_mode=invalidation_mode):
invalidation_mode=invalidation_mode,
stripdir=args.stripdir,
prependdir=args.prependdir,
optimize=args.opt_levels,
limit_sl_dest=args.limit_sl_dest,
hardlink_dupes=args.hardlink_dupes):
success = False
return success
else:
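
The hunks above add several compilation knobs: path stripping/prepending (-s/-p), multiple optimization levels (repeated -o), and hardlinking of duplicate pycs. As a hedged illustration only, here is a minimal sketch of driving the same options through the compileall API, assuming Python 3.9+; the /tmp/build paths are hypothetical:

import compileall

# Sketch only: paths are hypothetical. stripdir/prependdir mirror the
# -s/-p flags, optimize=[...] mirrors repeated -o, and hardlink_dupes
# mirrors --hardlink-dupes (it needs more than one optimization level).
ok = compileall.compile_dir(
    '/tmp/build/project',
    stripdir='/tmp/build',   # strip the buildroot from recorded paths
    prependdir='/',          # then make the recorded paths absolute
    optimize=[0, 1, 2],      # byte-compile at several optimization levels
    hardlink_dupes=True,     # hardlink .pyc files that came out identical
    quiet=1,
)
print('ok' if ok else 'some files failed to compile')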


@ -10,6 +10,7 @@ from concurrent.futures._base import (FIRST_COMPLETED,
ALL_COMPLETED,
CancelledError,
TimeoutError,
InvalidStateError,
BrokenExecutor,
Future,
Executor,
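
Since InvalidStateError is now re-exported from the package root, here is a minimal sketch (assuming Python 3.8+) of the condition that raises it, matching the guard added to Future.set_result later in this diff:

from concurrent.futures import Future, InvalidStateError

f = Future()
f.set_result(1)
try:
    f.set_result(2)          # a FINISHED future refuses a second result
except InvalidStateError as exc:
    print('refused:', exc)   # e.g. FINISHED: <Future at ... state=finished>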


@ -7,6 +7,7 @@ import collections
import logging
import threading
import time
import types
FIRST_COMPLETED = 'FIRST_COMPLETED'
FIRST_EXCEPTION = 'FIRST_EXCEPTION'
@ -53,6 +54,10 @@ class TimeoutError(Error):
"""The operation exceeded the given deadline."""
pass
class InvalidStateError(Error):
"""The operation is not allowed in this state."""
pass
class _Waiter(object):
"""Provides the event that wait() and as_completed() block on."""
def __init__(self):
@ -212,7 +217,7 @@ def as_completed(fs, timeout=None):
before the given timeout.
"""
if timeout is not None:
end_time = timeout + time.time()
end_time = timeout + time.monotonic()
fs = set(fs)
total_futures = len(fs)
@ -231,7 +236,7 @@ def as_completed(fs, timeout=None):
if timeout is None:
wait_timeout = None
else:
wait_timeout = end_time - time.time()
wait_timeout = end_time - time.monotonic()
if wait_timeout < 0:
raise TimeoutError(
'%d (of %d) futures unfinished' % (
@ -279,13 +284,14 @@ def wait(fs, timeout=None, return_when=ALL_COMPLETED):
A named 2-tuple of sets. The first set, named 'done', contains the
futures that completed (is finished or cancelled) before the wait
completed. The second set, named 'not_done', contains uncompleted
futures.
futures. Duplicate futures given to *fs* are removed and will be
returned only once.
"""
fs = set(fs)
with _AcquireFutures(fs):
done = set(f for f in fs
if f._state in [CANCELLED_AND_NOTIFIED, FINISHED])
not_done = set(fs) - done
done = {f for f in fs
if f._state in [CANCELLED_AND_NOTIFIED, FINISHED]}
not_done = fs - done
if (return_when == FIRST_COMPLETED) and done:
return DoneAndNotDoneFutures(done, not_done)
elif (return_when == FIRST_EXCEPTION) and done:
@ -304,7 +310,7 @@ def wait(fs, timeout=None, return_when=ALL_COMPLETED):
f._waiters.remove(waiter)
done.update(waiter.finished_futures)
return DoneAndNotDoneFutures(done, set(fs) - done)
return DoneAndNotDoneFutures(done, fs - done)
class Future(object):
"""Represents the result of an asynchronous computation."""
@ -375,13 +381,17 @@ class Future(object):
return self._state == RUNNING
def done(self):
"""Return True of the future was cancelled or finished executing."""
"""Return True if the future was cancelled or finished executing."""
with self._condition:
return self._state in [CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED]
def __get_result(self):
if self._exception:
raise self._exception
try:
raise self._exception
finally:
# Break a reference cycle with the exception in self._exception
self = None
else:
return self._result
@ -400,7 +410,10 @@ class Future(object):
if self._state not in [CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED]:
self._done_callbacks.append(fn)
return
fn(self)
try:
fn(self)
except Exception:
LOGGER.exception('exception calling callback for %r', self)
def result(self, timeout=None):
"""Return the result of the call that the future represents.
@ -418,20 +431,24 @@ class Future(object):
timeout.
Exception: If the call raised then that exception will be raised.
"""
with self._condition:
if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
raise CancelledError()
elif self._state == FINISHED:
return self.__get_result()
try:
with self._condition:
if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
raise CancelledError()
elif self._state == FINISHED:
return self.__get_result()
self._condition.wait(timeout)
self._condition.wait(timeout)
if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
raise CancelledError()
elif self._state == FINISHED:
return self.__get_result()
else:
raise TimeoutError()
if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
raise CancelledError()
elif self._state == FINISHED:
return self.__get_result()
else:
raise TimeoutError()
finally:
# Break a reference cycle with the exception in self._exception
self = None
def exception(self, timeout=None):
"""Return the exception raised by the call that the future represents.
@ -513,6 +530,8 @@ class Future(object):
Should only be used by Executor implementations and unit tests.
"""
with self._condition:
if self._state in {CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED}:
raise InvalidStateError('{}: {!r}'.format(self._state, self))
self._result = result
self._state = FINISHED
for waiter in self._waiters:
@ -526,6 +545,8 @@ class Future(object):
Should only be used by Executor implementations and unit tests.
"""
with self._condition:
if self._state in {CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED}:
raise InvalidStateError('{}: {!r}'.format(self._state, self))
self._exception = exception
self._state = FINISHED
for waiter in self._waiters:
@ -533,10 +554,12 @@ class Future(object):
self._condition.notify_all()
self._invoke_callbacks()
__class_getitem__ = classmethod(types.GenericAlias)
class Executor(object):
"""This is an abstract base class for concrete asynchronous executors."""
def submit(self, fn, *args, **kwargs):
def submit(self, fn, /, *args, **kwargs):
"""Submits a callable to be executed with the given arguments.
Schedules the callable to be executed as fn(*args, **kwargs) and returns
@ -570,7 +593,7 @@ class Executor(object):
Exception: If fn(*args) raises for any values.
"""
if timeout is not None:
end_time = timeout + time.time()
end_time = timeout + time.monotonic()
fs = [self.submit(fn, *args) for args in zip(*iterables)]
@ -585,13 +608,13 @@ class Executor(object):
if timeout is None:
yield fs.pop().result()
else:
yield fs.pop().result(end_time - time.time())
yield fs.pop().result(end_time - time.monotonic())
finally:
for future in fs:
future.cancel()
return result_iterator()
def shutdown(self, wait=True):
def shutdown(self, wait=True, *, cancel_futures=False):
"""Clean-up the resources associated with the Executor.
It is safe to call this method several times. Otherwise, no other
@ -601,6 +624,9 @@ class Executor(object):
wait: If True then shutdown will not return until all running
futures have finished executing and the resources used by the
executor have been reclaimed.
cancel_futures: If True then shutdown will cancel all pending
futures. Futures that are completed or running will not be
cancelled.
"""
pass
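
A minimal sketch (assuming Python 3.8+) of the documented wait() behavior above: duplicate futures are collapsed, and the call returns a named (done, not_done) pair of sets:

import concurrent.futures as cf

f = cf.Future()
f.set_result('x')
done, not_done = cf.wait([f, f, f], timeout=1)
assert done == {f} and not not_done    # duplicates are returned only once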


@ -3,7 +3,7 @@
"""Implements ProcessPoolExecutor.
The follow diagram and text describe the data-flow through the system:
The following diagram and text describe the data-flow through the system:
|======================= In-process =====================|== Out-of-process ==|
@ -45,33 +45,19 @@ Process #1..n:
__author__ = 'Brian Quinlan (brian@sweetapp.com)'
import atexit
import os
from concurrent.futures import _base
import queue
from queue import Full
import multiprocessing as mp
from multiprocessing.connection import wait
import multiprocessing.connection
from multiprocessing.queues import Queue
import threading
import weakref
from functools import partial
import itertools
import sys
import traceback
# Workers are created as daemon threads and processes. This is done to allow the
# interpreter to exit when there are still idle processes in a
# ProcessPoolExecutor's process pool (i.e. shutdown() was not called). However,
# allowing workers to die with the interpreter has two undesirable properties:
# - The workers would still be running during interpreter shutdown,
# meaning that they would fail in unpredictable ways.
# - The workers could be killed while evaluating a work item, which could
# be bad if the callable being evaluated has external side-effects e.g.
# writing to a file.
#
# To work around this problem, an exit handler is installed which tells the
# workers to exit when their work queues are empty and then waits until the
# threads/processes finish.
_threads_wakeups = weakref.WeakKeyDictionary()
_global_shutdown = False
@ -79,18 +65,23 @@ _global_shutdown = False
class _ThreadWakeup:
def __init__(self):
self._closed = False
self._reader, self._writer = mp.Pipe(duplex=False)
def close(self):
self._writer.close()
self._reader.close()
if not self._closed:
self._closed = True
self._writer.close()
self._reader.close()
def wakeup(self):
self._writer.send_bytes(b"")
if not self._closed:
self._writer.send_bytes(b"")
def clear(self):
while self._reader.poll():
self._reader.recv_bytes()
if not self._closed:
while self._reader.poll():
self._reader.recv_bytes()
def _python_exit():
@ -98,10 +89,17 @@ def _python_exit():
_global_shutdown = True
items = list(_threads_wakeups.items())
for _, thread_wakeup in items:
# call not protected by ProcessPoolExecutor._shutdown_lock
thread_wakeup.wakeup()
for t, _ in items:
t.join()
# Register for `_python_exit()` to be called just before joining all
# non-daemon threads. This is used instead of `atexit.register()` for
# compatibility with subinterpreters, which no longer support daemon threads.
# See bpo-39812 for context.
threading._register_atexit(_python_exit)
# Controls how many more calls than processes will be queued in the call queue.
# A smaller number will mean that processes spend more time idle waiting for
# work while a larger number will make Future.cancel() succeed less frequently
@ -109,6 +107,12 @@ def _python_exit():
EXTRA_QUEUED_CALLS = 1
# On Windows, WaitForMultipleObjects is used to wait for processes to finish.
# It can wait on, at most, 63 objects. There is an overhead of two objects:
# - the result queue reader
# - the thread wakeup reader
_MAX_WINDOWS_WORKERS = 63 - 2
# Hack to embed stringification of remote traceback in local traceback
class _RemoteTraceback(Exception):
@ -122,6 +126,9 @@ class _ExceptionWithTraceback:
tb = traceback.format_exception(type(exc), exc, tb)
tb = ''.join(tb)
self.exc = exc
# Traceback object needs to be garbage-collected as its frames
# contain references to all the objects in the exception scope
self.exc.__traceback__ = None
self.tb = '\n"""\n%s"""' % tb
def __reduce__(self):
return _rebuild_exc, (self.exc, self.tb)
@ -153,8 +160,11 @@ class _CallItem(object):
class _SafeQueue(Queue):
"""Safe Queue set exception to the future object linked to a job"""
def __init__(self, max_size=0, *, ctx, pending_work_items):
def __init__(self, max_size=0, *, ctx, pending_work_items, shutdown_lock,
thread_wakeup):
self.pending_work_items = pending_work_items
self.shutdown_lock = shutdown_lock
self.thread_wakeup = thread_wakeup
super().__init__(max_size, ctx=ctx)
def _on_queue_feeder_error(self, e, obj):
@ -162,8 +172,11 @@ class _SafeQueue(Queue):
tb = traceback.format_exception(type(e), e, e.__traceback__)
e.__cause__ = _RemoteTraceback('\n"""\n{}"""'.format(''.join(tb)))
work_item = self.pending_work_items.pop(obj.work_id, None)
# work_item can be None if another process terminated. In this case,
# the queue_manager_thread fails all work_items with BrokenProcessPool
with self.shutdown_lock:
self.thread_wakeup.wakeup()
# work_item can be None if another process terminated. In this
# case, the executor_manager_thread fails all work_items
# with BrokenProcessPool
if work_item is not None:
work_item.future.set_exception(e)
else:
@ -179,6 +192,7 @@ def _get_chunks(*iterables, chunksize):
return
yield chunk
def _process_chunk(fn, chunk):
""" Processes a chunk of an iterable passed to map.
@ -235,126 +249,139 @@ def _process_worker(call_queue, result_queue, initializer, initargs):
_sendback_result(result_queue, call_item.work_id, exception=exc)
else:
_sendback_result(result_queue, call_item.work_id, result=r)
del r
# Liberate the resource as soon as possible, to avoid holding onto
# open files or shared memory that is not needed anymore
del call_item
def _add_call_item_to_queue(pending_work_items,
work_ids,
call_queue):
"""Fills call_queue with _WorkItems from pending_work_items.
This function never blocks.
Args:
pending_work_items: A dict mapping work ids to _WorkItems e.g.
{5: <_WorkItem...>, 6: <_WorkItem...>, ...}
work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). Work ids
are consumed and the corresponding _WorkItems from
pending_work_items are transformed into _CallItems and put in
call_queue.
call_queue: A multiprocessing.Queue that will be filled with _CallItems
derived from _WorkItems.
"""
while True:
if call_queue.full():
return
try:
work_id = work_ids.get(block=False)
except queue.Empty:
return
else:
work_item = pending_work_items[work_id]
if work_item.future.set_running_or_notify_cancel():
call_queue.put(_CallItem(work_id,
work_item.fn,
work_item.args,
work_item.kwargs),
block=True)
else:
del pending_work_items[work_id]
continue
def _queue_management_worker(executor_reference,
processes,
pending_work_items,
work_ids_queue,
call_queue,
result_queue,
thread_wakeup):
class _ExecutorManagerThread(threading.Thread):
"""Manages the communication between this process and the worker processes.
This function is run in a local thread.
The manager is run in a local thread.
Args:
executor_reference: A weakref.ref to the ProcessPoolExecutor that owns
this thread. Used to determine if the ProcessPoolExecutor has been
garbage collected and that this function can exit.
process: A list of the ctx.Process instances used as
workers.
pending_work_items: A dict mapping work ids to _WorkItems e.g.
{5: <_WorkItem...>, 6: <_WorkItem...>, ...}
work_ids_queue: A queue.Queue of work ids e.g. Queue([5, 6, ...]).
call_queue: A ctx.Queue that will be filled with _CallItems
derived from _WorkItems for processing by the process workers.
result_queue: A ctx.SimpleQueue of _ResultItems generated by the
process workers.
thread_wakeup: A _ThreadWakeup to allow waking up the
queue_manager_thread from the main Thread and avoid deadlocks
caused by permanently locked queues.
executor: A reference to the ProcessPoolExecutor that owns
this thread. A weakref will be owned by the manager, as well as
references to internal objects used to introspect the state of
the executor.
"""
executor = None
def shutting_down():
return (_global_shutdown or executor is None
or executor._shutdown_thread)
def __init__(self, executor):
# Store references to necessary internals of the executor.
def shutdown_worker():
# This is an upper bound on the number of children alive.
n_children_alive = sum(p.is_alive() for p in processes.values())
n_children_to_stop = n_children_alive
n_sentinels_sent = 0
# Send the right number of sentinels, to make sure all children are
# properly terminated.
while n_sentinels_sent < n_children_to_stop and n_children_alive > 0:
for i in range(n_children_to_stop - n_sentinels_sent):
try:
call_queue.put_nowait(None)
n_sentinels_sent += 1
except Full:
break
n_children_alive = sum(p.is_alive() for p in processes.values())
# A _ThreadWakeup to allow waking up the queue_manager_thread from the
# main Thread and avoid deadlocks caused by permanently locked queues.
self.thread_wakeup = executor._executor_manager_thread_wakeup
self.shutdown_lock = executor._shutdown_lock
# Release the queue's resources as soon as possible.
call_queue.close()
# If .join() is not called on the created processes then
# some ctx.Queue methods may deadlock on Mac OS X.
for p in processes.values():
p.join()
# A weakref.ref to the ProcessPoolExecutor that owns this thread. Used
# to determine if the ProcessPoolExecutor has been garbage collected
# and that the manager can exit.
# When the executor gets garbage collected, the weakref callback
# will wake up the queue management thread so that it can terminate
# if there is no pending work item.
def weakref_cb(_,
thread_wakeup=self.thread_wakeup,
shutdown_lock=self.shutdown_lock):
mp.util.debug('Executor collected: triggering callback for'
' QueueManager wakeup')
with shutdown_lock:
thread_wakeup.wakeup()
result_reader = result_queue._reader
wakeup_reader = thread_wakeup._reader
readers = [result_reader, wakeup_reader]
self.executor_reference = weakref.ref(executor, weakref_cb)
while True:
_add_call_item_to_queue(pending_work_items,
work_ids_queue,
call_queue)
# A list of the ctx.Process instances used as workers.
self.processes = executor._processes
# A ctx.Queue that will be filled with _CallItems derived from
# _WorkItems for processing by the process workers.
self.call_queue = executor._call_queue
# A ctx.SimpleQueue of _ResultItems generated by the process workers.
self.result_queue = executor._result_queue
# A queue.Queue of work ids e.g. Queue([5, 6, ...]).
self.work_ids_queue = executor._work_ids
# A dict mapping work ids to _WorkItems e.g.
# {5: <_WorkItem...>, 6: <_WorkItem...>, ...}
self.pending_work_items = executor._pending_work_items
super().__init__()
def run(self):
# Main loop for the executor manager thread.
while True:
self.add_call_item_to_queue()
result_item, is_broken, cause = self.wait_result_broken_or_wakeup()
if is_broken:
self.terminate_broken(cause)
return
if result_item is not None:
self.process_result_item(result_item)
# Delete reference to result_item to avoid keeping references
# while waiting on new results.
del result_item
# attempt to increment idle process count
executor = self.executor_reference()
if executor is not None:
executor._idle_worker_semaphore.release()
del executor
if self.is_shutting_down():
self.flag_executor_shutting_down()
# Since no new work items can be added, it is safe to shutdown
# this thread if there are no pending work items.
if not self.pending_work_items:
self.join_executor_internals()
return
def add_call_item_to_queue(self):
# Fills call_queue with _WorkItems from pending_work_items.
# This function never blocks.
while True:
if self.call_queue.full():
return
try:
work_id = self.work_ids_queue.get(block=False)
except queue.Empty:
return
else:
work_item = self.pending_work_items[work_id]
if work_item.future.set_running_or_notify_cancel():
self.call_queue.put(_CallItem(work_id,
work_item.fn,
work_item.args,
work_item.kwargs),
block=True)
else:
del self.pending_work_items[work_id]
continue
def wait_result_broken_or_wakeup(self):
# Wait for a result to be ready in the result_queue while checking
# that all worker processes are still running, or for a wake-up
# signal to be sent. The wake-up signals come either from new tasks being
# submitted, from the executor being shutdown/gc-ed, or from the
# shutdown of the python interpreter.
worker_sentinels = [p.sentinel for p in processes.values()]
ready = wait(readers + worker_sentinels)
result_reader = self.result_queue._reader
assert not self.thread_wakeup._closed
wakeup_reader = self.thread_wakeup._reader
readers = [result_reader, wakeup_reader]
worker_sentinels = [p.sentinel for p in list(self.processes.values())]
ready = mp.connection.wait(readers + worker_sentinels)
cause = None
is_broken = True
result_item = None
if result_reader in ready:
try:
result_item = result_reader.recv()
@ -364,79 +391,138 @@ def _queue_management_worker(executor_reference,
elif wakeup_reader in ready:
is_broken = False
result_item = None
thread_wakeup.clear()
if is_broken:
# Mark the process pool broken so that submits fail right now.
executor = executor_reference()
if executor is not None:
executor._broken = ('A child process terminated '
'abruptly, the process pool is not '
'usable anymore')
executor._shutdown_thread = True
executor = None
bpe = BrokenProcessPool("A process in the process pool was "
"terminated abruptly while the future was "
"running or pending.")
if cause is not None:
bpe.__cause__ = _RemoteTraceback(
f"\n'''\n{''.join(cause)}'''")
# All futures in flight must be marked failed
for work_id, work_item in pending_work_items.items():
work_item.future.set_exception(bpe)
# Delete references to object. See issue16284
del work_item
pending_work_items.clear()
# Terminate remaining workers forcibly: the queues or their
# locks may be in a dirty state and block forever.
for p in processes.values():
p.terminate()
shutdown_worker()
return
with self.shutdown_lock:
self.thread_wakeup.clear()
return result_item, is_broken, cause
def process_result_item(self, result_item):
# Process a received result_item. This can be either the PID of a
# worker that exited gracefully or a _ResultItem
if isinstance(result_item, int):
# Clean shutdown of a worker using its PID
# (avoids marking the executor broken)
assert shutting_down()
p = processes.pop(result_item)
assert self.is_shutting_down()
p = self.processes.pop(result_item)
p.join()
if not processes:
shutdown_worker()
if not self.processes:
self.join_executor_internals()
return
elif result_item is not None:
work_item = pending_work_items.pop(result_item.work_id, None)
else:
# Received a _ResultItem so mark the future as completed.
work_item = self.pending_work_items.pop(result_item.work_id, None)
# work_item can be None if another process terminated (see above)
if work_item is not None:
if result_item.exception:
work_item.future.set_exception(result_item.exception)
else:
work_item.future.set_result(result_item.result)
# Delete references to object. See issue16284
del work_item
# Delete reference to result_item
del result_item
# Check whether we should start shutting down.
executor = executor_reference()
def is_shutting_down(self):
# Check whether we should start shutting down the executor.
executor = self.executor_reference()
# No more work items can be added if:
# - The interpreter is shutting down OR
# - The executor that owns this worker has been collected OR
# - The executor that owns this worker has been shutdown.
if shutting_down():
try:
# Flag the executor as shutting down as early as possible if it
# is not gc-ed yet.
if executor is not None:
executor._shutdown_thread = True
# Since no new work items can be added, it is safe to shutdown
# this thread if there are no pending work items.
if not pending_work_items:
shutdown_worker()
return
except Full:
# This is not a problem: we will eventually be woken up (in
# result_queue.get()) and be able to send a sentinel again.
pass
executor = None
return (_global_shutdown or executor is None
or executor._shutdown_thread)
def terminate_broken(self, cause):
# Terminate the executor because it is in a broken state. The cause
# argument can be used to display more information on the error that
# led the executor into becoming broken.
# Mark the process pool broken so that submits fail right now.
executor = self.executor_reference()
if executor is not None:
executor._broken = ('A child process terminated '
'abruptly, the process pool is not '
'usable anymore')
executor._shutdown_thread = True
executor = None
# All pending tasks are to be marked failed with the following
# BrokenProcessPool error
bpe = BrokenProcessPool("A process in the process pool was "
"terminated abruptly while the future was "
"running or pending.")
if cause is not None:
bpe.__cause__ = _RemoteTraceback(
f"\n'''\n{''.join(cause)}'''")
# Mark pending tasks as failed.
for work_id, work_item in self.pending_work_items.items():
work_item.future.set_exception(bpe)
# Delete references to object. See issue16284
del work_item
self.pending_work_items.clear()
# Terminate remaining workers forcibly: the queues or their
# locks may be in a dirty state and block forever.
for p in self.processes.values():
p.terminate()
# clean up resources
self.join_executor_internals()
def flag_executor_shutting_down(self):
# Flag the executor as shutting down and cancel remaining tasks if
# requested as early as possible if it is not gc-ed yet.
executor = self.executor_reference()
if executor is not None:
executor._shutdown_thread = True
# Cancel pending work items if requested.
if executor._cancel_pending_futures:
# Cancel all pending futures and update pending_work_items
# to only have futures that are currently running.
new_pending_work_items = {}
for work_id, work_item in self.pending_work_items.items():
if not work_item.future.cancel():
new_pending_work_items[work_id] = work_item
self.pending_work_items = new_pending_work_items
# Drain work_ids_queue since we no longer need to
# add items to the call queue.
while True:
try:
self.work_ids_queue.get_nowait()
except queue.Empty:
break
# Make sure we do this only once to not waste time looping
# on running processes over and over.
executor._cancel_pending_futures = False
def shutdown_workers(self):
n_children_to_stop = self.get_n_children_alive()
n_sentinels_sent = 0
# Send the right number of sentinels, to make sure all children are
# properly terminated.
while (n_sentinels_sent < n_children_to_stop
and self.get_n_children_alive() > 0):
for i in range(n_children_to_stop - n_sentinels_sent):
try:
self.call_queue.put_nowait(None)
n_sentinels_sent += 1
except queue.Full:
break
def join_executor_internals(self):
self.shutdown_workers()
# Release the queue's resources as soon as possible.
self.call_queue.close()
self.call_queue.join_thread()
with self.shutdown_lock:
self.thread_wakeup.close()
# If .join() is not called on the created processes then
# some ctx.Queue methods may deadlock on Mac OS X.
for p in self.processes.values():
p.join()
def get_n_children_alive(self):
# This is an upper bound on the number of children alive.
return sum(p.is_alive() for p in self.processes.values())
_system_limits_checked = False
@ -497,16 +583,23 @@ class ProcessPoolExecutor(_base.Executor):
worker processes will be created as the machine has processors.
mp_context: A multiprocessing context to launch the workers. This
object should provide SimpleQueue, Queue and Process.
initializer: An callable used to initialize worker processes.
initializer: A callable used to initialize worker processes.
initargs: A tuple of arguments to pass to the initializer.
"""
_check_system_limits()
if max_workers is None:
self._max_workers = os.cpu_count() or 1
if sys.platform == 'win32':
self._max_workers = min(_MAX_WINDOWS_WORKERS,
self._max_workers)
else:
if max_workers <= 0:
raise ValueError("max_workers must be greater than 0")
elif (sys.platform == 'win32' and
max_workers > _MAX_WINDOWS_WORKERS):
raise ValueError(
f"max_workers must be <= {_MAX_WINDOWS_WORKERS}")
self._max_workers = max_workers
@ -514,13 +607,17 @@ class ProcessPoolExecutor(_base.Executor):
mp_context = mp.get_context()
self._mp_context = mp_context
# https://github.com/python/cpython/issues/90622
self._safe_to_dynamically_spawn_children = (
self._mp_context.get_start_method(allow_none=False) != "fork")
if initializer is not None and not callable(initializer):
raise TypeError("initializer must be a callable")
self._initializer = initializer
self._initargs = initargs
# Management thread
self._queue_management_thread = None
self._executor_manager_thread = None
# Map of pids to processes
self._processes = {}
@ -528,9 +625,21 @@ class ProcessPoolExecutor(_base.Executor):
# Shutdown is a two-step process.
self._shutdown_thread = False
self._shutdown_lock = threading.Lock()
self._idle_worker_semaphore = threading.Semaphore(0)
self._broken = False
self._queue_count = 0
self._pending_work_items = {}
self._cancel_pending_futures = False
# _ThreadWakeup is a communication channel used to interrupt the wait
# of the main loop of executor_manager_thread from another thread (e.g.
# when calling executor.submit or executor.shutdown). We do not use the
# _result_queue to send wakeup signals to the executor_manager_thread
# as it could result in a deadlock if a worker process dies with the
# _result_queue write lock still acquired.
#
# _shutdown_lock must be locked to access _ThreadWakeup.
self._executor_manager_thread_wakeup = _ThreadWakeup()
# Create communication channels for the executor
# Make the call queue slightly larger than the number of processes to
@ -539,7 +648,9 @@ class ProcessPoolExecutor(_base.Executor):
queue_size = self._max_workers + EXTRA_QUEUED_CALLS
self._call_queue = _SafeQueue(
max_size=queue_size, ctx=self._mp_context,
pending_work_items=self._pending_work_items)
pending_work_items=self._pending_work_items,
shutdown_lock=self._shutdown_lock,
thread_wakeup=self._executor_manager_thread_wakeup)
# Killed worker processes can produce spurious "broken pipe"
# tracebacks in the queue's own worker thread. But we detect killed
# processes anyway, so silence the tracebacks.
@ -547,53 +658,50 @@ class ProcessPoolExecutor(_base.Executor):
self._result_queue = mp_context.SimpleQueue()
self._work_ids = queue.Queue()
# _ThreadWakeup is a communication channel used to interrupt the wait
# of the main loop of queue_manager_thread from another thread (e.g.
# when calling executor.submit or executor.shutdown). We do not use the
# _result_queue to send the wakeup signal to the queue_manager_thread
# as it could result in a deadlock if a worker process dies with the
# _result_queue write lock still acquired.
self._queue_management_thread_wakeup = _ThreadWakeup()
def _start_queue_management_thread(self):
if self._queue_management_thread is None:
# When the executor gets garbarge collected, the weakref callback
# will wake up the queue management thread so that it can terminate
# if there is no pending work item.
def weakref_cb(_,
thread_wakeup=self._queue_management_thread_wakeup):
mp.util.debug('Executor collected: triggering callback for'
' QueueManager wakeup')
thread_wakeup.wakeup()
def _start_executor_manager_thread(self):
if self._executor_manager_thread is None:
# Start the processes so that their sentinels are known.
self._adjust_process_count()
self._queue_management_thread = threading.Thread(
target=_queue_management_worker,
args=(weakref.ref(self, weakref_cb),
self._processes,
self._pending_work_items,
self._work_ids,
self._call_queue,
self._result_queue,
self._queue_management_thread_wakeup),
name="QueueManagerThread")
self._queue_management_thread.daemon = True
self._queue_management_thread.start()
_threads_wakeups[self._queue_management_thread] = \
self._queue_management_thread_wakeup
if not self._safe_to_dynamically_spawn_children: # ie, using fork.
self._launch_processes()
self._executor_manager_thread = _ExecutorManagerThread(self)
self._executor_manager_thread.start()
_threads_wakeups[self._executor_manager_thread] = \
self._executor_manager_thread_wakeup
def _adjust_process_count(self):
for _ in range(len(self._processes), self._max_workers):
p = self._mp_context.Process(
target=_process_worker,
args=(self._call_queue,
self._result_queue,
self._initializer,
self._initargs))
p.start()
self._processes[p.pid] = p
# if there's an idle process, we don't need to spawn a new one.
if self._idle_worker_semaphore.acquire(blocking=False):
return
def submit(self, fn, *args, **kwargs):
process_count = len(self._processes)
if process_count < self._max_workers:
# Assertion disabled as this codepath is also used to replace a
# worker that unexpectedly dies, even when using the 'fork' start
# method. That means there is still a potential deadlock bug. If a
# 'fork' mp_context worker dies, we'll be forking a new one when
# we know a thread is running (self._executor_manager_thread).
#assert self._safe_to_dynamically_spawn_children or not self._executor_manager_thread, 'https://github.com/python/cpython/issues/90622'
self._spawn_process()
def _launch_processes(self):
# https://github.com/python/cpython/issues/90622
assert not self._executor_manager_thread, (
'Processes cannot be fork()ed after the thread has started, '
'deadlock in the child processes could result.')
for _ in range(len(self._processes), self._max_workers):
self._spawn_process()
def _spawn_process(self):
p = self._mp_context.Process(
target=_process_worker,
args=(self._call_queue,
self._result_queue,
self._initializer,
self._initargs))
p.start()
self._processes[p.pid] = p
def submit(self, fn, /, *args, **kwargs):
with self._shutdown_lock:
if self._broken:
raise BrokenProcessPool(self._broken)
@ -610,9 +718,11 @@ class ProcessPoolExecutor(_base.Executor):
self._work_ids.put(self._queue_count)
self._queue_count += 1
# Wake up queue management thread
self._queue_management_thread_wakeup.wakeup()
self._executor_manager_thread_wakeup.wakeup()
self._start_queue_management_thread()
if self._safe_to_dynamically_spawn_children:
self._adjust_process_count()
self._start_executor_manager_thread()
return f
submit.__doc__ = _base.Executor.submit.__doc__
@ -645,29 +755,24 @@ class ProcessPoolExecutor(_base.Executor):
timeout=timeout)
return _chain_from_iterable_of_lists(results)
def shutdown(self, wait=True):
def shutdown(self, wait=True, *, cancel_futures=False):
with self._shutdown_lock:
self._cancel_pending_futures = cancel_futures
self._shutdown_thread = True
if self._queue_management_thread:
# Wake up queue management thread
self._queue_management_thread_wakeup.wakeup()
if wait:
self._queue_management_thread.join()
if self._executor_manager_thread_wakeup is not None:
# Wake up queue management thread
self._executor_manager_thread_wakeup.wakeup()
if self._executor_manager_thread is not None and wait:
self._executor_manager_thread.join()
# To reduce the risk of opening too many files, remove references to
# objects that use file descriptors.
self._queue_management_thread = None
if self._call_queue is not None:
self._call_queue.close()
if wait:
self._call_queue.join_thread()
self._call_queue = None
self._executor_manager_thread = None
self._call_queue = None
if self._result_queue is not None and wait:
self._result_queue.close()
self._result_queue = None
self._processes = None
if self._queue_management_thread_wakeup:
self._queue_management_thread_wakeup.close()
self._queue_management_thread_wakeup = None
self._executor_manager_thread_wakeup = None
shutdown.__doc__ = _base.Executor.shutdown.__doc__
atexit.register(_python_exit)
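
A hedged sketch of the terminate_broken() path above: if a worker dies abruptly, pending futures fail with BrokenProcessPool and further submits are refused (assuming Python 3.9+; os._exit is used here only to simulate the crash):

import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

if __name__ == '__main__':              # required for spawn-based start methods
    with ProcessPoolExecutor(max_workers=1) as ex:
        fut = ex.submit(os._exit, 1)    # kill the worker mid-task
        try:
            fut.result()
        except BrokenProcessPool as exc:
            print('pool broken:', exc)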


@ -5,41 +5,42 @@
__author__ = 'Brian Quinlan (brian@sweetapp.com)'
import atexit
from concurrent.futures import _base
import itertools
import queue
import threading
import types
import weakref
import os
# Workers are created as daemon threads. This is done to allow the interpreter
# to exit when there are still idle threads in a ThreadPoolExecutor's thread
# pool (i.e. shutdown() was not called). However, allowing workers to die with
# the interpreter has two undesirable properties:
# - The workers would still be running during interpreter shutdown,
# meaning that they would fail in unpredictable ways.
# - The workers could be killed while evaluating a work item, which could
# be bad if the callable being evaluated has external side-effects e.g.
# writing to a file.
#
# To work around this problem, an exit handler is installed which tells the
# workers to exit when their work queues are empty and then waits until the
# threads finish.
_threads_queues = weakref.WeakKeyDictionary()
_shutdown = False
# Lock that ensures that new workers are not created while the interpreter is
# shutting down. Must be held while mutating _threads_queues and _shutdown.
_global_shutdown_lock = threading.Lock()
def _python_exit():
global _shutdown
_shutdown = True
with _global_shutdown_lock:
_shutdown = True
items = list(_threads_queues.items())
for t, q in items:
q.put(None)
for t, q in items:
t.join()
atexit.register(_python_exit)
# Register for `_python_exit()` to be called just before joining all
# non-daemon threads. This is used instead of `atexit.register()` for
# compatibility with subinterpreters, which no longer support daemon threads.
# See bpo-39812 for context.
threading._register_atexit(_python_exit)
# At fork, reinitialize the `_global_shutdown_lock` lock in the child process
if hasattr(os, 'register_at_fork'):
os.register_at_fork(before=_global_shutdown_lock.acquire,
after_in_child=_global_shutdown_lock._at_fork_reinit,
after_in_parent=_global_shutdown_lock.release)
class _WorkItem(object):
@ -62,6 +63,8 @@ class _WorkItem(object):
else:
self.future.set_result(result)
__class_getitem__ = classmethod(types.GenericAlias)
def _worker(executor_reference, work_queue, initializer, initargs):
if initializer is not None:
@ -80,7 +83,14 @@ def _worker(executor_reference, work_queue, initializer, initargs):
work_item.run()
# Delete references to object. See issue16284
del work_item
# attempt to increment idle count
executor = executor_reference()
if executor is not None:
executor._idle_semaphore.release()
del executor
continue
executor = executor_reference()
# Exit if:
# - The interpreter is shutting down OR
@ -118,13 +128,18 @@ class ThreadPoolExecutor(_base.Executor):
max_workers: The maximum number of threads that can be used to
execute the given calls.
thread_name_prefix: An optional name prefix to give our threads.
initializer: An callable used to initialize worker threads.
initializer: A callable used to initialize worker threads.
initargs: A tuple of arguments to pass to the initializer.
"""
if max_workers is None:
# Use this number because ThreadPoolExecutor is often
# used to overlap I/O instead of CPU work.
max_workers = (os.cpu_count() or 1) * 5
# ThreadPoolExecutor is often used to:
# * run CPU-bound tasks that release the GIL
# * run I/O-bound tasks (which release the GIL, of course)
#
# We use cpu_count + 4 for both types of tasks.
# But we limit it to 32 to avoid consuming surprisingly large resource
# on many core machine.
max_workers = min(32, (os.cpu_count() or 1) + 4)
if max_workers <= 0:
raise ValueError("max_workers must be greater than 0")
@ -133,6 +148,7 @@ class ThreadPoolExecutor(_base.Executor):
self._max_workers = max_workers
self._work_queue = queue.SimpleQueue()
self._idle_semaphore = threading.Semaphore(0)
self._threads = set()
self._broken = False
self._shutdown = False
@ -142,15 +158,15 @@ class ThreadPoolExecutor(_base.Executor):
self._initializer = initializer
self._initargs = initargs
def submit(self, fn, *args, **kwargs):
with self._shutdown_lock:
def submit(self, fn, /, *args, **kwargs):
with self._shutdown_lock, _global_shutdown_lock:
if self._broken:
raise BrokenThreadPool(self._broken)
if self._shutdown:
raise RuntimeError('cannot schedule new futures after shutdown')
if _shutdown:
raise RuntimeError('cannot schedule new futures after'
raise RuntimeError('cannot schedule new futures after '
'interpreter shutdown')
f = _base.Future()
@ -162,12 +178,15 @@ class ThreadPoolExecutor(_base.Executor):
submit.__doc__ = _base.Executor.submit.__doc__
def _adjust_thread_count(self):
# if idle threads are available, don't spin new threads
if self._idle_semaphore.acquire(timeout=0):
return
# When the executor gets lost, the weakref callback will wake up
# the worker threads.
def weakref_cb(_, q=self._work_queue):
q.put(None)
# TODO(bquinlan): Should avoid creating new threads if there are more
# idle threads than items in the work queue.
num_threads = len(self._threads)
if num_threads < self._max_workers:
thread_name = '%s_%d' % (self._thread_name_prefix or self,
@ -177,7 +196,6 @@ class ThreadPoolExecutor(_base.Executor):
self._work_queue,
self._initializer,
self._initargs))
t.daemon = True
t.start()
self._threads.add(t)
_threads_queues[t] = self._work_queue
@ -195,9 +213,22 @@ class ThreadPoolExecutor(_base.Executor):
if work_item is not None:
work_item.future.set_exception(BrokenThreadPool(self._broken))
def shutdown(self, wait=True):
def shutdown(self, wait=True, *, cancel_futures=False):
with self._shutdown_lock:
self._shutdown = True
if cancel_futures:
# Drain all work items from the queue, and then cancel their
# associated futures.
while True:
try:
work_item = self._work_queue.get_nowait()
except queue.Empty:
break
if work_item is not None:
work_item.future.cancel()
# Send a wake-up to prevent threads calling
# _work_queue.get(block=True) from permanently blocking.
self._work_queue.put(None)
if wait:
for t in self._threads:
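
A sketch of the new cancel_futures flag (Python 3.9+ assumed): queued work items are drained and cancelled, while an already-running task is left to finish. The short sleep is only there to make the race deterministic in this illustration:

import time
from concurrent.futures import ThreadPoolExecutor

ex = ThreadPoolExecutor(max_workers=1)
running = ex.submit(time.sleep, 0.5)     # occupies the only worker thread
time.sleep(0.1)                          # let the first task actually start
queued = [ex.submit(time.sleep, 0.5) for _ in range(3)]
ex.shutdown(wait=True, cancel_futures=True)
print(running.cancelled(), [f.cancelled() for f in queued])
# expected: False [True, True, True]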


@ -56,7 +56,7 @@ ConfigParser -- responsible for parsing a list of
When `interpolation` is given, it should be an Interpolation subclass
instance. It will be used as the handler for option value
pre-processing when using getters. RawConfigParser object s don't do
pre-processing when using getters. RawConfigParser objects don't do
any sort of interpolation, whereas ConfigParser uses an instance of
BasicInterpolation. The library also provides a ``zc.buildout``
inspired ExtendedInterpolation implementation.
@ -80,7 +80,7 @@ ConfigParser -- responsible for parsing a list of
Return list of configuration options for the named section.
read(filenames, encoding=None)
Read and parse the list of named configuration files, given by
Read and parse the iterable of named configuration files, given by
name. A single filename is also allowed. Non-existing files
are ignored. Return list of successfully read files.
@ -139,7 +139,7 @@ ConfigParser -- responsible for parsing a list of
"""
from collections.abc import MutableMapping
from collections import OrderedDict as _default_dict, ChainMap as _ChainMap
from collections import ChainMap as _ChainMap
import functools
import io
import itertools
@ -157,6 +157,7 @@ __all__ = ["NoSectionError", "DuplicateOptionError", "DuplicateSectionError",
"LegacyInterpolation", "SectionProxy", "ConverterMapping",
"DEFAULTSECT", "MAX_INTERPOLATION_DEPTH"]
_default_dict = dict
DEFAULTSECT = "DEFAULT"
MAX_INTERPOLATION_DEPTH = 10
@ -676,13 +677,13 @@ class RawConfigParser(MutableMapping):
return list(opts.keys())
def read(self, filenames, encoding=None):
"""Read and parse a filename or a list of filenames.
"""Read and parse a filename or an iterable of filenames.
Files that cannot be opened are silently ignored; this is
designed so that you can specify a list of potential
designed so that you can specify an iterable of potential
configuration file locations (e.g. current directory, user's
home directory, systemwide directory), and all existing
configuration files in the list will be read. A single
configuration files in the iterable will be read. A single
filename may also be given.
Return list of successfully read files.
@ -846,6 +847,7 @@ class RawConfigParser(MutableMapping):
except KeyError:
if section != self.default_section:
raise NoSectionError(section)
orig_keys = list(d.keys())
# Update with the entry specific variables
if vars:
for key, value in vars.items():
@ -854,7 +856,7 @@ class RawConfigParser(MutableMapping):
section, option, d[option], d)
if raw:
value_getter = lambda option: d[option]
return [(option, value_getter(option)) for option in d.keys()]
return [(option, value_getter(option)) for option in orig_keys]
def popitem(self):
"""Remove a section from the parser and return it as
@ -905,6 +907,9 @@ class RawConfigParser(MutableMapping):
If `space_around_delimiters' is True (the default), delimiters
between keys and values are surrounded by spaces.
Please note that comments in the original configuration file are not
preserved when writing the configuration back.
"""
if space_around_delimiters:
d = " {} ".format(self._delimiters[0])
@ -961,7 +966,8 @@ class RawConfigParser(MutableMapping):
def __setitem__(self, key, value):
# To conform with the mapping protocol, overwrites existing values in
# the section.
if key in self and self[key] is value:
return
# XXX this is not atomic if read_dict fails at any point. Then again,
# no update method in configparser is atomic in this implementation.
if key == self.default_section:
@ -1002,7 +1008,7 @@ class RawConfigParser(MutableMapping):
Configuration files may include comments, prefixed by specific
characters (`#' and `;' by default). Comments may appear on their own
in an otherwise empty line or may be entered in lines holding values or
section names.
section names. Please note that comments get stripped off when reading
configuration files.
"""
elements_added = set()
cursect = None # None, or a dictionary
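
Two of the behaviors documented above, in a minimal sketch (Python 3.6.1+ already accepts any iterable of paths in read()): non-existing files are silently skipped, and comments do not survive a read/write round trip:

import configparser, io, pathlib

cp = configparser.ConfigParser()
cp.read_string("[db]\n# full-line comments are dropped on read\nhost = localhost\n")
loaded = cp.read(pathlib.Path(p) for p in ('missing.ini', 'also-missing.ini'))
print(loaded)            # [] -- non-existing files are ignored

out = io.StringIO()
cp.write(out)
print(out.getvalue())    # "[db]\nhost = localhost\n\n" -- the comment is gone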


@ -4,6 +4,7 @@ import sys
import _collections_abc
from collections import deque
from functools import wraps
from types import MethodType, GenericAlias
__all__ = ["asynccontextmanager", "contextmanager", "closing", "nullcontext",
"AbstractContextManager", "AbstractAsyncContextManager",
@ -15,6 +16,8 @@ class AbstractContextManager(abc.ABC):
"""An abstract base class for context managers."""
__class_getitem__ = classmethod(GenericAlias)
def __enter__(self):
"""Return `self` upon entering the runtime context."""
return self
@ -35,6 +38,8 @@ class AbstractAsyncContextManager(abc.ABC):
"""An abstract base class for asynchronous context managers."""
__class_getitem__ = classmethod(GenericAlias)
async def __aenter__(self):
"""Return `self` upon entering the runtime context."""
return self
@ -92,18 +97,20 @@ class _GeneratorContextManagerBase:
# for the class instead.
# See http://bugs.python.org/issue19404 for more details.
class _GeneratorContextManager(_GeneratorContextManagerBase,
AbstractContextManager,
ContextDecorator):
"""Helper for @contextmanager decorator."""
def _recreate_cm(self):
# _GCM instances are one-shot context managers, so the
# _GCMB instances are one-shot context managers, so the
# CM must be recreated each time a decorated function is
# called
return self.__class__(self.func, self.args, self.kwds)
class _GeneratorContextManager(
_GeneratorContextManagerBase,
AbstractContextManager,
ContextDecorator,
):
"""Helper for @contextmanager decorator."""
def __enter__(self):
# do not keep args and kwds alive unnecessarily
# they are only needed for recreation, which is not possible anymore
@ -113,8 +120,8 @@ class _GeneratorContextManager(_GeneratorContextManagerBase,
except StopIteration:
raise RuntimeError("generator didn't yield") from None
def __exit__(self, type, value, traceback):
if type is None:
def __exit__(self, typ, value, traceback):
if typ is None:
try:
next(self.gen)
except StopIteration:
@ -125,9 +132,9 @@ class _GeneratorContextManager(_GeneratorContextManagerBase,
if value is None:
# Need to force instantiation so we can reliably
# tell if we get the same exception back
value = type()
value = typ()
try:
self.gen.throw(type, value, traceback)
self.gen.throw(typ, value, traceback)
except StopIteration as exc:
# Suppress StopIteration *unless* it's the same exception that
# was passed to throw(). This prevents a StopIteration
@ -137,35 +144,39 @@ class _GeneratorContextManager(_GeneratorContextManagerBase,
# Don't re-raise the passed in exception. (issue27122)
if exc is value:
return False
# Likewise, avoid suppressing if a StopIteration exception
# Avoid suppressing if a StopIteration exception
# was passed to throw() and later wrapped into a RuntimeError
# (see PEP 479).
if type is StopIteration and exc.__cause__ is value:
# (see PEP 479 for sync generators; async generators also
# have this behavior). But do this only if the exception wrapped
# by the RuntimeError is actually Stop(Async)Iteration (see
# issue29692).
if (
isinstance(value, StopIteration)
and exc.__cause__ is value
):
return False
raise
except:
except BaseException as exc:
# only re-raise if it's *not* the exception that was
# passed to throw(), because __exit__() must not raise
# an exception unless __exit__() itself failed. But throw()
# has to raise the exception to signal propagation, so this
# fixes the impedance mismatch between the throw() protocol
# and the __exit__() protocol.
#
# This cannot use 'except BaseException as exc' (as in the
# async implementation) to maintain compatibility with
# Python 2, where old-style class exceptions are not caught
# by 'except BaseException'.
if sys.exc_info()[1] is value:
return False
raise
if exc is not value:
raise
return False
raise RuntimeError("generator didn't stop after throw()")
class _AsyncGeneratorContextManager(_GeneratorContextManagerBase,
AbstractAsyncContextManager):
"""Helper for @asynccontextmanager."""
"""Helper for @asynccontextmanager decorator."""
async def __aenter__(self):
# do not keep args and kwds alive unnecessarily
# they are only needed for recreation, which is not possible anymore
del self.args, self.kwds, self.func
try:
return await self.gen.__anext__()
except StopAsyncIteration:
@ -176,35 +187,48 @@ class _AsyncGeneratorContextManager(_GeneratorContextManagerBase,
try:
await self.gen.__anext__()
except StopAsyncIteration:
return
return False
else:
raise RuntimeError("generator didn't stop")
else:
if value is None:
# Need to force instantiation so we can reliably
# tell if we get the same exception back
value = typ()
# See _GeneratorContextManager.__exit__ for comments on subtleties
# in this implementation
try:
await self.gen.athrow(typ, value, traceback)
raise RuntimeError("generator didn't stop after throw()")
except StopAsyncIteration as exc:
# Suppress StopIteration *unless* it's the same exception that
# was passed to throw(). This prevents a StopIteration
# raised inside the "with" statement from being suppressed.
return exc is not value
except RuntimeError as exc:
# Don't re-raise the passed in exception. (issue27122)
if exc is value:
return False
# Avoid suppressing if a StopIteration exception
# was passed to throw() and later wrapped into a RuntimeError
# Avoid suppressing if a Stop(Async)Iteration exception
# was passed to athrow() and later wrapped into a RuntimeError
# (see PEP 479 for sync generators; async generators also
# have this behavior). But do this only if the exception wrapped
# by the RuntimeError is actually Stop(Async)Iteration (see
# issue29692).
if isinstance(value, (StopIteration, StopAsyncIteration)):
if exc.__cause__ is value:
return False
if (
isinstance(value, (StopIteration, StopAsyncIteration))
and exc.__cause__ is value
):
return False
raise
except BaseException as exc:
# only re-raise if it's *not* the exception that was
# passed to throw(), because __exit__() must not raise
# an exception unless __exit__() itself failed. But throw()
# has to raise the exception to signal propagation, so this
# fixes the impedance mismatch between the throw() protocol
# and the __exit__() protocol.
if exc is not value:
raise
return False
raise RuntimeError("generator didn't stop after athrow()")
def contextmanager(func):
@ -373,12 +397,10 @@ class _BaseExitStack:
@staticmethod
def _create_exit_wrapper(cm, cm_exit):
def _exit_wrapper(exc_type, exc, tb):
return cm_exit(cm, exc_type, exc, tb)
return _exit_wrapper
return MethodType(cm_exit, cm)
@staticmethod
def _create_cb_wrapper(callback, *args, **kwds):
def _create_cb_wrapper(callback, /, *args, **kwds):
def _exit_wrapper(exc_type, exc, tb):
callback(*args, **kwds)
return _exit_wrapper
@ -427,7 +449,7 @@ class _BaseExitStack:
self._push_cm_exit(cm, _exit)
return result
def callback(self, callback, *args, **kwds):
def callback(self, callback, /, *args, **kwds):
"""Registers an arbitrary callback and arguments.
Cannot suppress exceptions.
@ -443,7 +465,6 @@ class _BaseExitStack:
def _push_cm_exit(self, cm, cm_exit):
"""Helper to correctly register callbacks to __exit__ methods."""
_exit_wrapper = self._create_exit_wrapper(cm, cm_exit)
_exit_wrapper.__self__ = cm
self._push_exit_callback(_exit_wrapper, True)
def _push_exit_callback(self, callback, is_sync=True):
@ -475,10 +496,10 @@ class ExitStack(_BaseExitStack, AbstractContextManager):
# Context may not be correct, so find the end of the chain
while 1:
exc_context = new_exc.__context__
if exc_context is old_exc:
if exc_context is None or exc_context is old_exc:
# Context is already set correctly (see issue 20317)
return
if exc_context is None or exc_context is frame_exc:
if exc_context is frame_exc:
break
new_exc = exc_context
# Change the end of the chain to point to the exception
@ -535,12 +556,10 @@ class AsyncExitStack(_BaseExitStack, AbstractAsyncContextManager):
@staticmethod
def _create_async_exit_wrapper(cm, cm_exit):
async def _exit_wrapper(exc_type, exc, tb):
return await cm_exit(cm, exc_type, exc, tb)
return _exit_wrapper
return MethodType(cm_exit, cm)
@staticmethod
def _create_async_cb_wrapper(callback, *args, **kwds):
def _create_async_cb_wrapper(callback, /, *args, **kwds):
async def _exit_wrapper(exc_type, exc, tb):
await callback(*args, **kwds)
return _exit_wrapper
@ -575,7 +594,7 @@ class AsyncExitStack(_BaseExitStack, AbstractAsyncContextManager):
self._push_async_cm_exit(exit, exit_method)
return exit # Allow use as a decorator
def push_async_callback(self, callback, *args, **kwds):
def push_async_callback(self, callback, /, *args, **kwds):
"""Registers an arbitrary coroutine function and arguments.
Cannot suppress exceptions.
@ -596,7 +615,6 @@ class AsyncExitStack(_BaseExitStack, AbstractAsyncContextManager):
"""Helper to correctly register coroutine function to __aexit__
method."""
_exit_wrapper = self._create_async_exit_wrapper(cm, cm_exit)
_exit_wrapper.__self__ = cm
self._push_exit_callback(_exit_wrapper, False)
async def __aenter__(self):
@ -612,10 +630,10 @@ class AsyncExitStack(_BaseExitStack, AbstractAsyncContextManager):
# Context may not be correct, so find the end of the chain
while 1:
exc_context = new_exc.__context__
if exc_context is old_exc:
if exc_context is None or exc_context is old_exc:
# Context is already set correctly (see issue 20317)
return
if exc_context is None or exc_context is frame_exc:
if exc_context is frame_exc:
break
new_exc = exc_context
# Change the end of the chain to point to the exception
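
A sketch of what the positional-only '/' added to callback() and _create_cb_wrapper() above buys (Python 3.9+ assumed): keyword arguments whose names collide with the method's own parameters now pass through to the callee:

from contextlib import ExitStack

def report(*args, **kwds):
    print('cleanup:', args, kwds)

with ExitStack() as stack:
    stack.callback(report, 1, callback='no clash')  # 'callback' kw reaches report
# prints on exit: cleanup: (1,) {'callback': 'no clash'}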


@ -39,8 +39,8 @@ Python's deep copy operation avoids these problems by:
set of components copied
This version does not copy types like module, class, function, method,
nor stack trace, stack frame, nor file, socket, window, nor array, nor
any similar types.
nor stack trace, stack frame, nor file, socket, window, nor any
similar types.
Classes can use the same interfaces to control copying that they use
to control pickling: they can define methods called __getinitargs__(),
@ -75,24 +75,20 @@ def copy(x):
if copier:
return copier(x)
try:
issc = issubclass(cls, type)
except TypeError: # cls is not a class
issc = False
if issc:
if issubclass(cls, type):
# treat it as a regular class:
return _copy_immutable(x)
copier = getattr(cls, "__copy__", None)
if copier:
if copier is not None:
return copier(x)
reductor = dispatch_table.get(cls)
if reductor:
if reductor is not None:
rv = reductor(x)
else:
reductor = getattr(x, "__reduce_ex__", None)
if reductor:
if reductor is not None:
rv = reductor(4)
else:
reductor = getattr(x, "__reduce__", None)
@ -111,7 +107,7 @@ _copy_dispatch = d = {}
def _copy_immutable(x):
return x
for t in (type(None), int, float, bool, complex, str, tuple,
bytes, frozenset, type, range, slice,
bytes, frozenset, type, range, slice, property,
types.BuiltinFunctionType, type(Ellipsis), type(NotImplemented),
types.FunctionType, weakref.ref):
d[t] = _copy_immutable
@ -146,18 +142,14 @@ def deepcopy(x, memo=None, _nil=[]):
cls = type(x)
copier = _deepcopy_dispatch.get(cls)
if copier:
if copier is not None:
y = copier(x, memo)
else:
try:
issc = issubclass(cls, type)
except TypeError: # cls is not a class (old Boost; see SF #502085)
issc = 0
if issc:
if issubclass(cls, type):
y = _deepcopy_atomic(x, memo)
else:
copier = getattr(x, "__deepcopy__", None)
if copier:
if copier is not None:
y = copier(memo)
else:
reductor = dispatch_table.get(cls)
@ -165,7 +157,7 @@ def deepcopy(x, memo=None, _nil=[]):
rv = reductor(x)
else:
reductor = getattr(x, "__reduce_ex__", None)
if reductor:
if reductor is not None:
rv = reductor(4)
else:
reductor = getattr(x, "__reduce__", None)
@ -198,14 +190,12 @@ d[bool] = _deepcopy_atomic
d[complex] = _deepcopy_atomic
d[bytes] = _deepcopy_atomic
d[str] = _deepcopy_atomic
try:
d[types.CodeType] = _deepcopy_atomic
except AttributeError:
pass
d[types.CodeType] = _deepcopy_atomic
d[type] = _deepcopy_atomic
d[types.BuiltinFunctionType] = _deepcopy_atomic
d[types.FunctionType] = _deepcopy_atomic
d[weakref.ref] = _deepcopy_atomic
d[property] = _deepcopy_atomic
def _deepcopy_list(x, memo, deepcopy=deepcopy):
y = []
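
A minimal sketch of the dispatch-table behavior above (Python 3.8+): types registered as immutable, now including property, are returned as-is instead of being reconstructed:

import copy

p = property(lambda self: 42)
t = (1, 2)
assert copy.copy(p) is p        # property is in _copy_dispatch
assert copy.deepcopy(p) is p    # d[property] = _deepcopy_atomic
assert copy.copy(t) is t        # tuples were already treated as immutable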


@ -48,29 +48,36 @@ def _reconstructor(cls, base, state):
return obj
_HEAPTYPE = 1<<9
_new_type = type(int.__new__)
# Python code for object.__reduce_ex__ for protocols 0 and 1
def _reduce_ex(self, proto):
assert proto < 2
for base in self.__class__.__mro__:
cls = self.__class__
for base in cls.__mro__:
if hasattr(base, '__flags__') and not base.__flags__ & _HEAPTYPE:
break
new = base.__new__
if isinstance(new, _new_type) and new.__self__ is base:
break
else:
base = object # not really reachable
if base is object:
state = None
else:
if base is self.__class__:
raise TypeError("can't pickle %s objects" % base.__name__)
if base is cls:
raise TypeError(f"cannot pickle {cls.__name__!r} object")
state = base(self)
args = (self.__class__, base, state)
args = (cls, base, state)
try:
getstate = self.__getstate__
except AttributeError:
if getattr(self, "__slots__", None):
raise TypeError("a class that defines __slots__ without "
"defining __getstate__ cannot be pickled") from None
raise TypeError(f"cannot pickle {cls.__name__!r} object: "
f"a class that defines __slots__ without "
f"defining __getstate__ cannot be pickled "
f"with protocol {proto}") from None
try:
dict = self.__dict__
except AttributeError:
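
A sketch of the improved error path above, under the pre-3.11 semantics this code targets (object did not yet provide a default __getstate__): protocols 0 and 1 refuse instances of __slots__ classes that define no __getstate__:

import pickle

class Point:
    __slots__ = ('x', 'y')

try:
    pickle.dumps(Point(), protocol=1)
except TypeError as exc:
    print(exc)  # cannot pickle 'Point' object: a class that defines __slots__
                # without defining __getstate__ cannot be pickled with protocol 1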


@ -1,6 +1,16 @@
"""Wrapper to the POSIX crypt library call and associated functionality."""
import _crypt
import sys as _sys
try:
import _crypt
except ModuleNotFoundError:
if _sys.platform == 'win32':
raise ImportError("The crypt module is not supported on Windows")
else:
raise ImportError("The required _crypt module was not built as part of CPython")
import errno
import string as _string
from random import SystemRandom as _SystemRandom
from collections import namedtuple as _namedtuple
@ -79,7 +89,14 @@ def _add_method(name, *args, rounds=None):
method = _Method(name, *args)
globals()['METHOD_' + name] = method
salt = mksalt(method, rounds=rounds)
result = crypt('', salt)
result = None
try:
result = crypt('', salt)
except OSError as e:
# Not all libc libraries support all encryption methods.
if e.errno == errno.EINVAL:
return False
raise
if result and len(result) == method.total_size:
methods.append(method)
return True
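
With the guarded import above, a portable caller can probe for availability first. A minimal sketch (POSIX only, Python <= 3.12, where the crypt module still exists):

try:
    import crypt
except ImportError as exc:      # e.g. on Windows, per the message above
    print('crypt unavailable:', exc)
else:
    method = crypt.methods[0]   # strongest method the local libc supports
    salt = crypt.mksalt(method)
    print(crypt.crypt('s3cret', salt))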

Some files were not shown because too many files have changed in this diff.