trace.CoverageResults

class trace.CoverageResults A container for coverage results, created by Trace.results(). Should not be created directly by the user.

update(other) Merge in data from another CoverageResults object.

write_results(show_missing=True, summary=False, coverdir=None) Write coverage results. Set show_missing to show lines that had no hits. Set summary to include in the output the coverage summary per module. coverdir specifies the directory into which the coverage result files will be output.
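A minimal sketch of how a CoverageResults object is usually obtained and written out; the traced function and the coverdir path here are hypothetical:

    import trace

    def demo():
        for i in range(3):
            print(i)

    tracer = trace.Trace(count=1, trace=0)  # count line executions, don't echo lines
    tracer.runfunc(demo)                    # run the (hypothetical) function under the tracer
    results = tracer.results()              # a trace.CoverageResults instance
    results.write_results(show_missing=True, summary=True, coverdir="/tmp/cov")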

tokenize.untokenize()

tokenize.untokenize(iterable) Converts tokens back into Python source code. The iterable must return sequences with at least two elements, the token type and the token string. Any additional sequence elements are ignored. The reconstructed script is returned as a single string. The result is guaranteed to tokenize back to match the input so that the conversion is lossless and round-trips are assured. The guarantee applies only to the token type and token string, as the spacing between tokens (column positions) may change.
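A small sketch of the round-trip guarantee; the source bytes are invented for illustration:

    import io
    import tokenize

    source = b"x = 1+2\nprint( x )\n"
    tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

    # untokenize() returns bytes here because the ENCODING token produced
    # by tokenize() is included in the input.
    rebuilt = tokenize.untokenize(tokens)

    # Spacing may differ, but token types and strings round-trip exactly.
    rebuilt_tokens = tokenize.tokenize(io.BytesIO(rebuilt).readline)
    assert [(t.type, t.string) for t in rebuilt_tokens] == \
           [(t.type, t.string) for t in tokens]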

tokenize.tokenize()

tokenize.tokenize(readline) The tokenize() generator requires one argument, readline, which must be a callable object which provides the same interface as the io.IOBase.readline() method of file objects. Each call to the function should return one line of input as bytes. The generator produces 5-tuples with these members: the token type; the token string; a 2-tuple (srow, scol) of ints specifying the row and column where the token begins in the source; a 2-tuple (erow, ecol) of ints specifying the row and column where the token ends in the source; and the line on which the token was found. The line passed (the last tuple item) is the physical line.
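A minimal sketch of iterating over the generator; the source bytes are invented for illustration:

    import io
    import token
    import tokenize

    source = b"def add(a, b):\n    return a + b\n"
    for tok in tokenize.tokenize(io.BytesIO(source).readline):
        # tok is a named tuple: type, string, start=(srow, scol), end=(erow, ecol), line
        print(token.tok_name[tok.type], repr(tok.string), tok.start, tok.end)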

tokenize.TokenError

exception tokenize.TokenError Raised when either a docstring or expression that may be split over several lines is not completed anywhere in the file, for example:

    """Beginning of
    docstring

or:

    [1,
     2,
     3
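A sketch of triggering the exception with an unterminated expression (the source bytes are invented for illustration):

    import io
    import tokenize

    bad = b"[1,\n 2,\n 3\n"   # the bracket is never closed
    try:
        list(tokenize.tokenize(io.BytesIO(bad).readline))
    except tokenize.TokenError as exc:
        print("TokenError:", exc)   # e.g. EOF in multi-line statement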

tokenize.open()

tokenize.open(filename) Open a file in read-only mode using the encoding detected by detect_encoding(). New in version 3.2.
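A short sketch; example.py is a hypothetical path:

    import tokenize

    with tokenize.open("example.py") as f:   # text mode, encoding auto-detected
        print(f.encoding)                    # whatever detect_encoding() found
        source = f.read()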

tokenize.detect_encoding()

tokenize.detect_encoding(readline) The detect_encoding() function is used to detect the encoding that should be used to decode a Python source file. It requires one argument, readline, in the same way as the tokenize() generator. It will call readline a maximum of twice, and return the encoding used (as a string) and a list of any lines (not decoded from bytes) it has read in. It detects the encoding from the presence of a UTF-8 BOM or an encoding cookie as specified in PEP 263. If both a BOM and a cookie are present, but disagree, a SyntaxError will be raised. Note that if the BOM is found, 'utf-8-sig' will be returned as the encoding. If no encoding is specified, then the default of 'utf-8' will be returned.
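A sketch using an in-memory buffer with an encoding cookie; the source bytes are invented for illustration:

    import io
    import tokenize

    source = b"# -*- coding: latin-1 -*-\nx = 1\n"
    encoding, lines = tokenize.detect_encoding(io.BytesIO(source).readline)
    print(encoding)   # the cookie's encoding, normalized (here 'iso-8859-1')
    print(lines)      # the raw byte lines read while detecting (at most two)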

token.tok_name

token.tok_name Dictionary mapping the numeric values of the constants defined in this module back to name strings, allowing more human-readable representation of parse trees to be generated.
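For example:

    import token

    print(token.tok_name[token.NAME])   # 'NAME'
    print(token.tok_name[token.OP])     # 'OP'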

token.ISTERMINAL()

token.ISTERMINAL(x) Return true for terminal token values.

token.ISNONTERMINAL()

token.ISNONTERMINAL(x) Return true for non-terminal token values.

token.ISEOF()

token.ISEOF(x) Return true if x is the marker indicating the end of input.
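A combined sketch of the three predicates; NT_OFFSET is the boundary between terminal and non-terminal values:

    import token

    print(token.ISTERMINAL(token.NAME))          # True: values below NT_OFFSET are terminals
    print(token.ISNONTERMINAL(token.NT_OFFSET))  # True: values at or above NT_OFFSET are non-terminals
    print(token.ISEOF(token.ENDMARKER))          # True: ENDMARKER marks the end of input
    print(token.ISEOF(token.NAME))               # False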