uu.encode()

uu.encode(in_file, out_file, name=None, mode=None) Uuencode file in_file into file out_file. The uuencoded file will have a header specifying name and mode as the defaults for the result of decoding the file. If name or mode are omitted, they are taken from in_file, falling back to '-' and 0o666 respectively.
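
A minimal sketch of encoding a file, on Python versions that still ship the uu module; the file names 'example.bin' and 'example.uu' are illustrative assumptions, and both arguments may also be open file objects or '-' for standard input/output:

    import uu

    # Encode example.bin into example.uu; the uuencode header records the
    # original file name and permission bits as defaults for decoding.
    uu.encode('example.bin', 'example.uu')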

uu.decode()

uu.decode(in_file, out_file=None, mode=None, quiet=False) This call decodes uuencoded file in_file, placing the result in file out_file. If out_file is a pathname, mode is used to set the permission bits if the file must be created. Defaults for out_file and mode are taken from the uuencode header. However, if the file specified in the header already exists, a uu.Error is raised. decode() may print a warning to standard error if the input was produced by an incorrect uuencoder and Python could recover from that error; setting quiet to a true value suppresses this warning.
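
A companion sketch for decoding, again with illustrative file names; passing out_file explicitly means the name from the uuencode header is not used:

    import uu

    # Decode example.uu into decoded.bin; quiet=True suppresses the
    # warning that decode() may otherwise print to standard error.
    uu.decode('example.uu', 'decoded.bin', quiet=True)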

UserWarning

exception UserWarning Base class for warnings generated by user code.
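A minimal sketch of how user code typically produces this warning; warnings.warn() uses UserWarning as its default category, and the category can also be passed explicitly:

    import warnings

    # Issues a UserWarning, the default category for warnings.warn().
    warnings.warn("this configuration value is unusual")

    # The same warning with the category given explicitly.
    warnings.warn("deprecated spelling", UserWarning)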

urllib.robotparser.RobotFileParser.set_url()

set_url(url) Sets the URL referring to a robots.txt file.

urllib.robotparser.RobotFileParser.read()

read() Reads the robots.txt URL and feeds it to the parser.
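A minimal sketch of the usual initialization flow with set_url() and read(); the example.com URL is an illustrative assumption:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetches the robots.txt URL and feeds it to the parser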

urllib.robotparser.RobotFileParser.parse()

parse(lines) Parses the lines argument.
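A minimal sketch of feeding already-fetched rules to parse(); the rule text here is an illustrative assumption, and parse() expects an iterable of lines:

    import urllib.robotparser

    rules = "User-agent: *\nDisallow: /private/\n"
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(rules.splitlines())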

urllib.robotparser.RobotFileParser.mtime()

mtime() Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.

urllib.robotparser.RobotFileParser.modified()

modified() Sets the time the robots.txt file was last fetched to the current time.
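A minimal sketch of using mtime() and modified() to decide when a long-running spider should re-fetch robots.txt; the example.com URL and the one-hour threshold are illustrative assumptions:

    import time
    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
    rp.read()
    rp.modified()  # record the fetch time

    # Later, re-fetch if the cached copy is more than an hour old.
    if time.time() - rp.mtime() > 3600:
        rp.read()
        rp.modified()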

urllib.robotparser.RobotFileParser.can_fetch()

can_fetch(useragent, url) Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.
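A minimal sketch of an access check with can_fetch(); the user agent string and URLs are illustrative assumptions:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    if rp.can_fetch("MyCrawler/1.0", "https://example.com/some/page.html"):
        print("allowed to fetch")
    else:
        print("disallowed by robots.txt")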

urllib.robotparser.RobotFileParser

class urllib.robotparser.RobotFileParser(url='') This class provides methods to read, parse and answer questions about the robots.txt file at url. set_url(url) Sets the URL referring to a robots.txt file. read() Reads the robots.txt URL and feeds it to the parser. parse(lines) Parses the lines argument. can_fetch(useragent, url) Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file. mtime() Returns the time the robots.txt file was last fetched. modified() Sets the time the robots.txt file was last fetched to the current time.