pandas.read_msgpack()

pandas.read_msgpack(path_or_buf, encoding='utf-8', iterator=False, **kwargs)

Load msgpack pandas object from the specified file path.

THIS IS AN EXPERIMENTAL LIBRARY and the storage format may not be stable until a future release.

Parameters:
    path_or_buf : string File path, BytesIO like or string
    encoding : Encoding for decoding msgpack str type
    iterator : boolean, if True, return an iterator to the unpacker (default is False)

Returns:
    obj : type of object stored in file
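
A minimal round-trip sketch (the file name is illustrative; note that to_msgpack/read_msgpack were removed in later pandas releases, so this applies only to the experimental API documented above):

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

    # Write the frame in the experimental msgpack format, then load it back.
    df.to_msgpack("frame.msg")              # illustrative path
    restored = pd.read_msgpack("frame.msg")
    assert restored.equals(df)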

pandas.read_json()

pandas.read_json(path_or_buf=None, orient=None, typ='frame', dtype=True, convert_axes=True, convert_dates=True, keep_default_dates=True, numpy=False, precise_float=False, date_unit=None, encoding=None, lines=False)

Convert a JSON string to a pandas object.

Parameters:
    path_or_buf : a valid JSON string or file-like, default: None
        The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. For instance, a local file could be file://
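
A small sketch of two accepted inputs, a plain JSON string and newline-delimited JSON via lines=True (data is illustrative; very recent pandas versions prefer wrapping literal strings in io.StringIO):

    import pandas as pd

    # A JSON string in "records" orientation; a path or URL works the same way.
    records = '[{"name": "alice", "score": 1.5}, {"name": "bob", "score": 2.0}]'
    df = pd.read_json(records, orient="records")

    # lines=True reads newline-delimited JSON, one record per line.
    ndjson = '{"name": "alice", "score": 1.5}\n{"name": "bob", "score": 2.0}'
    df_lines = pd.read_json(ndjson, orient="records", lines=True)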

pandas.read_pickle()

pandas.read_pickle(path)

Load pickled pandas object (or any other pickled object) from the specified file path.

Warning: Loading pickled data received from untrusted sources can be unsafe. See: http://docs.python.org/2.7/library/pickle.html

Parameters:
    path : string
        File path

Returns:
    unpickled : type of object stored in file
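
A minimal round-trip sketch (file name illustrative); keep the warning above in mind and only unpickle data you trust:

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2, 3]})

    # Round-trip through pickle; the object is restored exactly as stored.
    df.to_pickle("frame.pkl")               # illustrative path
    restored = pd.read_pickle("frame.pkl")
    assert restored.equals(df)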

pandas.read_html()

pandas.read_html(io, match='.+', flavor=None, header=None, index_col=None, skiprows=None, attrs=None, parse_dates=False, tupleize_cols=False, thousands=',', encoding=None, decimal='.', converters=None, na_values=None, keep_default_na=True)

Read HTML tables into a list of DataFrame objects.

Parameters:
    io : str or file-like
        A URL, a file-like object, or a raw string containing HTML. Note that lxml only accepts the http, ftp and file url protocols. If you have a URL that starts with 'https' you might try removing the 's'.
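
A sketch with an inline HTML string (an HTML parser such as lxml, or beautifulsoup4 plus html5lib, must be installed); note that the return value is always a list, one DataFrame per matched table:

    import pandas as pd

    html = """
    <table>
      <tr><th>symbol</th><th>price</th></tr>
      <tr><td>AAA</td><td>10.5</td></tr>
      <tr><td>BBB</td><td>11.2</td></tr>
    </table>
    """

    # Returns a list of DataFrames, one per <table> matching the `match` regex.
    tables = pd.read_html(html)
    first = tables[0]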

pandas.read_hdf()

pandas.read_hdf(path_or_buf, key=None, **kwargs)

Read from the store, closing it if we opened it. Retrieve the pandas object stored in the file, optionally based on a where criteria.

Parameters:
    path_or_buf : path (string), buffer, or path object (pathlib.Path or py._path.local.LocalPath) to read from
        New in version 0.19.0: support for pathlib, py.path.
    key : group identifier in the store. Can be omitted if the HDF file contains a single pandas object.
    where : list of Term (or convertible) objects, optional
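
A sketch of writing and reading an HDF5 store (requires the PyTables package; path and key names are illustrative). Writing with format="table" and data_columns=True is what makes the where filter below possible:

    import pandas as pd

    df = pd.DataFrame({"a": range(5), "b": list("vwxyz")})

    # Store under an explicit key; table format allows querying at read time.
    df.to_hdf("store.h5", "frame", format="table", data_columns=True)

    same = pd.read_hdf("store.h5", "frame")
    subset = pd.read_hdf("store.h5", "frame", where="a > 2")   # row filter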

pandas.read_excel()

pandas.read_excel(io, sheetname=0, header=0, skiprows=None, skip_footer=0, index_col=None, names=None, parse_cols=None, parse_dates=False, date_parser=None, na_values=None, thousands=None, convert_float=True, has_index_names=None, converters=None, true_values=None, false_values=None, engine=None, squeeze=False, **kwds)

Read an Excel table into a pandas DataFrame.

Parameters:
    io : string, path object (pathlib.Path or py._path.local.LocalPath), file-like object, pandas ExcelFile, or
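
A minimal write-then-read sketch (file and sheet names are illustrative; an Excel engine such as openpyxl or xlrd must be installed). The sheet is passed positionally because the keyword shown above was later renamed from sheetname to sheet_name:

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

    df.to_excel("book.xlsx", sheet_name="data", index=False)   # illustrative path
    back = pd.read_excel("book.xlsx", "data")                  # second arg = sheet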

pandas.read_fwf()

pandas.read_fwf(filepath_or_buffer, colspecs='infer', widths=None, **kwds)

Read a table of fixed-width formatted lines into a DataFrame. Also supports optionally iterating or breaking the file into chunks. Additional help can be found in the online docs for IO Tools.

Parameters:
    filepath_or_buffer : str, pathlib.Path, py._path.local.LocalPath or any object with a read() method (such as a file handle or StringIO)
        The string could be a URL. Valid URL schemes include http, ftp, s3, and file.
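
A sketch using an in-memory buffer with fixed-width columns (data is illustrative); colspecs='infer' guesses the column boundaries from the data, while widths pins them explicitly:

    import pandas as pd
    from io import StringIO

    data = ("id    name      value\n"
            "1     alpha     10.0\n"
            "2     beta      20.5\n")

    # Let read_fwf infer the column boundaries...
    df = pd.read_fwf(StringIO(data))

    # ...or state the field widths explicitly (6, 10 and up to 6 characters).
    df2 = pd.read_fwf(StringIO(data), widths=[6, 10, 6])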

pandas.read_csv()

pandas.read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, i

Read a CSV (comma-separated) file into a DataFrame. Also supports optionally iterating or breaking the file into chunks. Additional help can be found in the online docs for IO Tools.
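
A small sketch with an in-memory buffer showing a few of the most common keywords (data is illustrative):

    import pandas as pd
    from io import StringIO

    csv = ("date,symbol,price\n"
           "2016-01-04,AAA,10.5\n"
           "2016-01-05,AAA,10.9\n")

    # Parse the date column, use it as the index, and force the price dtype.
    df = pd.read_csv(StringIO(csv), parse_dates=["date"],
                     index_col="date", dtype={"price": float})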

pandas.read_clipboard()

pandas.read_clipboard(**kwargs)

Read text from the clipboard and pass it to read_table. See read_table for the full argument list. If unspecified, sep defaults to '\s+'.

Returns:
    parsed : DataFrame
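
A sketch; this needs a working system clipboard (on Linux, a helper such as xclip or xsel) and something table-like copied to it, for example a cell range from a spreadsheet:

    import pandas as pd

    # Whitespace-separated by default (sep='\s+'); any read_table keyword
    # passes straight through.
    df = pd.read_clipboard()
    df_tab = pd.read_clipboard(sep="\t")    # e.g. tab-separated spreadsheet cells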

pandas.qcut()

pandas.qcut(x, q, labels=None, retbins=False, precision=3)

Quantile-based discretization function. Discretize a variable into equal-sized buckets based on rank or on sample quantiles. For example, 1000 values for 10 quantiles would produce a Categorical object indicating quantile membership for each data point.

Parameters:
    x : ndarray or Series
    q : integer or array of quantiles
        Number of quantiles: 10 for deciles, 4 for quartiles, etc. Alternately, an array of quantiles, e.g. [0, .25, .5, .75, 1.] for quartiles.
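
A sketch of both forms of q, an integer bucket count and explicit quantile edges (data is random and purely illustrative):

    import numpy as np
    import pandas as pd

    values = np.random.randn(1000)

    # Four equal-sized buckets (quartiles), returned as a Categorical.
    quartiles = pd.qcut(values, 4)

    # Explicit quantile edges with readable labels; retbins=True also returns
    # the computed bin edges.
    labeled, bins = pd.qcut(values, [0, .25, .5, .75, 1.],
                            labels=["q1", "q2", "q3", "q4"], retbins=True)
    print(pd.Series(labeled).value_counts())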