DataFrame.to_pickle()

DataFrame.to_pickle(path) [source]

Pickle (serialize) the object to the given file path.

Parameters:
path : string
    File path
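A minimal round trip might look like the following sketch; the file name and example frame are made up for illustration.

```python
import os
import tempfile

import pandas as pd

# Hypothetical example frame; any picklable DataFrame works the same way.
df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "frame.pkl")  # made-up file name
    df.to_pickle(path)                     # serialize to disk
    restored = pd.read_pickle(path)        # deserialize

# restored is identical to the original frame
```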

DataFrame.to_msgpack()

DataFrame.to_msgpack(path_or_buf=None, encoding='utf-8', **kwargs) [source]

msgpack (serialize) object to the given file path.

THIS IS AN EXPERIMENTAL LIBRARY and the storage format may not be stable until a future release.

Parameters:
path_or_buf : string File path, buffer-like, or None
    If None, return the generated string.
append : boolean, default False
    Whether to append to an existing msgpack.
compress : string, default None
    Type of compressor ('zlib' or 'blosc'); None means no compression.

DataFrame.to_latex()

DataFrame.to_latex(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, bold_rows=True, column_format=None, longtable=None, escape=None, encoding=None, decimal='.') [source]

Render a DataFrame to a tabular environment table. You can splice this into a LaTeX document. Requires \usepackage{booktabs}.

to_latex-specific options:
bold_rows : boolean, default True
    Make the row labels bold in the output.

DataFrame.to_json()

DataFrame.to_json(path_or_buf=None, orient=None, date_format='epoch', double_precision=10, force_ascii=True, date_unit='ms', default_handler=None, lines=False) [source]

Convert the object to a JSON string.

Note: NaNs and None will be converted to null, and datetime objects will be converted to UNIX timestamps.

Parameters:
path_or_buf : the path or buffer to write the result string
    If this is None, return a StringIO of the converted string.
orient : string
    Series: default is 'index'; allowed values are {'split', 'records', 'index'}.
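The null conversion noted above can be seen in a small sketch (the example frame is made up):

```python
import json

import pandas as pd

df = pd.DataFrame({"a": [1, None], "b": ["x", "y"]})

s = df.to_json()     # path_or_buf=None, so the JSON text is returned
parsed = json.loads(s)

# The None in column "a" becomes NaN in the frame and null in the JSON,
# which json.loads maps back to Python None.
```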

DataFrame.to_hdf()

DataFrame.to_hdf(path_or_buf, key, **kwargs) [source]

Write the contained data to an HDF5 file using HDFStore.

Parameters:
path_or_buf : the path (string) or HDFStore object
key : string
    Identifier for the group in the store.
mode : optional, {'a', 'w', 'r+'}, default 'a'
    'w': Write; a new file is created (an existing file with the same name would be deleted).
    'a': Append; an existing file is opened for reading and writing, and if the file does not exist it is created.
    'r+': Similar to 'a', but the file must already exist.

DataFrame.to_html()

DataFrame.to_html(buf=None, columns=None, col_space=None, header=True, index=True, na_rep='NaN', formatters=None, float_format=None, sparsify=None, index_names=True, justify=None, bold_rows=True, classes=None, escape=True, max_rows=None, max_cols=None, show_dimensions=False, notebook=False, decimal='.', border=None) [source]

Render a DataFrame as an HTML table.

to_html-specific options:
bold_rows : boolean, default True
    Make the row labels bold in the output.
classes : str or list or tuple, default None
    CSS class(es) to apply to the resulting html table.
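As with to_json, passing buf=None returns the markup as a string; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

# buf=None, so the rendered <table> markup is returned as a string;
# by default the table carries the "dataframe" CSS class.
html = df.to_html()
```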

DataFrame.to_excel()

DataFrame.to_excel(excel_writer, sheet_name='Sheet1', na_rep='', float_format=None, columns=None, header=True, index=True, index_label=None, startrow=0, startcol=0, engine=None, merge_cells=True, encoding=None, inf_rep='inf', verbose=True) [source]

Write DataFrame to an Excel sheet.

Parameters:
excel_writer : string or ExcelWriter object
    File path or existing ExcelWriter.
sheet_name : string, default 'Sheet1'
    Name of sheet which will contain the DataFrame.
na_rep : string, default ''
    Missing data representation.
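A round-trip sketch, assuming an Excel engine such as openpyxl is installed; the sheet name and frame are arbitrary:

```python
import io

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

# Write into an in-memory buffer instead of a real file path.
buf = io.BytesIO()
df.to_excel(buf, sheet_name="Demo", index=False)

buf.seek(0)
back = pd.read_excel(buf, sheet_name="Demo")
```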

DataFrame.to_gbq()

DataFrame.to_gbq(destination_table, project_id, chunksize=10000, verbose=True, reauth=False, if_exists='fail', private_key=None) [source]

Write a DataFrame to a Google BigQuery table.

THIS IS AN EXPERIMENTAL LIBRARY.

Parameters:
dataframe : DataFrame
    DataFrame to be written.
destination_table : string
    Name of table to be written, in the form 'dataset.tablename'.
project_id : str
    Google BigQuery Account project ID.
chunksize : int, default 10000
    Number of rows to be inserted in each chunk from the dataframe.

DataFrame.to_dict()

DataFrame.to_dict(orient='dict') [source]

Convert DataFrame to dictionary.

Parameters:
orient : str {'dict', 'list', 'series', 'split', 'records', 'index'}
    Determines the type of the values of the dictionary.
    dict (default) : dict like {column -> {index -> value}}
    list : dict like {column -> [values]}
    series : dict like {column -> Series(values)}
    split : dict like {index -> [index], columns -> [columns], data -> [values]}
    records : list like [{column -> value}, ..., {column -> value}]
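The orient values above can be compared side by side in a short sketch (example frame made up):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]}, index=["x", "y"])

nested = df.to_dict()                   # orient='dict': column -> {index -> value}
records = df.to_dict(orient="records")  # one dict per row
lists = df.to_dict(orient="list")       # column -> list of values
```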

DataFrame.to_dense()

DataFrame.to_dense() [source]

Return a dense representation of the NDFrame (as opposed to sparse).