DataFrame.info()

DataFrame.info(verbose=None, buf=None, max_cols=None, memory_usage=None, null_counts=None) [source]

Concise summary of a DataFrame.

Parameters:
    verbose : {None, True, False}, optional
        Whether to print the full summary. None follows the display.max_info_columns setting; True or False overrides it.
    buf : writable buffer, defaults to sys.stdout
    max_cols : int, default None
        Determines whether the full summary or a short summary is printed. None follows the display.max_info_columns setting.
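A minimal sketch of the two parameters described above, using a small illustrative frame: verbose=True forces the full per-column summary, and buf captures the output instead of writing to sys.stdout.

```python
import io

import numpy as np
import pandas as pd

# Illustrative frame only; any DataFrame works.
df = pd.DataFrame({"a": [1, 2, 3], "b": [0.5, np.nan, 1.5]})

# verbose=True prints the full summary regardless of
# display.max_info_columns; buf redirects it into a buffer.
buf = io.StringIO()
df.info(verbose=True, buf=buf)
summary = buf.getvalue()
print(summary)
```

The captured text includes the index type, each column's non-null count, and the dtypes.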

pandas.to_timedelta()

pandas.to_timedelta(*args, **kwargs) [source]

Convert argument to timedelta.

Parameters:
    arg : string, timedelta, list, tuple, 1-d array, or Series
    unit : str, default 'ns'
        Denotes the unit of the arg when it is an integer/float number (D, h, m, s, ms, us, ns).
    box : boolean, default True
        If True, returns a Timedelta/TimedeltaIndex of the results; if False, returns an np.timedelta64 or ndarray of values of dtype timedelta64[ns].
    errors : {'ignore', 'raise', 'coerce'}, default 'raise'
        If 'raise', invalid parsing will raise an exception.
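A short sketch of the conversions above. Note that the box parameter was removed in later pandas versions, so this relies only on arg and unit, which behave the same across versions.

```python
import pandas as pd

# A string parses to a Timedelta scalar.
td = pd.to_timedelta("1 days 06:05:01")

# Numeric input needs a unit; here 90 seconds.
ninety = pd.to_timedelta(90, unit="s")

# A list of strings returns a TimedeltaIndex.
idx = pd.to_timedelta(["1 days", "2 days"])
```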

Intro to Data Structures

We'll start with a quick, non-comprehensive overview of the fundamental data structures in pandas to get you started. The fundamental behavior regarding data types, indexing, and axis labeling / alignment applies across all of the objects. To get started, import numpy and load pandas into your namespace:

In [1]: import numpy as np
In [2]: import pandas as pd

Here is a basic tenet to keep in mind: data alignment is intrinsic. The link between labels and data will not be broken unless done so explicitly by you.
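The intrinsic alignment mentioned above can be seen with two Series whose labels only partly overlap (the data here is illustrative):

```python
import numpy as np
import pandas as pd

s1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
s2 = pd.Series([10, 20, 30], index=["b", "c", "d"])

# Arithmetic aligns on labels: shared labels are summed,
# labels present on only one side become NaN rather than
# being silently dropped or misaligned by position.
total = s1 + s2
```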

DataFrame.round()

DataFrame.round(decimals=0, *args, **kwargs) [source]

Round a DataFrame to a variable number of decimal places.

New in version 0.17.0.

Parameters:
    decimals : int, dict, Series
        Number of decimal places to round each column to. If an int is given, round each column to the same number of places. Otherwise dict and Series round to variable numbers of places. Column names should be in the keys if decimals is dict-like, or in the index if decimals is a Series. Any columns not included in decimals will be left as is.
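A sketch of the int and dict forms of decimals, on a small illustrative frame:

```python
import pandas as pd

df = pd.DataFrame({"dogs": [0.21, 0.01], "cats": [0.32, 0.45]})

# An int rounds every column to the same number of places...
same = df.round(1)

# ...while a dict rounds per column; any column missing from
# the dict would be left untouched.
per_col = df.round({"dogs": 1, "cats": 0})
```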

Index.symmetric_difference()

Index.symmetric_difference(other, result_name=None) [source]

Compute the symmetric difference of two Index objects. It's sorted if sorting is possible.

Parameters:
    other : Index or array-like
    result_name : str

Returns:
    symmetric_difference : Index

Notes

symmetric_difference contains elements that appear in either idx1 or idx2 but not both. Equivalent to the Index created by idx1.difference(idx2) | idx2.difference(idx1) with duplicates dropped.

Examples

>>> idx1 = Index([1, 2, 3, 4])
>>> idx2 = Index([2, 3, 4, 5])
>>> idx1.symmetric_difference(idx2)
Int64Index([1, 5], dtype='int64')
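The same operation as a runnable sketch: elements in exactly one of the two indexes, sorted.

```python
import pandas as pd

idx1 = pd.Index([1, 2, 3, 4])
idx2 = pd.Index([2, 3, 4, 5])

# 2, 3, 4 appear in both and are dropped; 1 and 5 remain.
sym = idx1.symmetric_difference(idx2)
```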

MultiIndex.union()

MultiIndex.union(other) [source]

Form the union of two MultiIndex objects, sorting if possible.

Parameters:
    other : MultiIndex or array / Index of tuples

Returns:
    Index

>>> index.union(index2)
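A concrete sketch with two small illustrative MultiIndexes: the union keeps each tuple once and sorts when possible.

```python
import pandas as pd

mi1 = pd.MultiIndex.from_tuples([(1, "a"), (2, "b")])
mi2 = pd.MultiIndex.from_tuples([(2, "b"), (3, "c")])

# (2, "b") appears in both but is kept only once.
combined = mi1.union(mi2)
```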

DataFrame.from_items()

classmethod DataFrame.from_items(items, columns=None, orient='columns') [source]

Convert (key, value) pairs to a DataFrame. The keys will be the axis index (usually the columns, but depends on the specified orientation). The values should be arrays or Series.

Parameters:
    items : sequence of (key, value) pairs
        Values should be arrays or Series.
    columns : sequence of column labels, optional
        Must be passed if orient='index'.
    orient : {'columns', 'index'}, default 'columns'
        The "orientation" of the data. If the keys of the input correspond to column labels, pass 'columns' (default). Otherwise, if the keys correspond to the index, pass 'index'.
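Note that from_items was deprecated and later removed in newer pandas. With the default orient='columns', the same frame can be built by passing the pairs to the DataFrame constructor as a dict, as this sketch shows:

```python
import pandas as pd

items = [("A", [1, 2, 3]), ("B", [4, 5, 6])]

# On older pandas this is DataFrame.from_items(items); the dict
# form below builds the same column-oriented frame and also works
# on pandas >= 1.0, where from_items no longer exists.
df = pd.DataFrame(dict(items))
```

Column order is preserved because Python dicts keep insertion order.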

Index.dropna()

Index.dropna(how='any') [source]

Return Index without NA/NaN values.

Parameters:
    how : {'any', 'all'}, default 'any'
        If the Index is a MultiIndex, drop the value when any or all levels are NaN.

Returns:
    valid : Index
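A minimal sketch on a flat Index, where the default how='any' simply drops the NaN labels:

```python
import numpy as np
import pandas as pd

idx = pd.Index([1.0, np.nan, 3.0])

# The NaN label is removed; the remaining labels keep their order.
clean = idx.dropna()
```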

TimedeltaIndex.identical()

TimedeltaIndex.identical(other) [source]

Similar to equals, but also checks that other comparable attributes are equal.
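The difference from equals can be seen by comparing two indexes with the same values but different names (an attribute equals ignores but identical checks):

```python
import pandas as pd

tdi1 = pd.timedelta_range("1 day", periods=3, name="a")
tdi2 = tdi1.rename("b")

# Same values, so equals succeeds...
values_match = tdi1.equals(tdi2)

# ...but the names differ, so identical fails.
fully_match = tdi1.identical(tdi2)
```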

GroupBy.head()

GroupBy.head(n=5) [source]

Returns first n rows of each group. Essentially equivalent to .apply(lambda x: x.head(n)), except it ignores the as_index flag.

See also: pandas.Series.groupby, pandas.DataFrame.groupby, pandas.Panel.groupby

Examples

>>> df = DataFrame([[1, 2], [1, 4], [5, 6]], columns=['A', 'B'])
>>> df.groupby('A', as_index=False).head(1)
   A  B
0  1  2
2  5  6
>>> df.groupby('A').head(1)
   A  B
0  1  2
2  5  6