Series.order()

Series.order(na_last=None, ascending=True, kind='quicksort', na_position='last', inplace=False) [source]

DEPRECATED: use Series.sort_values().

Sorts the Series by value, maintaining the index-value link. This returns a new Series by default; Series.sort is the equivalent inplace method.

Parameters:
na_last : boolean (optional, default=True). DEPRECATED; use na_position. Put NaNs at the beginning or end.
ascending : boolean, default True. Sort ascending. Passing False sorts descending.
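
A minimal sketch of the recommended replacement, Series.sort_values(); the Series used here is illustrative.

import numpy as np
import pandas as pd

s = pd.Series([3.0, np.nan, 1.0, 2.0], index=list("abcd"))

# Ascending sort with NaNs pushed to the end (the default na_position).
print(s.sort_values(ascending=True, na_position="last"))

# Descending sort with NaNs placed first.
print(s.sort_values(ascending=False, na_position="first"))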

Series.cat.ordered

Series.cat.ordered

Gets the ordered attribute of the underlying Categorical, i.e. whether the categories have a meaningful order.
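
An illustrative sketch of reading the ordered flag on a categorical Series; the categories are made up for the example.

import pandas as pd

s = pd.Series(["low", "high", "medium"], dtype="category")
print(s.cat.ordered)  # False: categories are unordered by default

s = s.cat.set_categories(["low", "medium", "high"], ordered=True)
print(s.cat.ordered)  # True after imposing an explicit order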

DataFrame.lookup()

DataFrame.lookup(row_labels, col_labels) [source]

Label-based "fancy indexing" function for DataFrame. Given equal-length arrays of row and column labels, return an array of the values corresponding to each (row, col) pair.

Parameters:
row_labels : sequence. The row labels to use for lookup.
col_labels : sequence. The column labels to use for lookup.

Returns:
values : ndarray

Notes
Akin to:

result = []
for row, col in zip(row_labels, col_labels):
    result.append(df.get_value(row, col))
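
A hedged sketch of lookup() in use (the method was removed from later pandas releases, so this assumes a version that still provides it); the frame and labels are illustrative.

import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]},
                  index=["x", "y", "z"])

# One value per (row, col) pair: ("x", "A"), ("y", "C"), ("z", "B").
vals = df.lookup(["x", "y", "z"], ["A", "C", "B"])
print(vals)  # [1 8 6]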

Panel.all()

Panel.all(axis=None, bool_only=None, skipna=None, level=None, **kwargs) [source]

Return whether all elements are True over the requested axis.

Parameters:
axis : {items (0), major_axis (1), minor_axis (2)}
skipna : boolean, default True. Exclude NA/null values. If an entire row/column is NA, the result will be NA.
level : int or level name, default None. If the axis is a MultiIndex (hierarchical), count along a particular level, collapsing into a DataFrame.
bool_only : boolean, default None. Include only boolean data.
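
An illustrative sketch, assuming an older pandas release that still ships Panel (it was deprecated in 0.20 and later removed); the data is random boolean values.

import numpy as np
import pandas as pd

p = pd.Panel(np.random.rand(2, 3, 4) > 0.1, items=["i0", "i1"])

# Reduce over the items axis; the result is a boolean DataFrame indexed by
# major_axis, with the minor_axis labels as columns.
print(p.all(axis="items"))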

TimedeltaIndex.memory_usage()

TimedeltaIndex.memory_usage(deep=False) [source]

Memory usage of the values.

Parameters:
deep : bool. Introspect the data deeply, interrogating object dtypes for system-level memory consumption.

Returns:
bytes used

See also: numpy.ndarray.nbytes

Notes
Memory usage does not include memory consumed by elements that are not components of the array if deep=False.
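
An illustrative sketch of reporting the memory footprint of a TimedeltaIndex; the range below is made up.

import pandas as pd

tdi = pd.timedelta_range(start="1 day", periods=5, freq="D")
print(tdi.memory_usage())           # bytes used by the underlying data
print(tdi.memory_usage(deep=True))  # deep introspection (same here, no object dtypes)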

TimedeltaIndex.is_()

TimedeltaIndex.is_(other) [source]

A more flexible, faster check than the is operator, and one that works through views. Note: this is not the same as Index.identical(), which also checks that the metadata is the same.

Parameters:
other : object. The other object to compare against.

Returns:
bool. True if both have the same underlying data, False otherwise.
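
An illustrative sketch of is_() versus a plain identity check; the index is made up.

import pandas as pd

tdi = pd.timedelta_range(start="1 day", periods=3, freq="D")

print(tdi.is_(tdi.view()))  # True: a view shares the same underlying data
print(tdi is tdi.view())    # False: different Python objects
print(tdi.is_(tdi.copy()))  # False: a copy gets its own data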

Panel4D.to_hdf()

Panel4D.to_hdf(path_or_buf, key, **kwargs) [source]

Write the contained data to an HDF5 file using HDFStore.

Parameters:
path_or_buf : the path (string) or HDFStore object
key : string identifier for the group in the store
mode : optional, {'a', 'w', 'r+'}, default 'a'
    'w': write; a new file is created (an existing file with the same name would be deleted).
    'a': append; an existing file is opened for reading and writing, and if the file does not exist it is created.
    'r+': similar to 'a', but the file must already exist.
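
A hedged sketch of the call: to_hdf shares this signature across pandas containers, and Panel4D itself was removed from later releases, so the example below uses a DataFrame. It assumes PyTables (the tables package) is installed; the path and key are illustrative.

import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})

# mode='a' (the default) appends to an existing file or creates a new one;
# mode='w' would overwrite any existing file of the same name.
df.to_hdf("store.h5", key="mygroup", mode="a")

restored = pd.read_hdf("store.h5", key="mygroup")
print(restored)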

DataFrame.convert_objects()

DataFrame.convert_objects(convert_dates=True, convert_numeric=False, convert_timedeltas=True, copy=True) [source]

Deprecated. Attempt to infer better dtypes for object columns.

Parameters:
convert_dates : boolean, default True. If True, convert to date where possible. If 'coerce', force conversion, with unconvertible values becoming NaT.
convert_numeric : boolean, default False. If True, attempt to coerce to numbers (including strings), with unconvertible values becoming NaN.
convert_timedeltas : boolean, default True. If True, convert to timedelta where possible.
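
Since convert_objects() is deprecated, here is a hedged sketch of converting object columns explicitly with the dedicated converters instead; the frame is illustrative.

import pandas as pd

df = pd.DataFrame({"nums": ["1", "2", "oops"],
                   "dates": ["2016-01-01", "bad", None]}, dtype=object)

df["nums"] = pd.to_numeric(df["nums"], errors="coerce")     # 'oops' becomes NaN
df["dates"] = pd.to_datetime(df["dates"], errors="coerce")  # 'bad' becomes NaT
print(df.dtypes)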

DataFrame.applymap()

DataFrame.applymap(func) [source]

Apply a function to a DataFrame that is intended to operate elementwise, i.e. like doing map(func, series) for each series in the DataFrame.

Parameters:
func : function. A Python function that returns a single value from a single value.

Returns:
applied : DataFrame

See also: DataFrame.apply, for operations on rows/columns.

Examples
>>> df = pd.DataFrame(np.random.randn(3, 3))
>>> df
          0         1         2
0 -0.029638  1.081563  1.2803
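
An illustrative sketch of an elementwise transformation with applymap; the frame and formatting function are made up.

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(3, 3))

# Format every float as a string with two decimal places.
formatted = df.applymap(lambda x: "%.2f" % x)
print(formatted)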

Series.to_period()

Series.to_period(freq=None, copy=True) [source]

Convert the Series from a DatetimeIndex to a PeriodIndex with the desired frequency (inferred from the index if not passed).

Parameters:
freq : string, default None

Returns:
ts : Series with PeriodIndex
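
An illustrative sketch of converting a datetime-indexed Series to period form; the dates and values are made up.

import pandas as pd

s = pd.Series([1.0, 2.0, 3.0],
              index=pd.date_range("2016-01-31", periods=3, freq="M"))

# The monthly frequency is inferred from the DatetimeIndex.
monthly = s.to_period()
print(monthly.index)  # PeriodIndex: 2016-01, 2016-02, 2016-03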