Here we discuss a lot of the essential functionality common to the pandas data structures. Here's how to create some of the objects used in the examples from the previous section:
In [1]: index = pd.date_range('1/1/2000', periods=8)

In [2]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])

In [3]: df = pd.DataFrame(np.random.randn(8, 3), index=index,
   ...:                   columns=['A', 'B', 'C'])

In [4]: wp = pd.Panel(np.random.randn(2, 5, 4), items=['Item1', 'Item2'],
   ...:               major_axis=pd.date_range('1/1/2000', periods=5),
   ...:               minor_axis=['A', 'B', 'C', 'D'])
Head and Tail
To view a small sample of a Series or DataFrame object, use the head() and tail() methods. The default number of elements to display is five, but you may pass a custom number.
In [5]: long_series = pd.Series(np.random.randn(1000)) In [6]: long_series.head() Out[6]: 0 -0.305384 1 -0.479195 2 0.095031 3 -0.270099 4 -0.707140 dtype: float64 In [7]: long_series.tail(3) Out[7]: 997 0.588446 998 0.026465 999 -1.728222 dtype: float64
Attributes and the raw ndarray(s)
pandas objects have a number of attributes enabling you to access the metadata:
- shape: gives the axis dimensions of the object, consistent with ndarray
- Axis labels
- Series: index (only axis)
- DataFrame: index (rows) and columns
- Panel: items, major_axis, and minor_axis
Note, these attributes can be safely assigned to!
In [8]: df[:2] Out[8]: A B C 2000-01-01 0.187483 -1.933946 0.377312 2000-01-02 0.734122 2.141616 -0.011225 In [9]: df.columns = [x.lower() for x in df.columns] In [10]: df Out[10]: a b c 2000-01-01 0.187483 -1.933946 0.377312 2000-01-02 0.734122 2.141616 -0.011225 2000-01-03 0.048869 -1.360687 -0.479010 2000-01-04 -0.859661 -0.231595 -0.527750 2000-01-05 -1.296337 0.150680 0.123836 2000-01-06 0.571764 1.555563 -0.823761 2000-01-07 0.535420 -1.032853 1.469725 2000-01-08 1.304124 1.449735 0.203109
To get the actual data inside a data structure, one need only access the values property:
In [11]: s.values Out[11]: array([ 0.1122, 0.8717, -0.8161, -0.7849, 1.0307]) In [12]: df.values Out[12]: array([[ 0.1875, -1.9339, 0.3773], [ 0.7341, 2.1416, -0.0112], [ 0.0489, -1.3607, -0.479 ], [-0.8597, -0.2316, -0.5278], [-1.2963, 0.1507, 0.1238], [ 0.5718, 1.5556, -0.8238], [ 0.5354, -1.0329, 1.4697], [ 1.3041, 1.4497, 0.2031]]) In [13]: wp.values Out[13]: array([[[-1.032 , 0.9698, -0.9627, 1.3821], [-0.9388, 0.6691, -0.4336, -0.2736], [ 0.6804, -0.3084, -0.2761, -1.8212], [-1.9936, -1.9274, -2.0279, 1.625 ], [ 0.5511, 3.0593, 0.4553, -0.0307]], [[ 0.9357, 1.0612, -2.1079, 0.1999], [ 0.3236, -0.6416, -0.5875, 0.0539], [ 0.1949, -0.382 , 0.3186, 2.0891], [-0.7283, -0.0903, -0.7482, 1.3189], [-2.0298, 0.7927, 0.461 , -0.5427]]])
If a DataFrame or Panel contains homogeneously-typed data, the ndarray can actually be modified in-place, and the changes will be reflected in the data structure. For heterogeneous data (e.g. some of the DataFrame's columns are not all the same dtype), this will not be the case. The values attribute itself, unlike the axis labels, cannot be assigned to.
Note
When working with heterogeneous data, the dtype of the resulting ndarray will be chosen to accommodate all of the data involved. For example, if strings are involved, the result will be of object dtype. If there are only floats and integers, the resulting array will be of float dtype.
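For example (a minimal sketch; the frames here are illustrative, not from the examples above):

>>> pd.DataFrame({'x': [1.0, 2.0], 'y': ['a', 'b']}).values.dtype  # strings involved
dtype('O')
>>> pd.DataFrame({'x': [1.0, 2.0], 'y': [1, 2]}).values.dtype      # floats and integers only
dtype('float64')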
Accelerated operations
pandas has support for accelerating certain types of binary numerical and boolean operations using the numexpr library (starting in 0.11.0) and the bottleneck library.

These libraries are especially useful when dealing with large data sets, and provide large speedups. numexpr uses smart chunking, caching, and multiple cores. bottleneck is a set of specialized cython routines that are especially fast when dealing with arrays that have nans.
Here is a sample (using 100 column x 100,000 row DataFrames):
Operation | 0.11.0 (ms) | Prior Version (ms) | Ratio to Prior |
---|---|---|---|
df1 > df2 | 13.32 | 125.35 | 0.1063 |
df1 * df2 | 21.71 | 36.63 | 0.5928 |
df1 + df2 | 22.04 | 36.50 | 0.6039 |
You are highly encouraged to install both libraries. See the section Recommended Dependencies for more installation info.
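As a rough sketch of how one might reproduce such a benchmark in an IPython session (the sizes mirror the table above; exact figures will vary by machine and by which libraries are installed):

In [1]: df1 = pd.DataFrame(np.random.randn(100000, 100))

In [2]: df2 = pd.DataFrame(np.random.randn(100000, 100))

In [3]: %timeit df1 > df2   # accelerated by numexpr when it is available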
Flexible binary operations
With binary operations between pandas data structures, there are two key points of interest:
- Broadcasting behavior between higher- (e.g. DataFrame) and lower-dimensional (e.g. Series) objects.
- Missing data in computations
We will demonstrate how to manage these issues independently, though they can be handled simultaneously.
Matching / broadcasting behavior
DataFrame has the methods add(), sub(), mul(), div() and related functions radd(), rsub(), ... for carrying out binary operations. For broadcasting behavior, Series input is of primary interest. Using these functions, you can choose to either match on the index or columns via the axis keyword:
In [14]: df = pd.DataFrame({'one' : pd.Series(np.random.randn(3), index=['a', 'b', 'c']), ....: 'two' : pd.Series(np.random.randn(4), index=['a', 'b', 'c', 'd']), ....: 'three' : pd.Series(np.random.randn(3), index=['b', 'c', 'd'])}) ....: In [15]: df Out[15]: one three two a -0.626544 NaN -0.351587 b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789 d NaN 1.124472 -1.101558 In [16]: row = df.ix[1] In [17]: column = df['two'] In [18]: df.sub(row, axis='columns') Out[18]: one three two a -0.487650 NaN -1.487837 b 0.000000 0.000000 0.000000 c 0.150512 0.639504 -1.585038 d NaN 1.301762 -2.237808 In [19]: df.sub(row, axis=1) Out[19]: one three two a -0.487650 NaN -1.487837 b 0.000000 0.000000 0.000000 c 0.150512 0.639504 -1.585038 d NaN 1.301762 -2.237808 In [20]: df.sub(column, axis='index') Out[20]: one three two a -0.274957 NaN 0.0 b -1.275144 -1.313539 0.0 c 0.460406 0.911003 0.0 d NaN 2.226031 0.0 In [21]: df.sub(column, axis=0) Out[21]: one three two a -0.274957 NaN 0.0 b -1.275144 -1.313539 0.0 c 0.460406 0.911003 0.0 d NaN 2.226031 0.0
Furthermore you can align a level of a multi-indexed DataFrame with a Series.
In [22]: dfmi = df.copy() In [23]: dfmi.index = pd.MultiIndex.from_tuples([(1,'a'),(1,'b'),(1,'c'),(2,'a')], ....: names=['first','second']) ....: In [24]: dfmi.sub(column, axis=0, level='second') Out[24]: one three two first second 1 a -0.274957 NaN 0.000000 b -1.275144 -1.313539 0.000000 c 0.460406 0.911003 0.000000 2 a NaN 1.476060 -0.749971
With Panel, describing the matching behavior is a bit more difficult, so the arithmetic methods instead (and perhaps confusingly?) give you the option to specify the broadcast axis. For example, suppose we wished to demean the data over a particular axis. This can be accomplished by taking the mean over an axis and broadcasting over the same axis:
In [25]: major_mean = wp.mean(axis='major') In [26]: major_mean Out[26]: Item1 Item2 A -0.546569 -0.260774 B 0.492478 0.147993 C -0.649010 -0.532794 D 0.176307 0.623812 In [27]: wp.sub(major_mean, axis='major') Out[27]: <class 'pandas.core.panel.Panel'> Dimensions: 2 (items) x 5 (major_axis) x 4 (minor_axis) Items axis: Item1 to Item2 Major_axis axis: 2000-01-01 00:00:00 to 2000-01-05 00:00:00 Minor_axis axis: A to D
And similarly for axis="items" and axis="minor".
Note
I could be convinced to make the axis argument in the DataFrame methods match the broadcasting behavior of Panel. Though it would require a transition period so users can change their code...
Series and Index also support the divmod() builtin. This function performs the floor division and modulo operation at the same time, returning a two-tuple of the same type as the left-hand side. For example:
In [28]: s = pd.Series(np.arange(10)) In [29]: s Out[29]: 0 0 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9 9 dtype: int64 In [30]: div, rem = divmod(s, 3) In [31]: div Out[31]: 0 0 1 0 2 0 3 1 4 1 5 1 6 2 7 2 8 2 9 3 dtype: int64 In [32]: rem Out[32]: 0 0 1 1 2 2 3 0 4 1 5 2 6 0 7 1 8 2 9 0 dtype: int64 In [33]: idx = pd.Index(np.arange(10)) In [34]: idx Out[34]: Int64Index([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='int64') In [35]: div, rem = divmod(idx, 3) In [36]: div Out[36]: Int64Index([0, 0, 0, 1, 1, 1, 2, 2, 2, 3], dtype='int64') In [37]: rem Out[37]: Int64Index([0, 1, 2, 0, 1, 2, 0, 1, 2, 0], dtype='int64')
We can also do elementwise divmod():
In [38]: div, rem = divmod(s, [2, 2, 3, 3, 4, 4, 5, 5, 6, 6]) In [39]: div Out[39]: 0 0 1 0 2 0 3 1 4 1 5 1 6 1 7 1 8 1 9 1 dtype: int64 In [40]: rem Out[40]: 0 0 1 1 2 2 3 0 4 0 5 1 6 1 7 2 8 2 9 3 dtype: int64
Missing data / operations with fill values
In Series and DataFrame (though not yet in Panel), the arithmetic functions have the option of inputting a fill_value, namely a value to substitute when at most one of the values at a location is missing. For example, when adding two DataFrame objects, you may wish to treat NaN as 0 unless both DataFrames are missing that value, in which case the result will be NaN (you can later replace NaN with some other value using fillna if you wish).
In [41]: df Out[41]: one three two a -0.626544 NaN -0.351587 b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789 d NaN 1.124472 -1.101558 In [42]: df2 Out[42]: one three two a -0.626544 1.000000 -0.351587 b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789 d NaN 1.124472 -1.101558 In [43]: df + df2 Out[43]: one three two a -1.253088 NaN -0.703174 b -0.277789 -0.354579 2.272499 c 0.023235 0.924429 -0.897577 d NaN 2.248945 -2.203116 In [44]: df.add(df2, fill_value=0) Out[44]: one three two a -1.253088 1.000000 -0.703174 b -0.277789 -0.354579 2.272499 c 0.023235 0.924429 -0.897577 d NaN 2.248945 -2.203116
Flexible Comparisons
Starting in v0.8, pandas introduced binary comparison methods eq, ne, lt, gt, le, and ge to Series and DataFrame whose behavior is analogous to the binary arithmetic operations described above:
In [45]: df.gt(df2) Out[45]: one three two a False False False b False False False c False False False d False False False In [46]: df2.ne(df) Out[46]: one three two a False True False b False False False c False False False d True False False
These operations produce a pandas object of the same type as the left-hand-side input that is of dtype bool. These boolean objects can be used in indexing operations; see here.
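For example, a boolean DataFrame can be used directly as a mask (a brief sketch, using df from above):

>>> df[df > 0]   # entries where the condition is False become NaN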
Boolean Reductions
You can apply the reductions: empty
, any()
, all()
, and bool()
to provide a way to summarize a boolean result.
In [47]: (df > 0).all() Out[47]: one False three False two False dtype: bool In [48]: (df > 0).any() Out[48]: one True three True two True dtype: bool
You can reduce to a final boolean value.
In [49]: (df > 0).any().any() Out[49]: True
You can test if a pandas object is empty, via the empty
property.
In [50]: df.empty Out[50]: False In [51]: pd.DataFrame(columns=list('ABC')).empty Out[51]: True
To evaluate single-element pandas objects in a boolean context, use the method bool()
:
In [52]: pd.Series([True]).bool() Out[52]: True In [53]: pd.Series([False]).bool() Out[53]: False In [54]: pd.DataFrame([[True]]).bool() Out[54]: True In [55]: pd.DataFrame([[False]]).bool() Out[55]: False
Warning
You might be tempted to do the following:
>>> if df: ...
Or
>>> df and df2
Both of these will raise a ValueError, as you are trying to compare multiple values at once:
ValueError: The truth value of an array is ambiguous. Use a.empty, a.any() or a.all().
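Instead, reduce explicitly to a single boolean, for example:

>>> if df.empty:
...     print('empty!')

>>> if (df > 0).any().any():
...     print('df has at least one positive entry')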
See gotchas for a more detailed discussion.
Comparing if objects are equivalent
Often you may find there is more than one way to compute the same result. As a simple example, consider df+df and df*2. To test that these two computations produce the same result, given the tools shown above, you might imagine using (df+df == df*2).all(). But in fact, this expression is False:
In [56]: df+df == df*2 Out[56]: one three two a True False True b True True True c True True True d False True True In [57]: (df+df == df*2).all() Out[57]: one False three False two True dtype: bool
Notice that the boolean DataFrame df+df == df*2 contains some False values! That is because NaNs do not compare as equal:
In [58]: np.nan == np.nan Out[58]: False
So, as of v0.13.1, NDFrames (such as Series, DataFrames, and Panels) have an equals() method for testing equality, with NaNs in corresponding locations treated as equal.
In [59]: (df+df).equals(df*2) Out[59]: True
Note that the Series or DataFrame index needs to be in the same order for equality to be True:
In [60]: df1 = pd.DataFrame({'col':['foo', 0, np.nan]}) In [61]: df2 = pd.DataFrame({'col':[np.nan, 0, 'foo']}, index=[2,1,0]) In [62]: df1.equals(df2) Out[62]: False In [63]: df1.equals(df2.sort_index()) Out[63]: True
Comparing array-like objects
You can conveniently do element-wise comparisons when comparing a pandas data structure with a scalar value:
In [64]: pd.Series(['foo', 'bar', 'baz']) == 'foo' Out[64]: 0 True 1 False 2 False dtype: bool In [65]: pd.Index(['foo', 'bar', 'baz']) == 'foo' Out[65]: array([ True, False, False], dtype=bool)
Pandas also handles element-wise comparisons between different array-like objects of the same length:
In [66]: pd.Series(['foo', 'bar', 'baz']) == pd.Index(['foo', 'bar', 'qux']) Out[66]: 0 True 1 True 2 False dtype: bool In [67]: pd.Series(['foo', 'bar', 'baz']) == np.array(['foo', 'bar', 'qux']) Out[67]: 0 True 1 True 2 False dtype: bool
Trying to compare Index or Series objects of different lengths will raise a ValueError:
In [55]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo', 'bar']) ValueError: Series lengths must match to compare In [56]: pd.Series(['foo', 'bar', 'baz']) == pd.Series(['foo']) ValueError: Series lengths must match to compare
Note that this is different from the numpy behavior where a comparison can be broadcast:
In [68]: np.array([1, 2, 3]) == np.array([2]) Out[68]: array([False, True, False], dtype=bool)
or it can return False if broadcasting cannot be done:
In [69]: np.array([1, 2, 3]) == np.array([1, 2]) Out[69]: False
Combining overlapping data sets
A problem occasionally arising is the combination of two similar data sets where values in one are preferred over the other. An example would be two data series representing a particular economic indicator where one is considered to be of 'higher quality'. However, the lower quality series might extend further back in history or have more complete data coverage. As such, we would like to combine two DataFrame objects where missing values in one DataFrame are conditionally filled with like-labeled values from the other DataFrame. The function implementing this operation is combine_first(), which we illustrate:
In [70]: df1 = pd.DataFrame({'A' : [1., np.nan, 3., 5., np.nan], ....: 'B' : [np.nan, 2., 3., np.nan, 6.]}) ....: In [71]: df2 = pd.DataFrame({'A' : [5., 2., 4., np.nan, 3., 7.], ....: 'B' : [np.nan, np.nan, 3., 4., 6., 8.]}) ....: In [72]: df1 Out[72]: A B 0 1.0 NaN 1 NaN 2.0 2 3.0 3.0 3 5.0 NaN 4 NaN 6.0 In [73]: df2 Out[73]: A B 0 5.0 NaN 1 2.0 NaN 2 4.0 3.0 3 NaN 4.0 4 3.0 6.0 5 7.0 8.0 In [74]: df1.combine_first(df2) Out[74]: A B 0 1.0 NaN 1 2.0 2.0 2 3.0 3.0 3 5.0 4.0 4 3.0 6.0 5 7.0 8.0
General DataFrame Combine
The combine_first() method above calls the more general DataFrame method combine(). This method takes another DataFrame and a combiner function, aligns the input DataFrames, and then passes the combiner function pairs of Series (i.e., columns whose names are the same).
So, for instance, to reproduce combine_first() as above:
In [75]: combiner = lambda x, y: np.where(pd.isnull(x), y, x) In [76]: df1.combine(df2, combiner) Out[76]: A B 0 1.0 NaN 1 2.0 2.0 2 3.0 3.0 3 5.0 4.0 4 3.0 6.0 5 7.0 8.0
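Any function of two aligned Series can be plugged in; for instance, a combiner taking the elementwise mean of the two frames (a brief sketch):

>>> df1.combine(df2, lambda s1, s2: (s1 + s2) / 2)   # NaN wherever either frame is missing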
Descriptive statistics
There are a large number of methods for computing descriptive statistics and other related operations on Series, DataFrame, and Panel. Most of these are aggregations (hence producing a lower-dimensional result) like sum(), mean(), and quantile(), but some of them, like cumsum() and cumprod(), produce an object of the same size. Generally speaking, these methods take an axis argument, just like ndarray.{sum, std, ...}, but the axis can be specified by name or integer:
- Series: no axis argument needed
- DataFrame: 'index' (axis=0, default), 'columns' (axis=1)
- Panel: 'items' (axis=0), 'major' (axis=1, default), 'minor' (axis=2)
For example:
In [77]: df Out[77]: one three two a -0.626544 NaN -0.351587 b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789 d NaN 1.124472 -1.101558 In [78]: df.mean(0) Out[78]: one -0.251274 three 0.469799 two -0.191421 dtype: float64 In [79]: df.mean(1) Out[79]: a -0.489066 b 0.273355 c 0.008348 d 0.011457 dtype: float64
All such methods have a skipna option signaling whether to exclude missing data (True by default):
In [80]: df.sum(0, skipna=False) Out[80]: one NaN three NaN two -0.765684 dtype: float64 In [81]: df.sum(axis=1, skipna=True) Out[81]: a -0.978131 b 0.820066 c 0.025044 d 0.022914 dtype: float64
Combined with the broadcasting / arithmetic behavior, one can describe various statistical procedures, like standardization (rendering data zero mean and standard deviation 1), very concisely:
In [82]: ts_stand = (df - df.mean()) / df.std() In [83]: ts_stand.std() Out[83]: one 1.0 three 1.0 two 1.0 dtype: float64 In [84]: xs_stand = df.sub(df.mean(1), axis=0).div(df.std(1), axis=0) In [85]: xs_stand.std(1) Out[85]: a 1.0 b 1.0 c 1.0 d 1.0 dtype: float64
Note that methods like cumsum() and cumprod() preserve the location of NA values:
In [86]: df.cumsum() Out[86]: one three two a -0.626544 NaN -0.351587 b -0.765438 -0.177289 0.784662 c -0.753821 0.284925 0.335874 d NaN 1.409398 -0.765684
Here is a quick reference summary table of common functions. Each also takes an optional level parameter which applies only if the object has a hierarchical index (see the sketch after the table).
Function | Description |
---|---|
count | Number of non-null observations |
sum | Sum of values |
mean | Mean of values |
mad | Mean absolute deviation |
median | Arithmetic median of values |
min | Minimum |
max | Maximum |
mode | Mode |
abs | Absolute Value |
prod | Product of values |
std | Bessel-corrected sample standard deviation |
var | Unbiased variance |
sem | Standard error of the mean |
skew | Sample skewness (3rd moment) |
kurt | Sample kurtosis (4th moment) |
quantile | Sample quantile (value at %) |
cumsum | Cumulative sum |
cumprod | Cumulative product |
cummax | Cumulative maximum |
cummin | Cumulative minimum |
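For example, reusing the MultiIndexed dfmi built in the broadcasting section above, one might aggregate within a level (a brief sketch):

>>> dfmi.sum(level='second')   # sums rows sharing the same 'second' label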
Note that by chance some NumPy methods, like mean, std, and sum, will exclude NAs on Series input by default:
In [87]: np.mean(df['one']) Out[87]: -0.2512736517583951 In [88]: np.mean(df['one'].values) Out[88]: nan
Series also has a method nunique() which will return the number of unique non-null values:
In [89]: series = pd.Series(np.random.randn(500)) In [90]: series[20:500] = np.nan In [91]: series[10:20] = 5 In [92]: series.nunique() Out[92]: 11
Summarizing data: describe
There is a convenient describe() function which computes a variety of summary statistics about a Series or the columns of a DataFrame (excluding NAs of course):
In [93]: series = pd.Series(np.random.randn(1000)) In [94]: series[::2] = np.nan In [95]: series.describe() Out[95]: count 500.000000 mean -0.039663 std 1.069371 min -3.463789 25% -0.731101 50% -0.058918 75% 0.672758 max 3.120271 dtype: float64 In [96]: frame = pd.DataFrame(np.random.randn(1000, 5), columns=['a', 'b', 'c', 'd', 'e']) In [97]: frame.ix[::2] = np.nan In [98]: frame.describe() Out[98]: a b c d e count 500.000000 500.000000 500.000000 500.000000 500.000000 mean 0.000954 -0.044014 0.075936 -0.003679 0.020751 std 1.005133 0.974882 0.967432 1.004732 0.963812 min -3.010899 -2.782760 -3.401252 -2.944925 -3.794127 25% -0.682900 -0.681161 -0.528190 -0.663503 -0.615717 50% -0.001651 -0.006279 0.040098 -0.003378 0.006282 75% 0.656439 0.632852 0.717919 0.687214 0.653423 max 3.007143 2.627688 2.702490 2.850852 3.072117
You can select specific percentiles to include in the output:
In [99]: series.describe(percentiles=[.05, .25, .75, .95]) Out[99]: count 500.000000 mean -0.039663 std 1.069371 min -3.463789 5% -1.741334 25% -0.731101 50% -0.058918 75% 0.672758 95% 1.854383 max 3.120271 dtype: float64
By default, the median is always included.
For a non-numerical Series object, describe() will give a simple summary of the number of unique values and most frequently occurring values:
In [100]: s = pd.Series(['a', 'a', 'b', 'b', 'a', 'a', np.nan, 'c', 'd', 'a']) In [101]: s.describe() Out[101]: count 9 unique 4 top a freq 5 dtype: object
Note that on a mixed-type DataFrame object, describe() will restrict the summary to include only numerical columns or, if none are, only categorical columns:
In [102]: frame = pd.DataFrame({'a': ['Yes', 'Yes', 'No', 'No'], 'b': range(4)}) In [103]: frame.describe() Out[103]: b count 4.000000 mean 1.500000 std 1.290994 min 0.000000 25% 0.750000 50% 1.500000 75% 2.250000 max 3.000000
This behaviour can be controlled by providing a list of types as include/exclude arguments. The special value all can also be used:
In [104]: frame.describe(include=['object']) Out[104]: a count 4 unique 2 top No freq 2 In [105]: frame.describe(include=['number']) Out[105]: b count 4.000000 mean 1.500000 std 1.290994 min 0.000000 25% 0.750000 50% 1.500000 75% 2.250000 max 3.000000 In [106]: frame.describe(include='all') Out[106]: a b count 4 4.000000 unique 2 NaN top No NaN freq 2 NaN mean NaN 1.500000 std NaN 1.290994 min NaN 0.000000 25% NaN 0.750000 50% NaN 1.500000 75% NaN 2.250000 max NaN 3.000000
That feature relies on select_dtypes. Refer to there for details about accepted inputs.
Index of Min/Max Values
The idxmin() and idxmax() functions on Series and DataFrame compute the index labels with the minimum and maximum corresponding values:
In [107]: s1 = pd.Series(np.random.randn(5)) In [108]: s1 Out[108]: 0 -0.872725 1 1.522411 2 0.080594 3 -1.676067 4 0.435804 dtype: float64 In [109]: s1.idxmin(), s1.idxmax() Out[109]: (3, 1) In [110]: df1 = pd.DataFrame(np.random.randn(5,3), columns=['A','B','C']) In [111]: df1 Out[111]: A B C 0 0.445734 -1.649461 0.169660 1 1.246181 0.131682 -2.001988 2 -1.273023 0.870502 0.214583 3 0.088452 -0.173364 1.207466 4 0.546121 0.409515 -0.310515 In [112]: df1.idxmin(axis=0) Out[112]: A 2 B 0 C 1 dtype: int64 In [113]: df1.idxmax(axis=1) Out[113]: 0 A 1 A 2 B 3 C 4 A dtype: object
When there are multiple rows (or columns) matching the minimum or maximum value, idxmin() and idxmax() return the first matching index:
In [114]: df3 = pd.DataFrame([2, 1, 1, 3, np.nan], columns=['A'], index=list('edcba')) In [115]: df3 Out[115]: A e 2.0 d 1.0 c 1.0 b 3.0 a NaN In [116]: df3['A'].idxmin() Out[116]: 'd'
Note
idxmin and idxmax are called argmin and argmax in NumPy.
Value counts (histogramming) / Mode
The value_counts() Series method and top-level function computes a histogram of a 1D array of values. It can also be used as a function on regular arrays:
In [117]: data = np.random.randint(0, 7, size=50) In [118]: data Out[118]: array([5, 3, 2, 2, 1, 4, 0, 4, 0, 2, 0, 6, 4, 1, 6, 3, 3, 0, 2, 1, 0, 5, 5, 3, 6, 1, 5, 6, 2, 0, 0, 6, 3, 3, 5, 0, 4, 3, 3, 3, 0, 6, 1, 3, 5, 5, 0, 4, 0, 6]) In [119]: s = pd.Series(data) In [120]: s.value_counts() Out[120]: 0 11 3 10 6 7 5 7 4 5 2 5 1 5 dtype: int64 In [121]: pd.value_counts(data) Out[121]: 0 11 3 10 6 7 5 7 4 5 2 5 1 5 dtype: int64
Similarly, you can get the most frequently occurring value(s) (the mode) of the values in a Series or DataFrame:
In [122]: s5 = pd.Series([1, 1, 3, 3, 3, 5, 5, 7, 7, 7]) In [123]: s5.mode() Out[123]: 0 3 1 7 dtype: int64 In [124]: df5 = pd.DataFrame({"A": np.random.randint(0, 7, size=50), .....: "B": np.random.randint(-10, 15, size=50)}) .....: In [125]: df5.mode() Out[125]: A B 0 1 -5
Discretization and quantiling
Continuous values can be discretized using the cut() (bins based on values) and qcut() (bins based on sample quantiles) functions:
In [126]: arr = np.random.randn(20) In [127]: factor = pd.cut(arr, 4) In [128]: factor Out[128]: [(-0.645, 0.336], (-2.61, -1.626], (-1.626, -0.645], (-1.626, -0.645], (-1.626, -0.645], ..., (0.336, 1.316], (0.336, 1.316], (0.336, 1.316], (0.336, 1.316], (-2.61, -1.626]] Length: 20 Categories (4, object): [(-2.61, -1.626] < (-1.626, -0.645] < (-0.645, 0.336] < (0.336, 1.316]] In [129]: factor = pd.cut(arr, [-5, -1, 0, 1, 5]) In [130]: factor Out[130]: [(-1, 0], (-5, -1], (-1, 0], (-5, -1], (-1, 0], ..., (0, 1], (1, 5], (0, 1], (0, 1], (-5, -1]] Length: 20 Categories (4, object): [(-5, -1] < (-1, 0] < (0, 1] < (1, 5]]
qcut() computes sample quantiles. For example, we could slice up some normally distributed data into equal-size quartiles like so:
In [131]: arr = np.random.randn(30) In [132]: factor = pd.qcut(arr, [0, .25, .5, .75, 1]) In [133]: factor Out[133]: [(-0.139, 1.00736], (1.00736, 1.976], (1.00736, 1.976], [-1.0705, -0.439], [-1.0705, -0.439], ..., (1.00736, 1.976], [-1.0705, -0.439], (-0.439, -0.139], (-0.439, -0.139], (-0.439, -0.139]] Length: 30 Categories (4, object): [[-1.0705, -0.439] < (-0.439, -0.139] < (-0.139, 1.00736] < (1.00736, 1.976]] In [134]: pd.value_counts(factor) Out[134]: (1.00736, 1.976] 8 [-1.0705, -0.439] 8 (-0.139, 1.00736] 7 (-0.439, -0.139] 7 dtype: int64
We can also pass infinite values to define the bins:
In [135]: arr = np.random.randn(20) In [136]: factor = pd.cut(arr, [-np.inf, 0, np.inf]) In [137]: factor Out[137]: [(-inf, 0], (0, inf], (0, inf], (0, inf], (-inf, 0], ..., (-inf, 0], (0, inf], (-inf, 0], (-inf, 0], (0, inf]] Length: 20 Categories (2, object): [(-inf, 0] < (0, inf]]
Function application
To apply your own or another library's functions to pandas objects, you should be aware of the three methods below. The appropriate method to use depends on whether your function expects to operate on an entire DataFrame or Series, row- or column-wise, or elementwise.
- Tablewise Function Application: pipe()
- Row or Column-wise Function Application: apply()
- Elementwise function application: applymap()
Tablewise Function Application
New in version 0.16.2.
DataFrames and Series can of course just be passed into functions. However, if the function needs to be called in a chain, consider using the pipe() method. Compare the following
# f, g, and h are functions taking and returning ``DataFrames``
>>> f(g(h(df), arg1=1), arg2=2, arg3=3)
with the equivalent
>>> (df.pipe(h)
...    .pipe(g, arg1=1)
...    .pipe(f, arg2=2, arg3=3)
... )
Pandas encourages the second style, which is known as method chaining. pipe makes it easy to use your own or another library's functions in method chains, alongside pandas' methods.
In the example above, the functions f, g, and h each expected the DataFrame as the first positional argument. What if the function you wish to apply takes its data as, say, the second argument? In this case, provide pipe with a tuple of (callable, data_keyword). .pipe will route the DataFrame to the argument specified in the tuple.
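As a minimal, self-contained sketch (the function adder and its data keyword are purely illustrative):

def adder(value, data):
    # expects the DataFrame as the keyword argument ``data``
    return data + value

>>> df.pipe((adder, 'data'), 10)   # routes df to ``data``; equivalent to adder(10, data=df)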
For example, we can fit a regression using statsmodels. Their API expects a formula first and a DataFrame as the second argument, data. We pass in the function, keyword pair (sm.poisson, 'data') to pipe:
In [138]: import statsmodels.formula.api as sm In [139]: bb = pd.read_csv('data/baseball.csv', index_col='id') In [140]: (bb.query('h > 0') .....: .assign(ln_h = lambda df: np.log(df.h)) .....: .pipe((sm.poisson, 'data'), 'hr ~ ln_h + year + g + C(lg)') .....: .fit() .....: .summary() .....: ) .....: Optimization terminated successfully. Current function value: 2.116284 Iterations 24 Out[140]: <class 'statsmodels.iolib.summary.Summary'> """ Poisson Regression Results ============================================================================== Dep. Variable: hr No. Observations: 68 Model: Poisson Df Residuals: 63 Method: MLE Df Model: 4 Date: Sat, 24 Dec 2016 Pseudo R-squ.: 0.6878 Time: 18:31:33 Log-Likelihood: -143.91 converged: True LL-Null: -460.91 LLR p-value: 6.774e-136 =============================================================================== coef std err z P>|z| [95.0% Conf. Int.] ------------------------------------------------------------------------------- Intercept -1267.3636 457.867 -2.768 0.006 -2164.767 -369.960 C(lg)[T.NL] -0.2057 0.101 -2.044 0.041 -0.403 -0.008 ln_h 0.9280 0.191 4.866 0.000 0.554 1.302 year 0.6301 0.228 2.762 0.006 0.183 1.077 g 0.0099 0.004 2.754 0.006 0.003 0.017 =============================================================================== """
The pipe method is inspired by unix pipes and more recently dplyr and magrittr, which have introduced the popular (%>%) (read pipe) operator for R. The implementation of pipe here is quite clean and feels right at home in python. We encourage you to view the source code (pd.DataFrame.pipe?? in IPython).
Row or Column-wise Function Application
Arbitrary functions can be applied along the axes of a DataFrame or Panel using the apply() method, which, like the descriptive statistics methods, takes an optional axis argument:
In [141]: df.apply(np.mean) Out[141]: one -0.251274 three 0.469799 two -0.191421 dtype: float64 In [142]: df.apply(np.mean, axis=1) Out[142]: a -0.489066 b 0.273355 c 0.008348 d 0.011457 dtype: float64 In [143]: df.apply(lambda x: x.max() - x.min()) Out[143]: one 0.638161 three 1.301762 two 2.237808 dtype: float64 In [144]: df.apply(np.cumsum) Out[144]: one three two a -0.626544 NaN -0.351587 b -0.765438 -0.177289 0.784662 c -0.753821 0.284925 0.335874 d NaN 1.409398 -0.765684 In [145]: df.apply(np.exp) Out[145]: one three two a 0.534436 NaN 0.703570 b 0.870320 0.837537 3.115063 c 1.011685 1.587586 0.638401 d NaN 3.078592 0.332353
Depending on the return type of the function passed to apply(), the result will either be of lower dimension or the same dimension.

apply() combined with some cleverness can be used to answer many questions about a data set. For example, suppose we wanted to extract the date where the maximum value for each column occurred:
In [146]: tsdf = pd.DataFrame(np.random.randn(1000, 3), columns=['A', 'B', 'C'], .....: index=pd.date_range('1/1/2000', periods=1000)) .....: In [147]: tsdf.apply(lambda x: x.idxmax()) Out[147]: A 2001-04-27 B 2002-06-02 C 2000-04-02 dtype: datetime64[ns]
You may also pass additional arguments and keyword arguments to the apply() method. For instance, consider the following function you would like to apply:
def subtract_and_divide(x, sub, divide=1):
    return (x - sub) / divide
You may then apply this function as follows:
df.apply(subtract_and_divide, args=(5,), divide=3)
Another useful feature is the ability to pass Series methods to carry out some Series operation on each column or row:
In [148]: tsdf Out[148]: A B C 2000-01-01 1.796883 -0.930690 3.542846 2000-01-02 -1.242888 -0.695279 -1.000884 2000-01-03 -0.720299 0.546303 -0.082042 2000-01-04 NaN NaN NaN 2000-01-05 NaN NaN NaN 2000-01-06 NaN NaN NaN 2000-01-07 NaN NaN NaN 2000-01-08 -0.527402 0.933507 0.129646 2000-01-09 -0.338903 -1.265452 -1.969004 2000-01-10 0.532566 0.341548 0.150493 In [149]: tsdf.apply(pd.Series.interpolate) Out[149]: A B C 2000-01-01 1.796883 -0.930690 3.542846 2000-01-02 -1.242888 -0.695279 -1.000884 2000-01-03 -0.720299 0.546303 -0.082042 2000-01-04 -0.681720 0.623743 -0.039704 2000-01-05 -0.643140 0.701184 0.002633 2000-01-06 -0.604561 0.778625 0.044971 2000-01-07 -0.565982 0.856066 0.087309 2000-01-08 -0.527402 0.933507 0.129646 2000-01-09 -0.338903 -1.265452 -1.969004 2000-01-10 0.532566 0.341548 0.150493
Finally, apply() takes an argument raw which is False by default, which converts each row or column into a Series before applying the function. When set to True, the passed function will instead receive an ndarray object, which has positive performance implications if you do not need the indexing functionality.
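For example, a function written purely in terms of ndarray operations can skip the Series conversion (a brief sketch; note that raw ndarray methods do not skip NaN the way Series methods do):

>>> df.apply(lambda x: x.max() - x.min(), raw=True)   # x is an ndarray here, not a Series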
See also
The section on GroupBy demonstrates related, flexible functionality for grouping by some criterion, applying, and combining the results into a Series, DataFrame, etc.
Applying elementwise Python functions
Since not all functions can be vectorized (accept NumPy arrays and return another array or value), the methods applymap() on DataFrame and analogously map() on Series accept any Python function taking a single value and returning a single value. For example:
In [150]: df4 Out[150]: one three two a -0.626544 NaN -0.351587 b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789 d NaN 1.124472 -1.101558 In [151]: f = lambda x: len(str(x)) In [152]: df4['one'].map(f) Out[152]: a 14 b 15 c 15 d 3 Name: one, dtype: int64 In [153]: df4.applymap(f) Out[153]: one three two a 14 3 15 b 15 15 11 c 15 14 15 d 3 13 14
Series.map() has an additional feature: it can be used to easily 'link' or 'map' values defined by a secondary series. This is closely related to merging/joining functionality:
In [154]: s = pd.Series(['six', 'seven', 'six', 'seven', 'six'], .....: index=['a', 'b', 'c', 'd', 'e']) .....: In [155]: t = pd.Series({'six' : 6., 'seven' : 7.}) In [156]: s Out[156]: a six b seven c six d seven e six dtype: object In [157]: s.map(t) Out[157]: a 6.0 b 7.0 c 6.0 d 7.0 e 6.0 dtype: float64
Applying with a Panel

Applying with a Panel will pass a Series to the applied function. If the applied function returns a Series, the result of the application will be a Panel. If the applied function reduces to a scalar, the result of the application will be a DataFrame.
Note
Prior to 0.13.1 apply on a Panel would only work on ufuncs (e.g. np.sum/np.max).
In [158]: import pandas.util.testing as tm In [159]: panel = tm.makePanel(5) In [160]: panel Out[160]: <class 'pandas.core.panel.Panel'> Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis) Items axis: ItemA to ItemC Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00 Minor_axis axis: A to D In [161]: panel['ItemA'] Out[161]: A B C D 2000-01-03 0.330418 1.893177 0.801111 0.528154 2000-01-04 1.761200 0.170247 0.445614 -0.029371 2000-01-05 0.567133 -0.916844 1.453046 -0.631117 2000-01-06 -0.251020 0.835024 2.430373 -0.172441 2000-01-07 1.020099 1.259919 0.653093 -1.020485
A transformational apply.
In [162]: result = panel.apply(lambda x: x*2, axis='items') In [163]: result Out[163]: <class 'pandas.core.panel.Panel'> Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis) Items axis: ItemA to ItemC Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00 Minor_axis axis: A to D In [164]: result['ItemA'] Out[164]: A B C D 2000-01-03 0.660836 3.786354 1.602222 1.056308 2000-01-04 3.522400 0.340494 0.891228 -0.058742 2000-01-05 1.134266 -1.833689 2.906092 -1.262234 2000-01-06 -0.502039 1.670047 4.860747 -0.344882 2000-01-07 2.040199 2.519838 1.306185 -2.040969
A reduction operation.
In [165]: panel.apply(lambda x: x.dtype, axis='items') Out[165]: A B C D 2000-01-03 float64 float64 float64 float64 2000-01-04 float64 float64 float64 float64 2000-01-05 float64 float64 float64 float64 2000-01-06 float64 float64 float64 float64 2000-01-07 float64 float64 float64 float64
A similar reduction type operation:
In [166]: panel.apply(lambda x: x.sum(), axis='major_axis') Out[166]: ItemA ItemB ItemC A 3.427831 -2.581431 0.840809 B 3.241522 -1.409935 -1.114512 C 5.783237 0.319672 -0.431906 D -1.325260 -2.914834 0.857043
This last reduction is equivalent to
In [167]: panel.sum('major_axis') Out[167]: ItemA ItemB ItemC A 3.427831 -2.581431 0.840809 B 3.241522 -1.409935 -1.114512 C 5.783237 0.319672 -0.431906 D -1.325260 -2.914834 0.857043
A transformation operation that returns a Panel, but is computing the z-score across the major_axis.
In [168]: result = panel.apply( .....: lambda x: (x-x.mean())/x.std(), .....: axis='major_axis') .....: In [169]: result Out[169]: <class 'pandas.core.panel.Panel'> Dimensions: 3 (items) x 5 (major_axis) x 4 (minor_axis) Items axis: ItemA to ItemC Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00 Minor_axis axis: A to D In [170]: result['ItemA'] Out[170]: A B C D 2000-01-03 -0.469761 1.156225 -0.441347 1.341731 2000-01-04 1.422763 -0.444015 -0.882647 0.398661 2000-01-05 -0.156654 -1.453694 0.367936 -0.619210 2000-01-06 -1.238841 0.173423 1.581149 0.156654 2000-01-07 0.442494 0.568061 -0.625091 -1.277837
Apply can also accept multiple axes in the axis argument. This will pass a DataFrame of the cross-section to the applied function.
In [171]: f = lambda x: ((x.T-x.mean(1))/x.std(1)).T In [172]: result = panel.apply(f, axis = ['items','major_axis']) In [173]: result Out[173]: <class 'pandas.core.panel.Panel'> Dimensions: 4 (items) x 5 (major_axis) x 3 (minor_axis) Items axis: A to D Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00 Minor_axis axis: ItemA to ItemC In [174]: result.loc[:,:,'ItemA'] Out[174]: A B C D 2000-01-03 0.864236 1.132969 0.557316 0.575106 2000-01-04 0.795745 0.652527 0.534808 -0.070674 2000-01-05 -0.310864 0.558627 1.086688 -1.051477 2000-01-06 -0.001065 0.832460 0.846006 0.043602 2000-01-07 1.128946 1.152469 -0.218186 -0.891680
This is equivalent to the following
In [175]: result = pd.Panel(dict([ (ax, f(panel.loc[:,:,ax])) .....: for ax in panel.minor_axis ])) .....: In [176]: result Out[176]: <class 'pandas.core.panel.Panel'> Dimensions: 4 (items) x 5 (major_axis) x 3 (minor_axis) Items axis: A to D Major_axis axis: 2000-01-03 00:00:00 to 2000-01-07 00:00:00 Minor_axis axis: ItemA to ItemC In [177]: result.loc[:,:,'ItemA'] Out[177]: A B C D 2000-01-03 0.864236 1.132969 0.557316 0.575106 2000-01-04 0.795745 0.652527 0.534808 -0.070674 2000-01-05 -0.310864 0.558627 1.086688 -1.051477 2000-01-06 -0.001065 0.832460 0.846006 0.043602 2000-01-07 1.128946 1.152469 -0.218186 -0.891680
Reindexing and altering labels
reindex() is the fundamental data alignment method in pandas. It is used to implement nearly all other features relying on label-alignment functionality. To reindex means to conform the data to match a given set of labels along a particular axis. This accomplishes several things:
- Reorders the existing data to match a new set of labels
- Inserts missing value (NA) markers in label locations where no data for that label existed
- If specified, fill data for missing labels using logic (highly relevant to working with time series data)
Here is a simple example:
In [178]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e']) In [179]: s Out[179]: a -1.010924 b -0.672504 c -1.139222 d 0.354653 e 0.563622 dtype: float64 In [180]: s.reindex(['e', 'b', 'f', 'd']) Out[180]: e 0.563622 b -0.672504 f NaN d 0.354653 dtype: float64
Here, the f label was not contained in the Series and hence appears as NaN in the result.
With a DataFrame, you can simultaneously reindex the index and columns:
In [181]: df Out[181]: one three two a -0.626544 NaN -0.351587 b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789 d NaN 1.124472 -1.101558 In [182]: df.reindex(index=['c', 'f', 'b'], columns=['three', 'two', 'one']) Out[182]: three two one c 0.462215 -0.448789 0.011617 f NaN NaN NaN b -0.177289 1.136249 -0.138894
For convenience, you may utilize the reindex_axis() method, which takes the labels and a keyword axis parameter.
Note that the Index objects containing the actual axis labels can be shared between objects. So if we have a Series and a DataFrame, the following can be done:
In [183]: rs = s.reindex(df.index) In [184]: rs Out[184]: a -1.010924 b -0.672504 c -1.139222 d 0.354653 dtype: float64 In [185]: rs.index is df.index Out[185]: True
This means that the reindexed Series's index is the same Python object as the DataFrame's index.
See also
MultiIndex / Advanced Indexing is an even more concise way of doing reindexing.
Note
When writing performance-sensitive code, there is a good reason to spend some time becoming a reindexing ninja: many operations are faster on pre-aligned data. Adding two unaligned DataFrames internally triggers a reindexing step. For exploratory analysis you will hardly notice the difference (because reindex has been heavily optimized), but when CPU cycles matter sprinkling a few explicit reindex calls here and there can have an impact.
Reindexing to align with another object
You may wish to take an object and reindex its axes to be labeled the same as another object. While the syntax for this is straightforward albeit verbose, it is a common enough operation that the reindex_like() method is available to make this simpler:
In [186]: df2 Out[186]: one two a -0.626544 -0.351587 b -0.138894 1.136249 c 0.011617 -0.448789 In [187]: df3 Out[187]: one two a -0.375270 -0.463545 b 0.112379 1.024292 c 0.262891 -0.560746 In [188]: df.reindex_like(df2) Out[188]: one two a -0.626544 -0.351587 b -0.138894 1.136249 c 0.011617 -0.448789
Aligning objects with each other with align
The align() method is the fastest way to simultaneously align two objects. It supports a join argument (related to joining and merging):

- join='outer': take the union of the indexes (default)
- join='left': use the calling object's index
- join='right': use the passed object's index
- join='inner': intersect the indexes
It returns a tuple with both of the reindexed Series:
In [189]: s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e']) In [190]: s1 = s[:4] In [191]: s2 = s[1:] In [192]: s1.align(s2) Out[192]: (a -0.365106 b 1.092702 c -1.481449 d 1.781190 e NaN dtype: float64, a NaN b 1.092702 c -1.481449 d 1.781190 e -0.031543 dtype: float64) In [193]: s1.align(s2, join='inner') Out[193]: (b 1.092702 c -1.481449 d 1.781190 dtype: float64, b 1.092702 c -1.481449 d 1.781190 dtype: float64) In [194]: s1.align(s2, join='left') Out[194]: (a -0.365106 b 1.092702 c -1.481449 d 1.781190 dtype: float64, a NaN b 1.092702 c -1.481449 d 1.781190 dtype: float64)
For DataFrames, the join method will be applied to both the index and the columns by default:
In [195]: df.align(df2, join='inner') Out[195]: ( one two a -0.626544 -0.351587 b -0.138894 1.136249 c 0.011617 -0.448789, one two a -0.626544 -0.351587 b -0.138894 1.136249 c 0.011617 -0.448789)
You can also pass an axis option to only align on the specified axis:
In [196]: df.align(df2, join='inner', axis=0) Out[196]: ( one three two a -0.626544 NaN -0.351587 b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789, one two a -0.626544 -0.351587 b -0.138894 1.136249 c 0.011617 -0.448789)
If you pass a Series to DataFrame.align(), you can choose to align both objects either on the DataFrame's index or columns using the axis argument:
In [197]: df.align(df2.ix[0], axis=1) Out[197]: ( one three two a -0.626544 NaN -0.351587 b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789 d NaN 1.124472 -1.101558, one -0.626544 three NaN two -0.351587 Name: a, dtype: float64)
Filling while reindexing
reindex() takes an optional parameter method which is a filling method chosen from the following table:
Method | Action |
---|---|
pad / ffill | Fill values forward |
bfill / backfill | Fill values backward |
nearest | Fill from the nearest index value |
We illustrate these fill methods on a simple Series:
In [198]: rng = pd.date_range('1/3/2000', periods=8) In [199]: ts = pd.Series(np.random.randn(8), index=rng) In [200]: ts2 = ts[[0, 3, 6]] In [201]: ts Out[201]: 2000-01-03 0.480993 2000-01-04 0.604244 2000-01-05 -0.487265 2000-01-06 1.990533 2000-01-07 0.327007 2000-01-08 1.053639 2000-01-09 -2.927808 2000-01-10 0.082065 Freq: D, dtype: float64 In [202]: ts2 Out[202]: 2000-01-03 0.480993 2000-01-06 1.990533 2000-01-09 -2.927808 dtype: float64 In [203]: ts2.reindex(ts.index) Out[203]: 2000-01-03 0.480993 2000-01-04 NaN 2000-01-05 NaN 2000-01-06 1.990533 2000-01-07 NaN 2000-01-08 NaN 2000-01-09 -2.927808 2000-01-10 NaN Freq: D, dtype: float64 In [204]: ts2.reindex(ts.index, method='ffill') Out[204]: 2000-01-03 0.480993 2000-01-04 0.480993 2000-01-05 0.480993 2000-01-06 1.990533 2000-01-07 1.990533 2000-01-08 1.990533 2000-01-09 -2.927808 2000-01-10 -2.927808 Freq: D, dtype: float64 In [205]: ts2.reindex(ts.index, method='bfill') Out[205]: 2000-01-03 0.480993 2000-01-04 1.990533 2000-01-05 1.990533 2000-01-06 1.990533 2000-01-07 -2.927808 2000-01-08 -2.927808 2000-01-09 -2.927808 2000-01-10 NaN Freq: D, dtype: float64 In [206]: ts2.reindex(ts.index, method='nearest') Out[206]: 2000-01-03 0.480993 2000-01-04 0.480993 2000-01-05 1.990533 2000-01-06 1.990533 2000-01-07 1.990533 2000-01-08 -2.927808 2000-01-09 -2.927808 2000-01-10 -2.927808 Freq: D, dtype: float64
These methods require that the indexes are ordered increasing or decreasing.
Note that the same result could have been achieved using fillna (except for method='nearest') or interpolate:
In [207]: ts2.reindex(ts.index).fillna(method='ffill') Out[207]: 2000-01-03 0.480993 2000-01-04 0.480993 2000-01-05 0.480993 2000-01-06 1.990533 2000-01-07 1.990533 2000-01-08 1.990533 2000-01-09 -2.927808 2000-01-10 -2.927808 Freq: D, dtype: float64
reindex() will raise a ValueError if the index is not monotonic increasing or decreasing. fillna() and interpolate() will not make any checks on the order of the index.
Limits on filling while reindexing
The limit and tolerance arguments provide additional control over filling while reindexing. Limit specifies the maximum count of consecutive matches:
In [208]: ts2.reindex(ts.index, method='ffill', limit=1) Out[208]: 2000-01-03 0.480993 2000-01-04 0.480993 2000-01-05 NaN 2000-01-06 1.990533 2000-01-07 1.990533 2000-01-08 NaN 2000-01-09 -2.927808 2000-01-10 -2.927808 Freq: D, dtype: float64
In contrast, tolerance specifies the maximum distance between the index and indexer values:
In [209]: ts2.reindex(ts.index, method='ffill', tolerance='1 day') Out[209]: 2000-01-03 0.480993 2000-01-04 0.480993 2000-01-05 NaN 2000-01-06 1.990533 2000-01-07 1.990533 2000-01-08 NaN 2000-01-09 -2.927808 2000-01-10 -2.927808 Freq: D, dtype: float64
Notice that when used on a DatetimeIndex, TimedeltaIndex or PeriodIndex, tolerance will be coerced into a Timedelta if possible. This allows you to specify tolerance with appropriate strings.
Dropping labels from an axis
A method closely related to reindex is the drop() function. It removes a set of labels from an axis:
In [210]: df Out[210]: one three two a -0.626544 NaN -0.351587 b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789 d NaN 1.124472 -1.101558 In [211]: df.drop(['a', 'd'], axis=0) Out[211]: one three two b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789 In [212]: df.drop(['one'], axis=1) Out[212]: three two a NaN -0.351587 b -0.177289 1.136249 c 0.462215 -0.448789 d 1.124472 -1.101558
Note that the following also works, but is a bit less obvious / clean:
In [213]: df.reindex(df.index.difference(['a', 'd'])) Out[213]: one three two b -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789
Renaming / mapping labels
The rename() method allows you to relabel an axis based on some mapping (a dict or Series) or an arbitrary function.
In [214]: s Out[214]: a -0.365106 b 1.092702 c -1.481449 d 1.781190 e -0.031543 dtype: float64 In [215]: s.rename(str.upper) Out[215]: A -0.365106 B 1.092702 C -1.481449 D 1.781190 E -0.031543 dtype: float64
If you pass a function, it must return a value when called with any of the labels (and must produce a set of unique values). A dict or Series can also be used:
In [216]: df.rename(columns={'one' : 'foo', 'two' : 'bar'}, .....: index={'a' : 'apple', 'b' : 'banana', 'd' : 'durian'}) .....: Out[216]: foo three bar apple -0.626544 NaN -0.351587 banana -0.138894 -0.177289 1.136249 c 0.011617 0.462215 -0.448789 durian NaN 1.124472 -1.101558
If the mapping doesn't include a column/index label, it isn't renamed. Also extra labels in the mapping don't throw an error.
The rename() method also provides an inplace named parameter that is by default False and copies the underlying data. Pass inplace=True to rename the data in place.
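For example (a brief sketch):

>>> df.rename(columns={'one': 'foo'}, inplace=True)   # modifies df directly and returns None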
New in version 0.18.0.
Finally, rename() also accepts a scalar or list-like for altering the Series.name attribute.
In [217]: s.rename("scalar-name") Out[217]: a -0.365106 b 1.092702 c -1.481449 d 1.781190 e -0.031543 Name: scalar-name, dtype: float64
The Panel class has a related rename_axis() method which can rename any of its three axes.
Iteration
The behavior of basic iteration over pandas objects depends on the type. When iterating over a Series, it is regarded as array-like, and basic iteration produces the values. Other data structures, like DataFrame and Panel, follow the dict-like convention of iterating over the 'keys' of the objects.
In short, basic iteration (for i in object) produces:
- Series: values
- DataFrame: column labels
- Panel: item labels
Thus, for example, iterating over a DataFrame gives you the column names:
In [218]: df = pd.DataFrame({'col1' : np.random.randn(3), 'col2' : np.random.randn(3)}, .....: index=['a', 'b', 'c']) .....: In [219]: for col in df: .....: print(col) .....: col1 col2
Pandas objects also have the dict-like iteritems() method to iterate over the (key, value) pairs.
To iterate over the rows of a DataFrame, you can use the following methods:
- iterrows(): Iterate over the rows of a DataFrame as (index, Series) pairs. This converts the rows to Series objects, which can change the dtypes and has some performance implications.
- itertuples(): Iterate over the rows of a DataFrame as namedtuples of the values. This is a lot faster than iterrows(), and is in most cases preferable to use to iterate over the values of a DataFrame.
Warning
Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed and can be avoided with one of the following approaches:
- Look for a vectorized solution: many operations can be performed using built-in methods or numpy functions, (boolean) indexing, ...
- When you have a function that cannot work on the full DataFrame/Series at once, it is better to use apply() instead of iterating over the values. See the docs on function application.
- If you need to do iterative manipulations on the values but performance is important, consider writing the inner loop using e.g. cython or numba. See the enhancing performance section for some examples of this approach.
Warning
You should never modify something you are iterating over. This is not guaranteed to work in all cases. Depending on the data types, the iterator returns a copy and not a view, and writing to it will have no effect!
For example, in the following case setting the value has no effect:
In [220]: df = pd.DataFrame({'a': [1, 2, 3], 'b': ['a', 'b', 'c']}) In [221]: for index, row in df.iterrows(): .....: row['a'] = 10 .....: In [222]: df Out[222]: a b 0 1 a 1 2 b 2 3 c
iteritems

Consistent with the dict-like interface, iteritems() iterates through key-value pairs:
- Series: (index, scalar value) pairs
- DataFrame: (column, Series) pairs
- Panel: (item, DataFrame) pairs
For example:
In [223]: for item, frame in wp.iteritems(): .....: print(item) .....: print(frame) .....: Item1 A B C D 2000-01-01 -1.032011 0.969818 -0.962723 1.382083 2000-01-02 -0.938794 0.669142 -0.433567 -0.273610 2000-01-03 0.680433 -0.308450 -0.276099 -1.821168 2000-01-04 -1.993606 -1.927385 -2.027924 1.624972 2000-01-05 0.551135 3.059267 0.455264 -0.030740 Item2 A B C D 2000-01-01 0.935716 1.061192 -2.107852 0.199905 2000-01-02 0.323586 -0.641630 -0.587514 0.053897 2000-01-03 0.194889 -0.381994 0.318587 2.089075 2000-01-04 -0.728293 -0.090255 -0.748199 1.318931 2000-01-05 -2.029766 0.792652 0.461007 -0.542749
iterrows

iterrows() allows you to iterate through the rows of a DataFrame as Series objects. It returns an iterator yielding each index value along with a Series containing the data in each row:
In [224]: for row_index, row in df.iterrows(): .....: print('%s\n%s' % (row_index, row)) .....: 0 a 1 b a Name: 0, dtype: object 1 a 2 b b Name: 1, dtype: object 2 a 3 b c Name: 2, dtype: object
Note
Because iterrows() returns a Series for each row, it does not preserve dtypes across the rows (dtypes are preserved across columns for DataFrames). For example,
In [225]: df_orig = pd.DataFrame([[1, 1.5]], columns=['int', 'float']) In [226]: df_orig.dtypes Out[226]: int int64 float float64 dtype: object In [227]: row = next(df_orig.iterrows())[1] In [228]: row Out[228]: int 1.0 float 1.5 Name: 0, dtype: float64
All values in row, returned as a Series, are now upcast to floats, including the original integer value in column 'int':
In [229]: row['int'].dtype Out[229]: dtype('float64') In [230]: df_orig['int'].dtype Out[230]: dtype('int64')
To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns namedtuples of the values and which is generally much faster than iterrows().
For instance, a contrived way to transpose the DataFrame would be:
In [231]: df2 = pd.DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]}) In [232]: print(df2) x y 0 1 4 1 2 5 2 3 6 In [233]: print(df2.T) 0 1 2 x 1 2 3 y 4 5 6 In [234]: df2_t = pd.DataFrame(dict((idx,values) for idx, values in df2.iterrows())) In [235]: print(df2_t) 0 1 2 x 1 2 3 y 4 5 6
itertuples

The itertuples() method will return an iterator yielding a namedtuple for each row in the DataFrame. The first element of the tuple will be the row's corresponding index value, while the remaining values are the row values.
For instance,
In [236]: for row in df.itertuples(): .....: print(row) .....: Pandas(Index=0, a=1, b='a') Pandas(Index=1, a=2, b='b') Pandas(Index=2, a=3, b='c')
This method does not convert the row to a Series object but just returns the values inside a namedtuple. Therefore, itertuples() preserves the data type of the values and is generally faster than iterrows().
Note
The column names will be renamed to positional names if they are invalid Python identifiers, repeated, or start with an underscore. With a large number of columns (>255), regular tuples are returned.
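For example, a column name that is not a valid Python identifier is replaced with a positional name (a brief sketch):

>>> for row in pd.DataFrame({'a b': [1, 2]}).itertuples():
...     print(row)
Pandas(Index=0, _1=1)
Pandas(Index=1, _1=2)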
.dt accessor

Series has an accessor to succinctly return datetime-like properties for the values of the Series, if it is a datetime/period-like Series. This will return a Series, indexed like the existing Series.
# datetime In [237]: s = pd.Series(pd.date_range('20130101 09:10:12', periods=4)) In [238]: s Out[238]: 0 2013-01-01 09:10:12 1 2013-01-02 09:10:12 2 2013-01-03 09:10:12 3 2013-01-04 09:10:12 dtype: datetime64[ns] In [239]: s.dt.hour Out[239]: 0 9 1 9 2 9 3 9 dtype: int64 In [240]: s.dt.second Out[240]: 0 12 1 12 2 12 3 12 dtype: int64 In [241]: s.dt.day Out[241]: 0 1 1 2 2 3 3 4 dtype: int64
This enables nice expressions like this:
In [242]: s[s.dt.day==2] Out[242]: 1 2013-01-02 09:10:12 dtype: datetime64[ns]
You can easily produce tz-aware transformations:
In [243]: stz = s.dt.tz_localize('US/Eastern') In [244]: stz Out[244]: 0 2013-01-01 09:10:12-05:00 1 2013-01-02 09:10:12-05:00 2 2013-01-03 09:10:12-05:00 3 2013-01-04 09:10:12-05:00 dtype: datetime64[ns, US/Eastern] In [245]: stz.dt.tz Out[245]: <DstTzInfo 'US/Eastern' LMT-1 day, 19:04:00 STD>
You can also chain these types of operations:
In [246]: s.dt.tz_localize('UTC').dt.tz_convert('US/Eastern') Out[246]: 0 2013-01-01 04:10:12-05:00 1 2013-01-02 04:10:12-05:00 2 2013-01-03 04:10:12-05:00 3 2013-01-04 04:10:12-05:00 dtype: datetime64[ns, US/Eastern]
You can also format datetime values as strings with Series.dt.strftime() which supports the same format as the standard strftime().
# DatetimeIndex In [247]: s = pd.Series(pd.date_range('20130101', periods=4)) In [248]: s Out[248]: 0 2013-01-01 1 2013-01-02 2 2013-01-03 3 2013-01-04 dtype: datetime64[ns] In [249]: s.dt.strftime('%Y/%m/%d') Out[249]: 0 2013/01/01 1 2013/01/02 2 2013/01/03 3 2013/01/04 dtype: object
# PeriodIndex In [250]: s = pd.Series(pd.period_range('20130101', periods=4)) In [251]: s Out[251]: 0 2013-01-01 1 2013-01-02 2 2013-01-03 3 2013-01-04 dtype: object In [252]: s.dt.strftime('%Y/%m/%d') Out[252]: 0 2013/01/01 1 2013/01/02 2 2013/01/03 3 2013/01/04 dtype: object
The .dt accessor works for period and timedelta dtypes.
# period In [253]: s = pd.Series(pd.period_range('20130101', periods=4, freq='D')) In [254]: s Out[254]: 0 2013-01-01 1 2013-01-02 2 2013-01-03 3 2013-01-04 dtype: object In [255]: s.dt.year Out[255]: 0 2013 1 2013 2 2013 3 2013 dtype: int64 In [256]: s.dt.day Out[256]: 0 1 1 2 2 3 3 4 dtype: int64
# timedelta In [257]: s = pd.Series(pd.timedelta_range('1 day 00:00:05', periods=4, freq='s')) In [258]: s Out[258]: 0 1 days 00:00:05 1 1 days 00:00:06 2 1 days 00:00:07 3 1 days 00:00:08 dtype: timedelta64[ns] In [259]: s.dt.days Out[259]: 0 1 1 1 2 1 3 1 dtype: int64 In [260]: s.dt.seconds Out[260]: 0 5 1 6 2 7 3 8 dtype: int64 In [261]: s.dt.components Out[261]: days hours minutes seconds milliseconds microseconds nanoseconds 0 1 0 0 5 0 0 0 1 1 0 0 6 0 0 0 2 1 0 0 7 0 0 0 3 1 0 0 8 0 0 0
Note
Series.dt will raise a TypeError if you access it with non-datetime-like values.
Vectorized string methods
Series is equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the Series's str attribute and generally have names matching the equivalent (scalar) built-in string methods. For example:
In [262]: s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat']) In [263]: s.str.lower() Out[263]: 0 a 1 b 2 c 3 aaba 4 baca 5 NaN 6 caba 7 dog 8 cat dtype: object
Powerful pattern-matching methods are provided as well, but note that pattern-matching generally uses regular expressions by default (and in some cases always uses them).
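As a brief sketch (not from the original docs; s2 is a hypothetical Series), compare the default regex interpretation with a literal match via regex=False:
s2 = pd.Series(['a.b', 'acb', np.nan])
s2.str.contains('a.b')               # regex by default: '.' matches any character
s2.str.contains('a.b', regex=False)  # literal match: only the first element matches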
Please see Vectorized String Methods for a complete description.
Sorting
Warning
The sorting API was substantially changed in 0.17.0; see here for those changes. In particular, all sorting methods now return a new object by default, and DO NOT operate in-place (except by passing inplace=True
).
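A minimal sketch (not from the original docs) of this return-a-new-object behavior:
s_demo = pd.Series([3, 1, 2])
s_sorted = s_demo.sort_values()   # a new, sorted Series; s_demo is unchanged
s_demo.sort_values(inplace=True)  # sorts s_demo itself and returns None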
There are two obvious kinds of sorting that you may be interested in: sorting by label and sorting by actual values.
By Index
The primary methods for sorting axis labels (indexes) are Series.sort_index() and DataFrame.sort_index().
In [264]: unsorted_df = df.reindex(index=['a', 'd', 'c', 'b'], .....: columns=['three', 'two', 'one']) .....: # DataFrame In [265]: unsorted_df.sort_index() Out[265]: three two one a NaN NaN NaN b NaN NaN NaN c NaN NaN NaN d NaN NaN NaN In [266]: unsorted_df.sort_index(ascending=False) Out[266]: three two one d NaN NaN NaN c NaN NaN NaN b NaN NaN NaN a NaN NaN NaN In [267]: unsorted_df.sort_index(axis=1) Out[267]: one three two a NaN NaN NaN d NaN NaN NaN c NaN NaN NaN b NaN NaN NaN # Series In [268]: unsorted_df['three'].sort_index() Out[268]: a NaN b NaN c NaN d NaN Name: three, dtype: float64
By Values
The Series.sort_values()
and DataFrame.sort_values()
methods are the entry points for sorting by values (that is, the values in a column or row). DataFrame.sort_values()
can accept an optional by
argument for axis=0
which will use an arbitrary vector or a column name of the DataFrame to determine the sort order:
In [269]: df1 = pd.DataFrame({'one':[2,1,1,1],'two':[1,3,2,4],'three':[5,4,3,2]}) In [270]: df1.sort_values(by='two') Out[270]: one three two 0 2 5 1 2 1 3 2 1 1 4 3 3 1 2 4
The by
argument can take a list of column names, e.g.:
In [271]: df1[['one', 'two', 'three']].sort_values(by=['one','two']) Out[271]: one two three 2 1 2 3 1 1 3 4 3 1 4 2 0 2 1 5
These methods have special treatment of NA values via the na_position
argument:
In [272]: s[2] = np.nan In [273]: s.sort_values() Out[273]: 0 A 3 Aaba 1 B 4 Baca 6 CABA 8 cat 7 dog 2 NaN 5 NaN dtype: object In [274]: s.sort_values(na_position='first') Out[274]: 2 NaN 5 NaN 0 A 3 Aaba 1 B 4 Baca 6 CABA 8 cat 7 dog dtype: object
searchsorted
Series has the searchsorted()
method, which works similarly to numpy.ndarray.searchsorted()
.
In [275]: ser = pd.Series([1, 2, 3]) In [276]: ser.searchsorted([0, 3]) Out[276]: array([0, 2]) In [277]: ser.searchsorted([0, 4]) Out[277]: array([0, 3]) In [278]: ser.searchsorted([1, 3], side='right') Out[278]: array([1, 3]) In [279]: ser.searchsorted([1, 3], side='left') Out[279]: array([0, 2]) In [280]: ser = pd.Series([3, 1, 2]) In [281]: ser.searchsorted([0, 3], sorter=np.argsort(ser)) Out[281]: array([0, 2])
smallest / largest values
New in version 0.14.0.
Series
has the nsmallest()
and nlargest()
methods, which return the smallest or largest n values. For a large Series
this can be much faster than sorting the entire Series and calling head(n)
on the result.
In [282]: s = pd.Series(np.random.permutation(10)) In [283]: s Out[283]: 0 9 1 8 2 5 3 3 4 6 5 7 6 0 7 2 8 4 9 1 dtype: int64 In [284]: s.sort_values() Out[284]: 6 0 9 1 7 2 3 3 8 4 2 5 4 6 5 7 1 8 0 9 dtype: int64 In [285]: s.nsmallest(3) Out[285]: 6 0 9 1 7 2 dtype: int64 In [286]: s.nlargest(3) Out[286]: 0 9 1 8 5 7 dtype: int64
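To get a feel for the speed difference, here is a rough timing sketch (not from the original docs; results are machine-dependent):
import timeit
big = pd.Series(np.random.randn(10**6))
timeit.timeit(lambda: big.nlargest(5), number=10)            # partial selection
timeit.timeit(lambda: big.sort_values().head(5), number=10)  # full sort, typically slower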
New in version 0.17.0.
DataFrame
also has the nlargest
and nsmallest
methods.
In [287]: df = pd.DataFrame({'a': [-2, -1, 1, 10, 8, 11, -1], .....: 'b': list('abdceff'), .....: 'c': [1.0, 2.0, 4.0, 3.2, np.nan, 3.0, 4.0]}) .....: In [288]: df.nlargest(3, 'a') Out[288]: a b c 5 11 f 3.0 3 10 c 3.2 4 8 e NaN In [289]: df.nlargest(5, ['a', 'c']) Out[289]: a b c 5 11 f 3.0 3 10 c 3.2 4 8 e NaN 2 1 d 4.0 1 -1 b 2.0 6 -1 f 4.0 In [290]: df.nsmallest(3, 'a') Out[290]: a b c 0 -2 a 1.0 1 -1 b 2.0 6 -1 f 4.0 In [291]: df.nsmallest(5, ['a', 'c']) Out[291]: a b c 0 -2 a 1.0 1 -1 b 2.0 6 -1 f 4.0 2 1 d 4.0 4 8 e NaN
Sorting by a multi-index column
You must be explicit about sorting when the column is a multi-index, and fully specify all levels to by
.
In [292]: df1.columns = pd.MultiIndex.from_tuples([('a','one'),('a','two'),('b','three')]) In [293]: df1.sort_values(by=('a','two')) Out[293]: a b one two three 3 1 2 4 2 1 3 2 1 1 4 3 0 2 5 1
Copying
The copy()
method on pandas objects copies the underlying data (though not the axis indexes, since they are immutable) and returns a new object. Note that it is seldom necessary to copy objects. For example, there are only a handful of ways to alter a DataFrame in-place:
- Inserting, deleting, or modifying a column
- Assigning to the index or columns attributes
- For homogeneous data, directly modifying the values via the values attribute or advanced indexing
To be clear, no pandas methods have the side effect of modifying your data; almost all methods return new objects, leaving the original object untouched. If data is modified, it is because you did so explicitly.
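A minimal sketch (not from the original docs; df_orig and df_copy are hypothetical names) showing that a copy is independent of the original:
df_orig = pd.DataFrame({'x': [1, 2, 3]})
df_copy = df_orig.copy()
df_copy['x'] = 0   # modifying the copy...
df_orig            # ...leaves the original untouched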
dtypes
The main types stored in pandas objects are float
, int
, bool
, datetime64[ns]
and datetime64[ns, tz]
(in >= 0.17.0), timedelta64[ns]
, category
(in >= 0.15.0), and object
. In addition, these dtypes have item sizes, e.g. int64
and int32
. See Series with TZ for more detail on datetime64[ns, tz]
dtypes.
A convenient dtypes
attribute for DataFrames returns a Series with the data type of each column.
In [294]: dft = pd.DataFrame(dict(A = np.random.rand(3), .....: B = 1, .....: C = 'foo', .....: D = pd.Timestamp('20010102'), .....: E = pd.Series([1.0]*3).astype('float32'), .....: F = False, .....: G = pd.Series([1]*3,dtype='int8'))) .....: In [295]: dft Out[295]: A B C D E F G 0 0.954940 1 foo 2001-01-02 1.0 False 1 1 0.318163 1 foo 2001-01-02 1.0 False 1 2 0.985803 1 foo 2001-01-02 1.0 False 1 In [296]: dft.dtypes Out[296]: A float64 B int64 C object D datetime64[ns] E float32 F bool G int8 dtype: object
On a Series
use the dtype
attribute.
In [297]: dft['A'].dtype Out[297]: dtype('float64')
If a pandas object contains data with multiple dtypes IN A SINGLE COLUMN, the dtype of the column will be chosen to accommodate all of the data types (object
is the most general).
# these ints are coerced to floats In [298]: pd.Series([1, 2, 3, 4, 5, 6.]) Out[298]: 0 1.0 1 2.0 2 3.0 3 4.0 4 5.0 5 6.0 dtype: float64 # string data forces an ``object`` dtype In [299]: pd.Series([1, 2, 3, 6., 'foo']) Out[299]: 0 1 1 2 2 3 3 6 4 foo dtype: object
The method get_dtype_counts()
will return the number of columns of each type in a DataFrame
:
In [300]: dft.get_dtype_counts() Out[300]: bool 1 datetime64[ns] 1 float32 1 float64 1 int64 1 int8 1 object 1 dtype: int64
Numeric dtypes will propagate and can coexist in DataFrames (starting in v0.11.0). If a dtype is passed (either directly via the dtype
keyword, a passed ndarray
, or a passed Series
), then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will NOT be combined. The following example will give you a taste.
In [301]: df1 = pd.DataFrame(np.random.randn(8, 1), columns=['A'], dtype='float32') In [302]: df1 Out[302]: A 0 0.647650 1 0.822993 2 1.778703 3 -1.543048 4 -0.123256 5 2.239740 6 -0.143778 7 -2.885090 In [303]: df1.dtypes Out[303]: A float32 dtype: object In [304]: df2 = pd.DataFrame(dict( A = pd.Series(np.random.randn(8), dtype='float16'), .....: B = pd.Series(np.random.randn(8)), .....: C = pd.Series(np.array(np.random.randn(8), dtype='uint8')) )) .....: In [305]: df2 Out[305]: A B C 0 0.027588 0.296947 0 1 -1.150391 0.007045 255 2 0.246460 0.707877 1 3 -0.455078 0.950661 0 4 -1.507812 0.087527 0 5 -0.502441 -0.339212 0 6 0.528809 -0.278698 0 7 0.590332 1.775379 0 In [306]: df2.dtypes Out[306]: A float16 B float64 C uint8 dtype: object
defaults
By default integer types are int64
and float types are float64
, REGARDLESS of platform (32-bit or 64-bit). The following will all result in int64
dtypes.
In [307]: pd.DataFrame([1, 2], columns=['a']).dtypes Out[307]: a int64 dtype: object In [308]: pd.DataFrame({'a': [1, 2]}).dtypes Out[308]: a int64 dtype: object In [309]: pd.DataFrame({'a': 1 }, index=list(range(2))).dtypes Out[309]: a int64 dtype: object
NumPy, however, will choose platform-dependent types when creating arrays. The following WILL result in int32 on a 32-bit platform.
In [310]: frame = pd.DataFrame(np.array([1, 2]))
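A quick way to see the difference (a sketch, not from the original docs):
frame = pd.DataFrame(np.array([1, 2]))
frame.dtypes                  # int32 on a 32-bit platform, int64 on a 64-bit one
pd.DataFrame([1, 2]).dtypes   # always int64, regardless of platform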
upcasting
Types can potentially be upcast when combined with other types, meaning they are promoted from the current type (say int to float).
In [311]: df3 = df1.reindex_like(df2).fillna(value=0.0) + df2 In [312]: df3 Out[312]: A B C 0 0.675238 0.296947 0.0 1 -0.327398 0.007045 255.0 2 2.025163 0.707877 1.0 3 -1.998126 0.950661 0.0 4 -1.631068 0.087527 0.0 5 1.737299 -0.339212 0.0 6 0.385030 -0.278698 0.0 7 -2.294758 1.775379 0.0 In [313]: df3.dtypes Out[313]: A float32 B float64 C float64 dtype: object
The values
attribute on a DataFrame returns the lowest common denominator of the dtypes, meaning the dtype that can accommodate ALL of the types in the resulting homogeneously-dtyped numpy array. This can force some upcasting.
In [314]: df3.values.dtype Out[314]: dtype('float64')
astype
You can use the astype()
method to explicitly convert dtypes from one to another. These will by default return a copy, even if the dtype was unchanged (pass copy=False
to change this behavior). In addition, they will raise an exception if the astype operation is invalid.
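For example, a brief sketch (not from the original docs) of the exception raised by an invalid conversion:
try:
    pd.Series(['1', '2', 'apple']).astype('float64')
except ValueError as err:
    print(err)   # 'apple' cannot be parsed as a float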
Upcasting is always according to the numpy rules. If two different dtypes are involved in an operation, then the more general one will be used as the result of the operation.
In [315]: df3 Out[315]: A B C 0 0.675238 0.296947 0.0 1 -0.327398 0.007045 255.0 2 2.025163 0.707877 1.0 3 -1.998126 0.950661 0.0 4 -1.631068 0.087527 0.0 5 1.737299 -0.339212 0.0 6 0.385030 -0.278698 0.0 7 -2.294758 1.775379 0.0 In [316]: df3.dtypes Out[316]: A float32 B float64 C float64 dtype: object # conversion of dtypes In [317]: df3.astype('float32').dtypes Out[317]: A float32 B float32 C float32 dtype: object
Convert a subset of columns to a specified type using astype():
In [318]: dft = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [7, 8, 9]}) In [319]: dft[['a','b']] = dft[['a','b']].astype(np.uint8) In [320]: dft Out[320]: a b c 0 1 4 7 1 2 5 8 2 3 6 9 In [321]: dft.dtypes Out[321]: a uint8 b uint8 c int64 dtype: object
Note
When trying to convert a subset of columns to a specified type using astype()
and loc()
, upcasting occurs.
loc()
tries to fit what we are assigning into the current dtypes, while [] will overwrite them, taking the dtype from the right-hand side. Therefore, the following piece of code produces the unintended result.
In [322]: dft = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [7, 8, 9]}) In [323]: dft.loc[:, ['a', 'b']].astype(np.uint8).dtypes Out[323]: a uint8 b uint8 dtype: object In [324]: dft.loc[:, ['a', 'b']] = dft.loc[:, ['a', 'b']].astype(np.uint8) In [325]: dft.dtypes Out[325]: a int64 b int64 c int64 dtype: object
object conversion
pandas offers various functions to try to force conversion of types from the object
dtype to other types. The following functions are available for one dimensional object arrays or scalars:
- to_numeric() (conversion to numeric dtypes)
In [326]: m = ['1.1', 2, 3] In [327]: pd.to_numeric(m) Out[327]: array([ 1.1, 2. , 3. ])
- to_datetime() (conversion to datetime objects)
In [328]: import datetime In [329]: m = ['2016-07-09', datetime.datetime(2016, 3, 2)] In [330]: pd.to_datetime(m) Out[330]: DatetimeIndex(['2016-07-09', '2016-03-02'], dtype='datetime64[ns]', freq=None)
- to_timedelta() (conversion to timedelta objects)
In [331]: m = ['5us', pd.Timedelta('1day')] In [332]: pd.to_timedelta(m) Out[332]: TimedeltaIndex(['0 days 00:00:00.000005', '1 days 00:00:00'], dtype='timedelta64[ns]', freq=None)
To force a conversion, we can pass in an errors
argument, which specifies how pandas should deal with elements that cannot be converted to the desired dtype or object. By default, errors='raise'
, meaning that any errors encountered will be raised during the conversion process. However, if errors='coerce'
, these errors will be ignored and pandas will convert problematic elements to pd.NaT
(for datetime and timedelta) or np.nan
(for numeric). This might be useful if you are reading in data which is mostly of the desired dtype (e.g. numeric, datetime), but occasionally has non-conforming elements intermixed that you want to represent as missing:
In [333]: import datetime In [334]: m = ['apple', datetime.datetime(2016, 3, 2)] In [335]: pd.to_datetime(m, errors='coerce') Out[335]: DatetimeIndex(['NaT', '2016-03-02'], dtype='datetime64[ns]', freq=None) In [336]: m = ['apple', 2, 3] In [337]: pd.to_numeric(m, errors='coerce') Out[337]: array([ nan, 2., 3.]) In [338]: m = ['apple', pd.Timedelta('1day')] In [339]: pd.to_timedelta(m, errors='coerce') Out[339]: TimedeltaIndex([NaT, '1 days'], dtype='timedelta64[ns]', freq=None)
The errors
parameter has a third option of errors='ignore'
, which will simply return the passed-in data if it encounters any errors with the conversion to a desired data type:
In [340]: import datetime In [341]: m = ['apple', datetime.datetime(2016, 3, 2)] In [342]: pd.to_datetime(m, errors='ignore') Out[342]: array(['apple', datetime.datetime(2016, 3, 2, 0, 0)], dtype=object) In [343]: m = ['apple', 2, 3] In [344]: pd.to_numeric(m, errors='ignore') Out[344]: array(['apple', 2, 3], dtype=object) In [345]: m = ['apple', pd.Timedelta('1day')] In [346]: pd.to_timedelta(m, errors='ignore') Out[346]: array(['apple', Timedelta('1 days 00:00:00')], dtype=object)
In addition to object conversion, to_numeric()
provides another argument downcast
, which gives the option of downcasting the newly (or already) numeric data to a smaller dtype, which can conserve memory:
In [347]: m = ['1', 2, 3] In [348]: pd.to_numeric(m, downcast='integer') # smallest signed int dtype Out[348]: array([1, 2, 3], dtype=int8) In [349]: pd.to_numeric(m, downcast='signed') # same as 'integer' Out[349]: array([1, 2, 3], dtype=int8) In [350]: pd.to_numeric(m, downcast='unsigned') # smallest unsigned int dtype Out[350]: array([1, 2, 3], dtype=uint8) In [351]: pd.to_numeric(m, downcast='float') # smallest float dtype Out[351]: array([ 1., 2., 3.], dtype=float32)
As these methods apply only to one-dimensional arrays, lists, or scalars, they cannot be used directly on multi-dimensional objects such as DataFrames. However, with apply(), we can "apply" the function over each column efficiently:
In [352]: import datetime In [353]: df = pd.DataFrame([['2016-07-09', datetime.datetime(2016, 3, 2)]] * 2, dtype='O') In [354]: df Out[354]: 0 1 0 2016-07-09 2016-03-02 00:00:00 1 2016-07-09 2016-03-02 00:00:00 In [355]: df.apply(pd.to_datetime) Out[355]: 0 1 0 2016-07-09 2016-03-02 1 2016-07-09 2016-03-02 In [356]: df = pd.DataFrame([['1.1', 2, 3]] * 2, dtype='O') In [357]: df Out[357]: 0 1 2 0 1.1 2 3 1 1.1 2 3 In [358]: df.apply(pd.to_numeric) Out[358]: 0 1 2 0 1.1 2 3 1 1.1 2 3 In [359]: df = pd.DataFrame([['5us', pd.Timedelta('1day')]] * 2, dtype='O') In [360]: df Out[360]: 0 1 0 5us 1 days 00:00:00 1 5us 1 days 00:00:00 In [361]: df.apply(pd.to_timedelta) Out[361]: 0 1 0 00:00:00.000005 1 days 1 00:00:00.000005 1 days
gotchas
Performing selection operations on integer type data can easily upcast the data to floating point. The dtype of the input data will be preserved in cases where NaNs are not introduced (starting in 0.11.0). See also integer NA gotchas.
In [362]: dfi = df3.astype('int32') In [363]: dfi['E'] = 1 In [364]: dfi Out[364]: A B C E 0 0 0 0 1 1 0 0 255 1 2 2 0 1 1 3 -1 0 0 1 4 -1 0 0 1 5 1 0 0 1 6 0 0 0 1 7 -2 1 0 1 In [365]: dfi.dtypes Out[365]: A int32 B int32 C int32 E int64 dtype: object In [366]: casted = dfi[dfi>0] In [367]: casted Out[367]: A B C E 0 NaN NaN NaN 1 1 NaN NaN 255.0 1 2 2.0 NaN 1.0 1 3 NaN NaN NaN 1 4 NaN NaN NaN 1 5 1.0 NaN NaN 1 6 NaN NaN NaN 1 7 NaN 1.0 NaN 1 In [368]: casted.dtypes Out[368]: A float64 B float64 C float64 E int64 dtype: object
Float dtypes, on the other hand, are unchanged.
In [369]: dfa = df3.copy() In [370]: dfa['A'] = dfa['A'].astype('float32') In [371]: dfa.dtypes Out[371]: A float32 B float64 C float64 dtype: object In [372]: casted = dfa[df2>0] In [373]: casted Out[373]: A B C 0 0.675238 0.296947 NaN 1 NaN 0.007045 255.0 2 2.025163 0.707877 1.0 3 NaN 0.950661 NaN 4 NaN 0.087527 NaN 5 NaN NaN NaN 6 0.385030 NaN NaN 7 -2.294758 1.775379 NaN In [374]: casted.dtypes Out[374]: A float32 B float64 C float64 dtype: object
Selecting columns based on dtype
New in version 0.14.1.
The select_dtypes()
method implements subsetting of columns based on their dtype
.
First, let's create a DataFrame
with a slew of different dtypes:
In [375]: df = pd.DataFrame({'string': list('abc'), .....: 'int64': list(range(1, 4)), .....: 'uint8': np.arange(3, 6).astype('u1'), .....: 'float64': np.arange(4.0, 7.0), .....: 'bool1': [True, False, True], .....: 'bool2': [False, True, False], .....: 'dates': pd.date_range('now', periods=3).values, .....: 'category': pd.Series(list("ABC")).astype('category')}) .....: In [376]: df['tdeltas'] = df.dates.diff() In [377]: df['uint64'] = np.arange(3, 6).astype('u8') In [378]: df['other_dates'] = pd.date_range('20130101', periods=3).values In [379]: df['tz_aware_dates'] = pd.date_range('20130101', periods=3, tz='US/Eastern') In [380]: df Out[380]: bool1 bool2 category dates float64 int64 string \ 0 True False A 2016-12-24 18:31:36.297875 4.0 1 a 1 False True B 2016-12-25 18:31:36.297875 5.0 2 b 2 True False C 2016-12-26 18:31:36.297875 6.0 3 c uint8 tdeltas uint64 other_dates tz_aware_dates 0 3 NaT 3 2013-01-01 2013-01-01 00:00:00-05:00 1 4 1 days 4 2013-01-02 2013-01-02 00:00:00-05:00 2 5 1 days 5 2013-01-03 2013-01-03 00:00:00-05:00
And the dtypes:
In [381]: df.dtypes Out[381]: bool1 bool bool2 bool category category dates datetime64[ns] float64 float64 int64 int64 string object uint8 uint8 tdeltas timedelta64[ns] uint64 uint64 other_dates datetime64[ns] tz_aware_dates datetime64[ns, US/Eastern] dtype: object
select_dtypes()
has two parameters include
and exclude
that allow you to say "give me the columns WITH these dtypes" (include) and/or "give the columns WITHOUT these dtypes" (exclude).
For example, to select bool
columns:
In [382]: df.select_dtypes(include=[bool]) Out[382]: bool1 bool2 0 True False 1 False True 2 True False
You can also pass the name of a dtype in the numpy dtype hierarchy:
In [383]: df.select_dtypes(include=['bool']) Out[383]: bool1 bool2 0 True False 1 False True 2 True False
select_dtypes()
also works with generic dtypes.
For example, to select all numeric and boolean columns while excluding unsigned integers:
In [384]: df.select_dtypes(include=['number', 'bool'], exclude=['unsignedinteger']) Out[384]: bool1 bool2 float64 int64 tdeltas 0 True False 4.0 1 NaT 1 False True 5.0 2 1 days 2 True False 6.0 3 1 days
To select string columns, you must use the object
dtype:
In [385]: df.select_dtypes(include=['object']) Out[385]: string 0 a 1 b 2 c
To see all the child dtypes of a generic dtype
like numpy.number
you can define a function that returns a tree of child dtypes:
In [386]: def subdtypes(dtype): .....: subs = dtype.__subclasses__() .....: if not subs: .....: return dtype .....: return [dtype, [subdtypes(dt) for dt in subs]] .....:
All numpy dtypes are subclasses of numpy.generic
:
In [387]: subdtypes(np.generic) Out[387]: [numpy.generic, [[numpy.number, [[numpy.integer, [[numpy.signedinteger, [numpy.int8, numpy.int16, numpy.int32, numpy.int64, numpy.int64, numpy.timedelta64]], [numpy.unsignedinteger, [numpy.uint8, numpy.uint16, numpy.uint32, numpy.uint64, numpy.uint64]]]], [numpy.inexact, [[numpy.floating, [numpy.float16, numpy.float32, numpy.float64, numpy.float128]], [numpy.complexfloating, [numpy.complex64, numpy.complex128, numpy.complex256]]]]]], [numpy.flexible, [[numpy.character, [numpy.string_, numpy.unicode_]], [numpy.void, [numpy.record]]]], numpy.bool_, numpy.datetime64, numpy.object_]]
Note
Pandas also defines the types category and datetime64[ns, tz], which are not integrated into the normal numpy hierarchy and won't show up with the above function.
Note
The include
and exclude
parameters must be non-string sequences.
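For example, a minimal sketch (not from the original docs) of the error raised when a bare string is passed in this version of pandas:
try:
    df.select_dtypes(include='number')   # a string, not a sequence
except TypeError as err:
    print(err)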