What is an efficient way in pandas to do summaryBy(..., full.dimension=T)?



With doBy package in R, we do a summary on group and get results in the same shape and order as the original data:

> require(doBy)
> df <- data.frame(
          first = c('bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'),
          second = c('one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'), 
          data = c(-0.424972, 0.567020, 0.276232, -1.087401, -0.673690, 0.113648, -1.478427, 0.524988))
> df
  first second      data
1   bar    one -0.424972
2   bar    two  0.567020
3   baz    one  0.276232
4   baz    two -1.087401
5   foo    one -0.673690
6   foo    two  0.113648
7   qux    one -1.478427
8   qux    two  0.524988
> df['data.sum'] = summaryBy(data~first, data=df, FUN=sum, full.dimension=T)['data.sum']
> df
  first second      data  data.sum
1   bar    one -0.424972  0.142048
2   bar    two  0.567020  0.142048
3   baz    one  0.276232 -0.811169
4   baz    two -1.087401 -0.811169
5   foo    one -0.673690 -0.560042
6   foo    two  0.113648 -0.560042
7   qux    one -1.478427 -0.953439
8   qux    two  0.524988 -0.953439

Is there a way to do the same in pandas when the DataFrame is grouped by one level of a MultiIndex?

>>> from pandas import DataFrame
>>> df = DataFrame({ 
                 'first': ['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
                 'second': ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'],
                 'data': [-0.424972, 0.567020, 0.276232, -1.087401, -0.673690, 0.113648, -1.478427, 0.524988] })
>>> df = df.set_index(['first', 'second'])
>>> s = df.groupby(level='first')['data'].sum()
>>> df.join(s, on='first', rsuffix='.sum')

KeyError: 'no item named first'
2012-04-03 23:59
by LmW.
I figured out that it can be done with:

`df['data.sum'] = s.reindex(df.index.get_level_values('first')).values`
- LmW. 2012-04-07 03:44
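For reference, a minimal runnable sketch of that workaround on the question's data (variable names follow the snippets above; this is only an illustration, not part of the original comment):

import pandas as pd

df = pd.DataFrame({
    'first': ['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
    'second': ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'],
    'data': [-0.424972, 0.567020, 0.276232, -1.087401,
             -0.673690, 0.113648, -1.478427, 0.524988],
}).set_index(['first', 'second'])

# One sum per 'first' group.
s = df.groupby(level='first')['data'].sum()

# Broadcast the group sums back to one value per original row: reindex s with
# the 'first' label of every row, then take the raw values to skip alignment.
df['data.sum'] = s.reindex(df.index.get_level_values('first')).values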


Answer (2 votes)

How about:

import numpy as np
df['data.sum'] = df.groupby('first')['data'].transform(np.sum)

You can also pass as_index=False to groupby to get more R-like behavior when aggregating (or call reset_index on the result).

2012-04-07 20:24
by Wes McKinney
Cool! I would never have known this trick. I need to use df['data.sum'] = df.groupby(level='first')['data'].transform(np.sum) if 'first' is already part of the index. Thanks - LmW. 2012-04-08 00:44
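Putting that comment's fix together, a minimal sketch on the question's indexed frame (an illustration only, assuming the same data as above):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'first': ['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
    'second': ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'],
    'data': [-0.424972, 0.567020, 0.276232, -1.087401,
             -0.673690, 0.113648, -1.478427, 0.524988],
}).set_index(['first', 'second'])

# transform returns a result aligned with the original rows, so each group's
# sum is repeated for every row of that group: the same shape that
# summaryBy(..., full.dimension=T) produces in R.
df['data.sum'] = df.groupby(level='first')['data'].transform(np.sum)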


Answer (0 votes)

How about:

from pandas import DataFrame
df = DataFrame({ 
    'first': ['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
    'second': ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'],
    'data': [-0.424972, 0.567020, 0.276232, -1.087401, -0.673690, 0.113648, -1.478427, 0.524988] 
})

df2 = df.join(df.groupby("first").sum().rename(columns={"data":"sum_data"}), 
              on="first")
2012-04-04 01:15
by HYRY
Thanks, this does work. But I really want to have the index in place, if at all possible - LmW. 2012-04-04 01:34
I think you received a KeyError because df was indexed before joining, thus 'first' was no longer a column to join "on". To solve this, can you set the index after joining? - Garrett 2012-04-04 16:29
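A minimal sketch of that suggestion, using the question's data (only the numeric column is summed so the string column 'second' does not collide on the join; this is an illustration, not Garrett's exact code):

import pandas as pd

df = pd.DataFrame({
    'first': ['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
    'second': ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two'],
    'data': [-0.424972, 0.567020, 0.276232, -1.087401,
             -0.673690, 0.113648, -1.478427, 0.524988],
})

# Sum only the 'data' column per group, then attach it to every matching row.
sums = df.groupby('first')[['data']].sum().rename(columns={'data': 'sum_data'})
df2 = df.join(sums, on='first')            # 'first' is still a column here
df2 = df2.set_index(['first', 'second'])   # restore the MultiIndex afterwards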