Using pandas 1.0.5:
import pandas as pd
data = [
    [1, 1, '2020-01-01', 'McDonalds', '94521', 'US', '5812'],
    [2, 1, '2020-01-02', 'Burger King', '94521', 'US', '5812'],
    [3, 1, '2020-01-03', 'Taco Bell', '94521', 'US', '5812'],
    [4, 2, '2020-02-01', 'Arbys', '94522', 'US', '5812'],
    [5, 2, '2020-02-02', 'Carls Jr', '94522', 'US', '5812'],
    [6, 2, '2020-02-03', 'Jack In The Box', '94522', 'US', '5812'],
    [7, 1, '2020-03-01', 'McDonalds', '94521', 'US', '5812'],
    [8, 1, '2020-03-02', 'Burger King', '94521', 'US', '5812'],
    [9, 1, '2020-03-03', 'Taco Bell', '94521', 'US', '5812'],
    [10, 1, '2020-03-04', 'Taco Bell', '94521', 'US', '5812'],
]
d = pd.DataFrame(data, columns=['id', 'card_id', 'trandt', 'merchant', 'zipcode', 'country', 'mcc'])
# running count of prior occurrences of each (card_id, <column>) pair
d['merchant_count'] = d.groupby(['card_id', 'merchant']).cumcount()
d['zipcode_count'] = d.groupby(['card_id', 'zipcode']).cumcount()
d['country_count'] = d.groupby(['card_id', 'country']).cumcount()
d['mcc_count'] = d.groupby(['card_id', 'mcc']).cumcount()
# ...and so on, for roughly 100 such columns
The real data has 50 million rows, and I will run about 100 such groupbys to add 100 columns. Every groupby uses card_id as its first key and a different column as its second, as shown in this example. The code above works, but it is inefficient because it groups by the same card_id over and over.
Is there any way to make this more efficient? Any way to avoid grouping by card_id repeatedly?
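One possible direction, offered as a rough sketch under the assumption that the repeated factorizing/hashing of card_id dominates the cost (group_cumcount below is a hypothetical helper, not a pandas API): factorize card_id a single time, then for each secondary column build one combined integer key and compute the running count with NumPy.

import numpy as np
import pandas as pd

def group_cumcount(codes):
    # vectorized equivalent of groupby(codes).cumcount() for an integer key array
    order = np.argsort(codes, kind='stable')   # stable sort keeps original order within each group
    sorted_codes = codes[order]
    starts = np.flatnonzero(np.r_[True, sorted_codes[1:] != sorted_codes[:-1]])
    lengths = np.diff(np.r_[starts, len(codes)])
    within = np.arange(len(codes)) - np.repeat(starts, lengths)
    out = np.empty(len(codes), dtype=np.int64)
    out[order] = within                        # scatter counts back to the original row order
    return out

# factorize card_id once instead of inside every groupby
card_codes, _ = pd.factorize(d['card_id'])
card_codes = card_codes.astype(np.int64)

for col in ['merchant', 'zipcode', 'country', 'mcc']:  # extend to all ~100 columns
    col_codes, col_uniques = pd.factorize(d[col])
    # combine (card_id, col) into a single collision-free integer key
    key = card_codes * len(col_uniques) + col_codes
    d[f'{col}_count'] = group_cumcount(key)

Whether this actually beats pandas' hash-based groupby at 50 million rows would need benchmarking; the concrete saving is that card_id is factorized once rather than ~100 times.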
Question from: https://stackoverflow.com/questions/65557395/pandas-groupby-efficiency-and-avoiding-repetitive-operations