Hi @wonsun - I believe the typical practice when processing large datasets is pagination: splitting the data up into chunks.
Without knowing much more about what you are doing, or how the data is retrieved, I think it would be a good idea to split the 1 million data points into some sort of data structure / dataframe in chunks, and process them one at a time (per chunk, not per data point).
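For example, if the data happens to live in a CSV (an assumption on my part - `data.csv`, the `value` column, and `CHUNK_SIZE` below are just placeholders), pandas can hand you one chunk at a time instead of loading all 1 million rows into memory at once. A rough sketch:

```python
import pandas as pd

CHUNK_SIZE = 100_000  # hypothetical chunk size; tune to your memory budget

def process_chunk(chunk: pd.DataFrame) -> float:
    # placeholder: swap in whatever per-chunk work you actually need
    return chunk["value"].sum()

# chunksize= makes read_csv yield one DataFrame per chunk of rows,
# so only CHUNK_SIZE rows are held in memory at a time
results = []
for chunk in pd.read_csv("data.csv", chunksize=CHUNK_SIZE):
    results.append(process_chunk(chunk))

total = sum(results)
```

If the data comes from an API or database instead, the same idea applies - pull a page/batch at a time and process each one before fetching the next.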