Kartik Ullal
12/20/2023, 2:39 AM

Nate
12/20/2023, 3:10 AM
pd.read_csv(io.BytesIO(blob_bytes))

If it can't fit into memory, you could do something like:
• read it a `chunksize` at a time (sketched below)
• load it to BQ right away and use SQL to do the transformations there
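A minimal sketch of the chunked-read option, assuming the bytes come from a google-cloud-storage Blob; the bucket/object names and the `process()` helper are hypothetical:
```python
import io

import pandas as pd
from google.cloud import storage  # assumption: google-cloud-storage is installed

# Hypothetical bucket and object names, for illustration only.
client = storage.Client()
blob = client.bucket("my_bucket").blob("my_file.csv")

# Small file: pull the bytes and parse them in one go.
blob_bytes = blob.download_as_bytes()
df = pd.read_csv(io.BytesIO(blob_bytes))

# Larger file: stream the blob and parse a chunksize (rows) at a time,
# so only one chunk's worth of rows is held as a dataframe.
with blob.open("rb") as f:
    for chunk in pd.read_csv(f, chunksize=100_000):
        process(chunk)  # hypothetical per-chunk transformation
```
The BQ route would instead point a load job at the gs:// URI and do the transformations in SQL, since BigQuery can load CSVs directly from GCS.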
Kevin Grismore
12/20/2023, 3:45 AM
Sean Davis
01/26/2024, 1:49 PM
With a file at gs://my_bucket/my_file.csv, you can read it into a pandas dataframe using:
df = pandas.read_csv("gs://my_bucket/my_file.csv")
There is no need to download the file(s) or to read them into a separate place in memory.
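One note on the direct gs:// read: pandas hands gs:// URLs to gcsfs (via fsspec), so that package needs to be installed and able to pick up credentials (e.g. application default credentials). A short sketch, reusing the same hypothetical path:
```python
# Assumes gcsfs is installed; pandas uses it (via fsspec) for gs:// URLs.
import pandas as pd

df = pd.read_csv("gs://my_bucket/my_file.csv")

# The same path also supports chunked reads for files too big for one dataframe.
for chunk in pd.read_csv("gs://my_bucket/my_file.csv", chunksize=100_000):
    ...
```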