Hey, I haven't tested this, but I don't think it's a good approach. You'd effectively be sending your serialized dataframe to the Prefect server, which (probably) shouldn't be burdened with big payloads.
Instead, serialize your dataframe and upload it somewhere accessible to both of your flows, like AWS S3 or some other file store. Your first flow then passes the file id (or key) to the second flow, and the second flow downloads the file and reads it back into a dataframe using that id. A rough sketch of what I mean is below.
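
Something like this, maybe (a minimal sketch, not tested — assumes Prefect 2.x, pandas with s3fs installed so `s3://` paths work, and a placeholder bucket name):

```python
import uuid

import pandas as pd
from prefect import flow

# Hypothetical bucket accessible to both flows
BUCKET = "my-shared-flow-data"


@flow
def producer_flow() -> str:
    df = pd.DataFrame({"a": [1, 2, 3]})
    # Write the dataframe to S3 instead of passing it around;
    # only the small key string travels between flows.
    key = f"s3://{BUCKET}/{uuid.uuid4()}.parquet"
    df.to_parquet(key)
    return key


@flow
def consumer_flow(key: str) -> None:
    # Re-read the dataframe from S3 using the key passed by the first flow
    df = pd.read_parquet(key)
    print(df.head())


if __name__ == "__main__":
    key = producer_flow()
    consumer_flow(key)
```

That way the Prefect server only ever sees the key string, and the actual data stays in S3.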