# prefect-community
Adam:
Hello friends, I’m running a `BigQueryTask` on Prefect Cloud and it failed with the error “No heartbeat detected from the remote task; marking the run as failed”. Was this the Zombie Killer? It worked fine locally. How can I ensure this long-running task completes?
nicholas:
Hi @Adam - it’s possible your remote task is running into resource constraints in your environment; you have two immediate options: one is to give it more resources, the other is to turn off heartbeats for the flow. You can do the latter through the Settings tab of any flow in the UI.
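The heartbeat mechanic behind that error can be sketched in plain Python. This is a hypothetical stand-in, not Prefect's actual implementation: a background thread "reports alive" at a fixed interval while the task runs; if the process is starved of CPU/memory or killed, the beats stop and the server eventually marks the run as failed.

```python
import threading
import time

def run_with_heartbeat(task, interval=0.05):
    """Run `task` while a background thread emits heartbeats.

    Hypothetical sketch: Prefect's real heartbeat reports to the Cloud API;
    here we just append timestamps to a list so the pattern is visible.
    """
    beats = []
    done = threading.Event()

    def heartbeat():
        while not done.is_set():
            beats.append(time.monotonic())  # stand-in for "tell the server I'm alive"
            done.wait(interval)

    t = threading.Thread(target=heartbeat, daemon=True)
    t.start()
    try:
        return task(), beats
    finally:
        done.set()
        t.join()

# A long-running task keeps heartbeating, so the server won't treat it as a zombie.
# If this process were frozen or OOM-killed, no beats would arrive and the server
# would see exactly the "No heartbeat detected" situation from the error above.
result, beats = run_with_heartbeat(lambda: time.sleep(0.3) or "ok")
```

Note that turning off heartbeats (as suggested above) simply removes this liveness check, so a genuinely dead run would stay in a Running state until something else fails it.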
Adam:
“remote task is running into resource constraints” - is that possible given it’s just a `BigQueryTask` (with no data being returned, since I use `table_dest` etc.)?
nicholas:
Sure, it's always possible; I'm not certain that's exactly the cause, but typically the times when a task is unable to report back to the Server are those where it has either capped CPU/memory or has died altogether due to something else.
This is especially the case if you're running in an environment with shared resources (like local Dask or something).
If you're confident you're not hitting resource constraints, try turning off heartbeats for the flow and see how it fares.
Adam:
Yeah, definitely not hitting resource constraints for that task. I’m not fully aware of the internals of Google’s BigQuery client, but the docs seem to indicate that if I set the destination table, the results are not returned. So there shouldn’t be any CPU/memory impact on my side, right?
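Adam's reading matches the general BigQuery pattern: when a query writes to a destination table, the rows stay server-side and the client only gets back a small job reference. A stdlib-only sketch of that behavior (the `run_query` function and its return shape are hypothetical, not the real client API):

```python
def run_query(sql, rows_on_server, destination=None):
    """Pretend to execute `sql` against a warehouse.

    With `destination` set (analogous to BigQueryTask's `table_dest`),
    results are written server-side and only a job reference comes back;
    without it, every row is materialized in the client process.
    """
    if destination is not None:
        # Results land in the destination table; nothing is shipped to the client,
        # so the task's own CPU/memory footprint stays tiny.
        return {"job": "job-123", "destination": destination, "rows": None}
    # No destination: the client pulls all rows into local memory.
    return {"job": "job-123", "destination": None, "rows": list(rows_on_server)}

server_rows = [{"id": i} for i in range(1000)]
job = run_query("SELECT ...", server_rows, destination="my_dataset.my_table")
# job["rows"] is None: the task process holds no result data.
```

Under that model, the query itself shouldn't starve the task's container; any heartbeat failure would more likely come from the surrounding process or environment.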
nicholas:
It doesn't sound like it, but I'm not really familiar with BigQuery either, so I'm not sure what's going on under the hood.
Adam:
@nicholas is there any reason why it should run differently locally (with `flow.run()`) compared to `KubernetesJobEnvironment`? Locally, that task completes in a few seconds, but I can’t seem to get it to complete on Prefect Cloud.