@Marvin I have deployed an example python scr...
# ask-marvin
s
@Marvin I have deployed an example python script on prefect cloud by executing the python script with .serve() method. It runs perfectly and I can see the execution. But if I exit the terminal locally, the python script also stops executing on the cloud and the status becomes 'Not Ready'
m
thought for 403.1 seconds
Short version: `.serve()` is a local, long-running process. When you close your terminal, that process dies, so your served deployment goes "Not Ready" in Cloud. Nothing is broken: serve is meant for development or lightweight use, not production.

What "Not Ready" means
- "Ready" reflects that the local serve process is online and heartbeating to Cloud.
- When you exit the terminal, the heartbeat stops → Cloud marks the served deployment "Not Ready" and it stops creating/running new flow runs.

How to keep things running after you close the terminal
Option A: Keep using serve, but keep the process alive
- Run it under something that stays up:
```
# tmux
tmux new -s prefect
python your_script.py

# or nohup
nohup python your_script.py > serve.log 2>&1 &
```
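If you also want the process to survive reboots, a systemd service is one option. A minimal sketch (the unit name, user, and paths here are hypothetical; substitute your own, and enable it with `systemctl enable --now prefect-serve`):
```
# /etc/systemd/system/prefect-serve.service  (hypothetical name and path)
[Unit]
Description=Prefect .serve() process
After=network-online.target

[Service]
User=youruser
WorkingDirectory=/home/youruser/project
Environment=PREFECT_API_URL=<your-cloud-api-url>
Environment=PREFECT_API_KEY=<your-api-key>
ExecStart=/usr/bin/python3 /home/youruser/project/your_script.py
Restart=always

[Install]
WantedBy=multi-user.target
```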
- Or run it under a process manager (like the systemd sketch above) or in a Docker container with a restart policy. Functional, but still "dev-ish".

Option B (recommended): Use a deployment + worker
This is the reliable path for Prefect Cloud.
1) Create a work pool (pick the type you'll actually run on):
```
prefect work-pool create my-pool --type process
```
2) Deploy your flow. In Prefect 3.x, do it either via CLI or in code:
- In code (schedules supported here, verified):
```
from prefect import flow

@flow
def my_flow():
    ...

if __name__ == "__main__":
    (
        my_flow
        .from_source("https://github.com/you/your-repo", "path/to/flow.py:my_flow")
        .deploy(
            name="prod",
            work_pool_name="my-pool",
            cron="0 * * * *",  # hourly, for example
        )
    )
```
- Or with the CLI:
```
prefect deploy ./path/to/flow.py:my_flow -n prod -p my-pool
```
(You can add a schedule during deploy or later in the UI.)
3) Start a worker somewhere always-on (server, container, k8s):
```
prefect worker start -p my-pool
```
- Leave this running; it polls Cloud and executes scheduled/manual runs.

If you're using Docker/Kubernetes workers, install the appropriate extras and start the worker accordingly (see the sketch below). If you tell me where you want this to run (laptop, VM, Docker, Kubernetes), I'll give you the exact commands and a minimal template. And yes, I'll try to hide my excitement.
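For reference, a sketch of the Docker flavor of those steps (the pool name is hypothetical; `prefect-docker` is the integration package the Docker worker type needs):
```
pip install prefect-docker
prefect work-pool create docker-pool --type docker
prefect worker start -p docker-pool
```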
s
@Marvin The code needs to be in github for it to run remotely?
m
thought for 87.6 seconds
Short answer: no. Your code just has to be reachable by whatever is executing it (the worker), not specifically GitHub.

Common ways to run remotely without GitHub:
- Process worker on a VM/server with your code checked out locally
  - Keep your project on disk where the worker runs, then deploy and start a worker:
```
# from your project root
prefect deploy ./path/to/flow.py:my_flow -n prod -p my-pool
prefect worker start -p my-pool
```
The worker imports your local code. No GitHub involved.
- Package your code into a Docker image
  - Build/push an image and point the deployment at it:
```
from prefect import flow
from prefect.docker import DockerImage

@flow
def my_flow(): ...

image = DockerImage(name="ghcr.io/you/myflow", tag="v1")
my_flow.deploy(name="prod", work_pool_name="docker-pool", image=image)
```
Docker/K8s workers will pull the image and run it. Again, no GitHub required.
- Point the deployment at a remote source (GitHub, S3/GCS/Azure)
  - If you do want remote fetching (not necessarily GitHub), use a readable filesystem:
```
from prefect import flow
from prefect.filesystems import RemoteFileSystem

storage = RemoteFileSystem(basepath="s3://my-bucket/my-project")

@flow
def my_flow(): ...

(
    my_flow
    .from_source(storage, "flows/my_flow.py:my_flow")
    .deploy(name="prod", work_pool_name="my-pool")
)
```
The worker fetches code from S3/GCS/Azure, etc. (see the credentials note below).

If you tell me where you plan to run the worker (local VM, Docker, Kubernetes) and whether you want to keep code private/offline, I'll give you the exact setup. I'll try not to look too thrilled.
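One detail the S3 example glosses over: the worker needs credentials to read the bucket. `RemoteFileSystem` accepts fsspec settings, so a hedged sketch (key names follow s3fs conventions; values are placeholders):
```
from prefect.filesystems import RemoteFileSystem

# Sketch: credentials passed through to fsspec/s3fs so the worker can fetch code
storage = RemoteFileSystem(
    basepath="s3://my-bucket/my-project",
    settings={"key": "<AWS_ACCESS_KEY_ID>", "secret": "<AWS_SECRET_ACCESS_KEY>"},
)
```
Alternatively, standard AWS environment variables or an instance role on the worker machine work too.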
s
@Marvin Won't the process stop if my laptop/machine is switched off?
m
thought for 68.0 seconds
Yes. If the machine running `.serve()` or a Prefect worker goes to sleep or shuts down, your flows stop. Prefect Cloud orchestrates; it doesn't execute your code for you. Something has to be online to pick up runs.

How to keep it running when your laptop is off:
- Run a worker on an always-on machine (VM, server, container host, Kubernetes)
- Create a work pool and deploy your flow to it
- Start a worker there and keep it running as a service

Example: simple "process" worker on a small VM
```
# on the VM
export PREFECT_API_URL="https://api.prefect.cloud/api/accounts/<acct>/workspaces/<ws>"
export PREFECT_API_KEY="<your-key>"

prefect work-pool create my-pool --type process
prefect worker start -p my-pool
```
Run it under systemd, tmux, or Docker so it restarts automatically (a Docker sketch follows below).

Docker route (good for restarts and isolation)
- Create a Docker work pool
- Package your code into an image and deploy with that image
- Run a Docker worker on an always-on host with a restart policy

Kubernetes route
- Create a Kubernetes work pool
- Run the Prefect worker as a Deployment in your cluster
- Deploy flows that reference your image or source

If you tell me where you want to run this (VM, Docker, Kubernetes) and where your code lives (local disk, Git repo, object storage), I'll give you the exact setup. I'll even pretend it's exciting.
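For the "run it under Docker" option, a minimal sketch of keeping the worker itself alive (`prefecthq/prefect` is the official image; the tag and pool name are assumptions to adjust):
```
docker run -d --restart unless-stopped \
  -e PREFECT_API_URL="https://api.prefect.cloud/api/accounts/<acct>/workspaces/<ws>" \
  -e PREFECT_API_KEY="<your-key>" \
  prefecthq/prefect:3-latest \
  prefect worker start -p my-pool
```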
s
@Marvin You can't run the code in Prefect cloud?
m
thought for 48.6 seconds
Correct. Prefect Cloud orchestrates; it doesn't execute your code. You must provide compute (a worker) somewhere that's always on.

How it works
- Prefect Cloud: schedules, tracks state, stores logs, triggers runs.
- Your infrastructure: runs a Prefect worker that actually executes your flows.

Ways to run without your laptop
- VM/server with a Process worker (simple)
```
# on the server
export PREFECT_API_URL="https://api.prefect.cloud/api/accounts/<acct>/workspaces/<ws>"
export PREFECT_API_KEY="<key>"

prefect work-pool create my-pool --type process
prefect deploy ./path/to/flow.py:my_flow -n prod -p my-pool
prefect worker start -p my-pool
```
Keep the worker running via systemd, tmux, or Docker.
- Docker/Kubernetes workers (more robust)
  - Build an image with your code, deploy referencing that image, and run a Docker or K8s worker on an always-on host/cluster (a Kubernetes sketch follows below).
- Remote source (optional)
  - If you prefer Cloud to fetch code, point deployments at a repo or object storage (GitHub/S3/GCS/Azure). Not required if the worker already has the code.

If you tell me your target environment (VM, Docker, Kubernetes) and where your code should live (local disk, image, or remote storage), I'll give you exact commands. I'll even act enthusiastic.
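For the Kubernetes route, a minimal sketch of the worker as a Deployment (the image tag, Secret name, and pool name are assumptions; a kubernetes-type pool also needs the `prefect-kubernetes` package available in the image):
```
# Hypothetical manifest: a Prefect worker polling "my-pool" from inside the cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefect-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prefect-worker
  template:
    metadata:
      labels:
        app: prefect-worker
    spec:
      containers:
        - name: worker
          image: prefecthq/prefect:3-latest  # assumption: extend with prefect-kubernetes for a k8s pool
          command: ["prefect", "worker", "start", "-p", "my-pool"]
          envFrom:
            - secretRef:
                name: prefect-api  # hypothetical Secret holding PREFECT_API_URL and PREFECT_API_KEY
```
Prefect also publishes a Helm chart (prefect-worker) if you'd rather not hand-roll the manifest.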
s
@Marvin uvx prefect-cloud deploy 01_getting_started.py:main \ --name my_first_deployment \ --from https://github.com/PrefectHQ/quickstart Is it running remotely here?
m
thought for 32.6 seconds
Short answer: no. Prefect Cloud isn't executing your code. That command registers a deployment in Cloud that points at the GitHub repo (`--from https://github.com/PrefectHQ/quickstart`).

When a run is triggered, a Prefect worker will:
- fetch the code from that repo
- execute it on the machine where the worker is running

So whether it "runs remotely" depends on where your worker lives:
- Worker on your laptop → it runs on your laptop
- Worker on a server/VM/Kubernetes → it runs there (i.e., remotely from you)

If there's no worker online for the deployment's work pool, the deployment will show Not Ready and nothing will run.

Repo you referenced for context: Prefect Quickstart
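If you want to see where a run actually executes, trigger one and watch which machine produces logs. A sketch with the standard Prefect CLI (the `flow/deployment` slug comes from the names in the command above):
```
prefect deployment run 'main/my_first_deployment'
```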
s
@Marvin It's using the default work pool and it's been running every two minutes remotely when I used the command: uvx prefect-cloud deploy 01_getting_started.py:main \ --name my_first_deployment \ --from https://github.com/PrefectHQ/quickstart
m
thought for 62.1 seconds
Sounds like your workspace has a Hosted "default" work pool. In that case, yes: Prefect is running your flow on Prefect-managed compute, which is why it keeps firing every two minutes even with your machine off.

What changed compared to your earlier .serve() attempt:
- `.serve()` = a local process on your machine; dies when your terminal/laptop does.
- `uvx prefect-cloud deploy ... --from https://github.com/...` = a deployment targeting the "default" hosted pool; Prefect's managed worker pulls your code from GitHub and runs it on Prefect's infrastructure.

How to confirm
- In Cloud UI: Work Pools -> default -> Type should show "Hosted".
- Or CLI:
```
prefect work-pool ls -v
```
Look for the "Type" column.

Notes when using Hosted pools
- Your code must be accessible to the hosted worker (GitHub, OCI image, or object storage). Your `--from` GitHub is perfect.
- Filesystem is ephemeral; use Blocks/Variables for secrets and external storage for data.
- There may be usage limits; for heavy/long-running jobs, consider your own work pool/worker.

If you want to keep using Hosted, I can help you set up schedules, parameters, and secrets cleanly. If you'd rather run on your own VM/Docker/Kubernetes, I'll give you the exact steps to switch pools.
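One more way to confirm the pool type from the CLI, if the table output is ambiguous (a sketch; `inspect` prints the pool's full configuration, including its type):
```
prefect work-pool inspect "default"
```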
s
@Marvin uvx prefect-cloud deploy 01_getting_started.py:main \ --name my_first_deployment \ --from https://github.com/PrefectHQ/quickstart does it need to point to github? can't it point to a local file system?
m
thought for 36.7 seconds
For the Hosted "default" work pool: no, it can't see your laptop's filesystem. Hosted workers only run code they can fetch remotely (e.g., GitHub, a container image, or object storage via a RemoteFileSystem). A local path won't be available to Prefect's managed compute.

Your options:
- Keep using Hosted (no machines to manage)
  - Point deployments at a remote source:
    - Git repo: `--from https://github.com/...`
    - Object storage: use `Flow.from_source(RemoteFileSystem(...), "path:flow")`
    - Container image: deploy with an image and run on a Docker/K8s hosted pool (if enabled)
- Use your own worker if you want local files
  - Create a Process or Docker work pool and run a worker on a machine that has your code on disk; the worker can import your local files directly.

Examples
Hosted + S3 (or GCS/Azure):
```
from prefect import flow
from prefect.filesystems import RemoteFileSystem

@flow
def my_flow(): ...

storage = RemoteFileSystem(basepath="s3://my-bucket/my-project")

(
    my_flow
    .from_source(storage, "flows/my_flow.py:my_flow")
    .deploy(name="prod", work_pool_name="default")  # Hosted pool
)
```
Self-managed worker + local files:
```
# on your server/VM that has the code checked out
prefect work-pool create my-pool --type process
prefect deploy ./flows/my_flow.py:my_flow -n prod -p my-pool
prefect worker start -p my-pool
```
If you tell me which route you prefer (Hosted vs your own worker) and where your code should live (Git, image, S3/GCS/Azure), I’ll give you the exact commands and any credential setup you’ll need.