Ben Ayers-Glassey
03/30/2023, 3:56 PM"The number of mapped children expected to run. Note that the number of active mapped runs may be less than this if some have not yet entered a Pending state." This seems like it would make sense while the flow was still running, but not once it has completed. 🤔
Vishnu Duggirala
03/30/2023, 4:43 PMCharles Leung
03/30/2023, 5:14 PM
import logging

from prefect import flow

@flow
def api_flow(url):
    # logger = get_run_logger()
    logger = logging.getLogger()
    # set lower level
    logger.setLevel(logging.INFO)
    # create file handler
    filehdlr = logging.FileHandler('filehandler.txt', mode='w')
    # set level
    filehdlr.setLevel(logging.INFO)
    # add handler to logger
    logger.addHandler(filehdlr)
    fact_json = call_api(url)
    logger.info("TEST")
    return fact_json
But it only adds local outputs into the .txt file (filehandler.txt shown below):
file handler is created to handle log message in the file
we can create many handlers for the logger
Created task run 'call_api-0' for task 'call_api'
Executing 'call_api-0' immediately...
Finished in state Completed()
TEST
Finished in state Completed()
Is there a way I can get the exact flow logs (including the timestamps) saved somewhere (either on-prem or on S3) like the output shown below from Prefect?William Jamir
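For the log-capture question above, one approach worth trying (a rough sketch, not an official recipe; the file name and format string are made up, and it assumes the flow-run and task-run loggers still propagate to the parent "prefect" logger, which is the default): attach a FileHandler with a timestamped formatter to the "prefect" logger namespace instead of the root logger. Prefect's logging can also be customized through its logging configuration file (pointed at by the PREFECT_LOGGING_SETTINGS_PATH setting), and a file written this way can then be shipped to S3 with any ordinary upload step.

```python
# Sketch: send Prefect's own run logs (the timestamped lines normally shown in
# the terminal) to a local file by adding a handler to the "prefect" logger.
import logging

from prefect import flow, get_run_logger

prefect_logger = logging.getLogger("prefect")
file_handler = logging.FileHandler("prefect_flow_logs.txt", mode="a")
file_handler.setLevel(logging.INFO)
file_handler.setFormatter(
    logging.Formatter(
        "%(asctime)s.%(msecs)03d | %(levelname)-7s | %(name)s - %(message)s",
        datefmt="%H:%M:%S",
    )
)
prefect_logger.addHandler(file_handler)


@flow
def api_flow(url):
    logger = get_run_logger()
    logger.info("TEST")  # should now appear in the file with a timestamp


if __name__ == "__main__":
    api_flow("https://example.com")
```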
03/30/2023, 5:26 PMJohn
03/30/2023, 7:51 PMFlorian Giroud
03/30/2023, 8:17 PMscott
03/30/2023, 8:48 PMI don't see this on the VM we're running our prefect agents and docker containers on that run the flows. Since we don't have the PREFECT_LOCAL_STORAGE_PATH setting (defaults to ~/.prefect/storage) env var set, is the ~/.prefect/storage on prefect servers?Jenia Varavva
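On the question above: a quick way to see which path a given machine (the VM, the agent container, or the flow-run container) would actually use is to read the setting directly. A minimal sketch, assuming Prefect 2.x:

```python
# Sketch: print the resolved local storage path on whatever machine this runs on.
# If PREFECT_LOCAL_STORAGE_PATH is unset it falls back to ~/.prefect/storage,
# which lives on the filesystem where the flow actually executes (e.g. inside
# the Docker container), not on the Prefect server.
from prefect.settings import PREFECT_LOCAL_STORAGE_PATH

print(PREFECT_LOCAL_STORAGE_PATH.value())
```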
03/30/2023, 8:51 PMprefect.exceptions.PrefectHTTPStatusError: Client error '404 Not Found' for url 'http://<host>/api/work_pools/data-qa/get_scheduled_flow_runs'"
, i.e. it fails on a missing work queue. What am I doing wrong? Both server and agent are on 2.9.0.Choenden Kyirong
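One thing that may be worth ruling out for the 404 above (a sketch, assuming Prefect 2.9 and that the agent was started against a pool named data-qa): confirm the work pool actually exists on the server the agent is talking to, since the failing URL is the work-pool endpoint.

```python
# Sketch: ask the API whether the "data-qa" work pool exists; an exception here
# (rather than a WorkPool object) would explain the 404 the agent is hitting.
import asyncio

from prefect import get_client


async def check_pool() -> None:
    async with get_client() as client:
        pool = await client.read_work_pool("data-qa")
        print(pool.name, pool.type)


asyncio.run(check_pool())
```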
03/31/2023, 1:13 AMDeceivious
03/31/2023, 8:52 AMwith prefect.tags affects both the sub flow and the tasks under the sub flows.Tim-Oliver
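A minimal sketch of the behavior described above (flow, task, and tag names here are made up): tags applied with the context manager around a subflow call are picked up by the subflow run and by the task runs inside it.

```python
from prefect import flow, task, tags


@task
def child_task() -> int:
    return 1


@flow
def sub_flow() -> int:
    return child_task()


@flow
def parent_flow() -> None:
    # The "nightly" tag ends up on the sub_flow run and on child_task's run.
    with tags("nightly"):
        sub_flow()


if __name__ == "__main__":
    parent_flow()
```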
03/31/2023, 11:13 AMDeceivious
03/31/2023, 11:19 AMtask concurrency tag by tag name: if the tag doesn't exist, the wrong exception is being raised.
Exception being raised: prefect.exceptions.PrefectHTTPStatusError
Exception that should have been raised: prefect.exceptions.ObjectNotFound
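A small sketch reproducing the behavior described above (the tag name is made up, and the original message does not say which client call was involved, so read_concurrency_limit_by_tag is only an assumption):

```python
import asyncio

from prefect import get_client
from prefect.exceptions import ObjectNotFound, PrefectHTTPStatusError


async def inspect_missing_tag() -> None:
    async with get_client() as client:
        try:
            await client.read_concurrency_limit_by_tag(tag="no-such-tag")
        except ObjectNotFound:
            print("tag not found (the exception one would expect)")
        except PrefectHTTPStatusError as exc:
            print(f"got the generic HTTP error instead: {exc}")


asyncio.run(inspect_missing_tag())
```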
Filip Panovski
03/31/2023, 1:29 PMJustin Trautmann
03/31/2023, 2:23 PMalex
03/31/2023, 4:22 PMAdam
03/31/2023, 4:59 PMSimon Rascovsky
03/31/2023, 6:46 PM• Set up prefect-server and prefect-agent, making sure they are in the same named network.
• The agent has the host's docker socket mounted in volumes so it can spin up parallel containers with the Docker infrastructure block, not via Docker-in-Docker.
• Created a custom flow runner image based on the official Docker image, with flows installed in /opt/prefect/flows and all the Python dependencies installed.
• I have created a Docker infrastructure block from the host machine to have the agent spin up the flow runner containers.
After I have registered both the Docker block and a deployment, I have tried to do a quick run of the deployment through the UI, but the flow run crashes. The agent log shows (relevant bits):
12:59:26.941 | INFO | prefect.agent - Submitting flow run '977526d8-f473-4713-b454-a792df88af5f'
12:59:27.023 | INFO | prefect.infrastructure.docker-container - Creating Docker container 'meteoric-bird'...
12:59:27.160 | INFO | prefect.infrastructure.docker-container - Docker container 'meteoric-bird' has status 'created'
12:59:27.450 | INFO | prefect.agent - Completed submission of flow run '977526d8-f473-4713-b454-a792df88af5f'
12:59:27.453 | INFO | prefect.infrastructure.docker-container - Docker container 'meteoric-bird' has status 'running'
17:59:28.323 | DEBUG | prefect.profiles - Using profile 'default'
<frozen runpy>:128: RuntimeWarning: 'prefect.engine' found in sys.modules after import of package 'prefect', but prior to execution of 'prefect.engine'; this may result in unpredictable behaviour
17:59:28.335 | DEBUG | prefect.client - Connecting to API at <http://prefect-server:4200/api/>
17:59:29.873 | ERROR | prefect.engine - Engine execution of flow run '977526d8-f473-4713-b454-a792df88af5f' exited with unexpected exception
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/anyio/_core/_sockets.py", line 186, in connect_tcp
addr_obj = ip_address(remote_host)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/ipaddress.py", line 54, in ip_address
raise ValueError(f'{address!r} does not appear to be an IPv4 or IPv6 address')
ValueError: 'prefect-server' does not appear to be an IPv4 or IPv6 address
Some things I’ve tried or confirmed:
• The prefect-agent and prefect-server containers are on the same network. Confirmed by docker network inspect.
• The prefect-agent can see the prefect-server container and resolve the name when I use both ping (ping prefect-server) and hitting the API (http GET prefect-server:4200/api/health using httpie).
• I can run a python flow file like this one directly in the prefect-agent and it runs fine, communicating the results to the prefect-server.
• The agent is able to spin up the flow runner container, right before it crashes.
I don't understand why the agent seems to be able to resolve the prefect-server hostname when running flows locally, but fails to do so when running the same flow through a deployment.
Any suggestions on what may be happening, or what I can do to further debug this issue? Many thanks in advance for your help!John
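The second traceback above (note the timestamps in a different timezone) appears to come from inside the flow-run container rather than from the agent itself, so one thing worth checking is whether the container created by the Docker infrastructure block is attached to the same compose network and pointed at the right API URL. A rough sketch of such a block (network, image, and block names here are placeholders, not taken from the original setup):

```python
# Sketch: a DockerContainer infrastructure block that joins the compose network
# so the flow-run container can resolve the prefect-server hostname.
from prefect.infrastructure import DockerContainer

docker_block = DockerContainer(
    image="my-registry/flow-runner:latest",        # placeholder image name
    networks=["prefect-network"],                  # placeholder compose network name
    env={"PREFECT_API_URL": "http://prefect-server:4200/api"},
)
docker_block.save("flow-runner", overwrite=True)
```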
03/31/2023, 7:18 PMJohn Horn
03/31/2023, 8:32 PMJohn
03/31/2023, 9:42 PMprivileged to true.
I wonder how Prefect works with docker under the hood. I don't see why the same code & docker image gives different behavior with/without Prefect.Tibs
04/02/2023, 11:49 AMSubmission failed. IndexError: list index out of range. Does anyone know what may be the cause of this?
Prefect version 2.8.2, flows run using ECSTask.Justin
04/03/2023, 7:50 AM# apiVersion: v1
# kind: PersistentVolumeClaim
# metadata:
# name: composer-bigtechmomentum-v1-pvc
# spec:
# accessModes:
# - ReadWriteOnce
# resources:
# requests:
# storage: 1Mi
# ---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: composer-bigtechmomentum-v1
spec:
  schedule: "42 3 * * *"
  suspend: false
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: composer-bigtechmomentum-v1
              image: guestros/xxxxxxxxxx:latest
              imagePullPolicy: Always
              # volumeMounts:
              #   - mountPath: /app/persistent/
              #     name: composer-bigtechmomentum-v1-pv
Jonathan Langlois
04/03/2023, 8:20 AMIs there a way to retry CRASHED tasks or to avoid setting them as CRASHED?
This issue was resolved and we can have Dask retry RUNNING tasks now. But since the prefect 2.8.7 version, prefect detects that the Dask worker will be shut down and sets the task as CRASHED. When Dask resends the failing task, we end up with Task run '...' already finished.Abhishek Mitra
04/03/2023, 8:46 AMPREFECT_API_URL variable. Maybe there's a knowledge gap. Can somebody let me know where I would find ACCOUNT_ID and WORKSPACE_ID? And is there a way to validate the URL beforehand?
Thanks.Rikimaru Yamaguchi
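On the question above: for Prefect Cloud, the two IDs are visible in the workspace URL in the browser (https://app.prefect.cloud/account/<ACCOUNT_ID>/workspace/<WORKSPACE_ID>), and prefect cloud login or prefect cloud workspace set will populate PREFECT_API_URL without assembling it by hand. To validate the configured URL beforehand, a minimal sketch (assuming Prefect 2.x):

```python
# Sketch: ask the configured API whether it is reachable with the current
# PREFECT_API_URL / PREFECT_API_KEY before running anything else.
import asyncio

from prefect import get_client


async def check_api() -> None:
    async with get_client() as client:
        # api_healthcheck() returns None when the API answered, otherwise it
        # returns the exception it ran into.
        error = await client.api_healthcheck()
        print("API reachable" if error is None else f"API unreachable: {error}")


asyncio.run(check_api())
```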
04/03/2023, 11:17 AM<https://api.prefect.cloud/api/accounts/{account_id}/workspaces/{workspace_id}/work_queues/{id}/status>
Is there an SDK available?
Or is there a better way to do health checks?Angelika Tarnawa
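For the question above, the endpoint can also be called directly with the credentials Prefect already stores; a rough sketch using httpx (which ships with Prefect), with the work queue ID as a placeholder. The Python client from prefect.get_client() exposes work-queue reads as well (e.g. read_work_queue_by_name) if a higher-level interface is preferred.

```python
# Sketch: hit the work-queue status endpoint mentioned above using the
# PREFECT_API_URL / PREFECT_API_KEY already configured for the workspace.
import httpx

from prefect.settings import PREFECT_API_KEY, PREFECT_API_URL

work_queue_id = "00000000-0000-0000-0000-000000000000"  # placeholder UUID

response = httpx.get(
    f"{PREFECT_API_URL.value().rstrip('/')}/work_queues/{work_queue_id}/status",
    headers={"Authorization": f"Bearer {PREFECT_API_KEY.value()}"},
)
response.raise_for_status()
print(response.json())
```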
04/03/2023, 11:49 AMMamatha
04/03/2023, 1:43 PMprefect.exceptions.AuthorizationError: [{'path': ['get_runs_in_queue'], 'message': 'AuthenticationError: Forbidden', 'extensions': {'code': 'UNAUTHENTICATED'}}]
Thanks in advance.Kristian Andersen Hole
04/03/2023, 1:54 PMMike Logaciuk
04/03/2023, 2:15 PMFile "/usr/local/lib/python3.10/site-packages/prefect/server/database/alembic_commands.py", line 53, in alembic_upgrade
alembic.command.upgrade(alembic_config(), revision, sql=dry_run)
File "/usr/local/lib/python3.10/site-packages/alembic/command.py", line 378, in upgrade
script.run_env()
File "/usr/local/lib/python3.10/site-packages/alembic/script/base.py", line 272, in _catch_revision_errors
raise util.CommandError(resolution) from re
alembic.util.exc.CommandError: Can't locate revision identified by '422f8ba9541d'
Application startup failed. Exiting.
Our main API is running outside K8S on another server (a VM with a dedicated Postgres) with the UI turned off, while the pod on K8S is running with the UI turned on; its env points to the API outside K8S.
Agents on K8S keep working without any problem afterwards, while the UI can't start.
Is this expected behaviour?
It's not the first time that a new build breaks something on prod in our environment.Mark NS
04/03/2023, 4:15 PM