FuETL
04/27/2022, 8:14 PMMarked Failed by a Zombie Killer process
how can I increase this tolerance, and why is this happening? My task just takes some time, but that doesn't mean it has failed.
David Haynes
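In Prefect 1.x, "Marked Failed by a Zombie Killer process" is triggered when flow-run heartbeats stop arriving, not by how long the task actually runs. A minimal, hedged sketch of one common mitigation (the env var and run config below are standard Prefect 1 settings, but whether they fit this particular deployment is an assumption):

```python
# Sketch: Prefect 1.x marks a run failed when heartbeats stop arriving.
# Heartbeats default to a subprocess; a CPU-starved task can starve it.
# Running heartbeats in a thread instead is a common mitigation.
from prefect import Flow
from prefect.run_configs import UniversalRun

with Flow("long-running-flow") as flow:
    ...  # your tasks here

flow.run_config = UniversalRun(
    # "thread" keeps heartbeats alive inside the flow-run process;
    # "off" disables heartbeats entirely (zombie detection then can't
    # see this run at all).
    env={"PREFECT__CLOUD__HEARTBEAT_MODE": "thread"}
)
```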
04/27/2022, 8:20 PMMars
04/27/2022, 8:47 PMMatt Alhonte
04/28/2022, 1:34 AMRunning state but also has an End Time, and the Duration isn't ticking up? Example:
Jonathan Mathews
04/28/2022, 8:39 AMJames Phoenix
04/28/2022, 10:12 AMJames Phoenix
04/28/2022, 10:47 AMDavid Evans
04/28/2022, 10:50 AMprefect CLI to push flows from GitHub Actions.
But where we're hitting problems is with dependency management (both internal code which is shared between multiple tasks/flows, and external dependencies). From what I've seen, Prefect doesn't really support this at all (flows are expected to be self-contained single files), with the implication being that the agent itself has to have any shared dependencies pre-installed (which in our case would mean that any significant changes require re-building and re-deploying the agent image - a slow process and not very practical if we have long-lived tasks or multiple people testing different flows at the same time). I tried looking around for Python bundlers and found stickytape, but that seems a bit too rough-and-ready for any real use.
This seems to be a bit of a known problem: 1, 2 and specifically I see:
V2 supports virtual and conda environment specification per flow run which should help some
And I found some documentation for this (which seems to tie it to the new concept of deployments), but I'm still a bit confused on the details:
• would the idea be to create a deployment for every version of every flow we push? Will we need to somehow tidy up the old deployments ourselves?
• can deployments be given other internal files (i.e. common internal code), or is it limited to just external dependencies? Relatedly, do deployments live on the server or in the configured Storage?
• is there any way to use zipapp bundles?
• ideally we want engineers to be able to run flows in 3 ways: entirely locally; on a remote runner triggered from their local machine (with local code, including their latest local dependencies); and entirely remotely (pushed to the cloud server via an automated pipeline and triggered or scheduled - basically "push to production") — I'm not clear on how I should be thinking about deployments vs flows to make these 3 options a reality.
I also wonder if I'm going down a complete rabbit hole and there is an easier way to do all of this?
David Evans
04/28/2022, 11:58 AMprefecthq:prefect? or would it just need python:3?) for each flow? And I guess these docker images would run pip install -r requirements.txt as a build layer. But if we can achieve this with a virtual environment instead I think that would be preferable (I'm thinking in terms of the flow needed for an engineer to try something out by pushing it to the runner from their local machine)
(I can see the high-level concept here but I'm struggling to see how it will look in practice for the various use-cases)
davzucky
04/28/2022, 11:58 AMBaris Cekic
04/28/2022, 12:49 PMChris Reuter
04/28/2022, 1:02 PMxyzz
04/28/2022, 1:13 PMAmruth VVKP
04/28/2022, 2:11 PMJoshua Greenhalgh
04/28/2022, 2:12 PMDavid Evans
04/28/2022, 2:35 PMBob Colner
04/28/2022, 3:04 PMPhilip MacMenamin
04/28/2022, 3:36 PMfrom prefect import task, Flow
from typing import Tuple

@task
def double_and_triple(x: int) -> Tuple[int, int]:
    return x * 2, x * 3

with Flow("This works") as flow:
    a = [1, 2, 3]
    double, triple = double_and_triple.map(x=a)
Geoffrey Keating
04/28/2022, 3:48 PMValueError: [TypeError("'_thread.RLock' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]
Code to reproduce in thread
Prefect 1.3 didn't seem to care about loggers being part of a class used in a flow - any patterns worth adopting to replace this, or does this merit a fix?
Chris Reuter
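The `'_thread.RLock' object is not iterable` error above typically comes from Prefect 2 trying to serialize or deep-copy an object that holds a `logging.Logger` (loggers contain locks). One pattern, sketched under the assumption that Prefect 2's `get_run_logger` is available (the `Processor` class here is hypothetical), is to not store the logger on the object at all:

```python
# Sketch: avoid keeping a logging.Logger as an attribute on objects that
# flows/tasks serialize; fetch the run-scoped logger inside the task instead.
from prefect import flow, task, get_run_logger

class Processor:
    # no self.logger attribute -- keeps the object serializable
    def process(self, x: int) -> int:
        return x * 2

@task
def run_processor(x: int) -> int:
    logger = get_run_logger()  # Prefect 2 run-scoped logger
    result = Processor().process(x)
    logger.info("processed %s -> %s", x, result)
    return result

@flow
def my_flow():
    return run_processor(21)
```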
04/28/2022, 5:19 PMChris Reuter
04/28/2022, 6:45 PMAlex Rogozhnikov
04/28/2022, 6:52 PMThere is also support in ``ECSCluster`` for GPU aware Dask clusters. To do this you need to create an ECS cluster with GPU capable instances (from the ``g3``, ``p3`` or ``p3dn`` families) and specify the number of GPUs each worker task should have.
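The quoted dask-cloudprovider docs can be sketched roughly as follows (assumes dask-cloudprovider is installed and AWS credentials are configured; the ARN and worker counts are placeholders, not values from this thread):

```python
# Sketch: a GPU-aware Dask cluster on ECS, per the docs quoted above.
from dask_cloudprovider.aws import ECSCluster

cluster = ECSCluster(
    cluster_arn="arn:aws:ecs:...",  # an existing ECS cluster backed by
                                    # GPU-capable instances (g3/p3/p3dn)
    worker_gpu=1,                   # number of GPUs per worker task
    n_workers=2,
)
```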
Amruth VVKP
04/28/2022, 7:46 PMTom Manterfield
04/28/2022, 8:41 PMinvalid duration format errors showing up in my Orion API, just checking if this is a bug or a misconfig on my part?
Greg Wyne
04/28/2022, 10:19 PMMatthew Roeschke
04/28/2022, 10:21 PMmap. I added max_retries to this task and got this UserWarning, which I don't really know how to address based on the link. I thought I could pass the results from a functional task to another task?
UserWarning: Task <...> has retry settings but some upstream dependencies do not have result types. See <https://docs.prefect.io/core/concepts/results.html> for more details.
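The warning above means the upstream task has no Result configured, so a retrying downstream task cannot re-load its inputs from checkpointed data. A hedged Prefect 1.x sketch (the `LocalResult` directory and the task names are illustrative, not from this thread):

```python
# Sketch: give the upstream task a result type so a retrying downstream
# task can re-load its inputs from checkpointed data (Prefect 1.x).
from datetime import timedelta
from prefect import task, Flow
from prefect.engine.results import LocalResult

@task(result=LocalResult(dir="/tmp/prefect-results"), checkpoint=True)
def produce(x: int) -> int:
    return x + 1

@task(max_retries=3, retry_delay=timedelta(seconds=10))
def consume(y: int) -> int:
    return y * 2

with Flow("retry-with-results") as flow:
    consume(produce(1))
```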
Izu
04/29/2022, 10:19 AMfrom my_module import my_function within the flow script. Now here's the thing: I have registered the flow and started my local agent. When I try to trigger the job to run from the Prefect UI, I get the message:
Failed to load and execute flow run: FlowStorageError('An error occurred while unpickling the flow:\n ModuleNotFoundError("No module named \'extract_strings\'")\nThis may be due to a missing Python module in your current environment. Please ensure you have all required flow dependencies installed.')
The `extract_strings` function is defined in another module in the same directory.
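This unpickling error usually happens because pickle-based storage serializes only the flow object, not sibling modules like `extract_strings`, so the agent's environment must be able to import them. One hedged sketch (Prefect 1.x; the paths are illustrative) is to store the flow as a script so the whole file is re-imported at run time:

```python
# Sketch: store the flow as a script rather than a pickle, so sibling
# modules are imported fresh when the agent runs the flow (Prefect 1.x).
from prefect import Flow
from prefect.storage import Local

flow = Flow("my-flow")
flow.storage = Local(
    path="/path/to/project/my_flow.py",  # hypothetical location of the flow script
    stored_as_script=True,               # import the script instead of unpickling
)
```

The agent still needs `extract_strings` importable from its working directory (e.g., same directory or on PYTHONPATH).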
Can anyone help?
Florian Guily
04/29/2022, 10:26 AMVivek Kaushal
04/29/2022, 1:19 PMAmruth VVKP
04/29/2022, 1:29 PMAmruth VVKP
04/29/2022, 1:29 PMGeoffrey Keating
04/29/2022, 1:34 PMAnna Geller
04/29/2022, 1:35 PMprefect config set PREFECT_LOGGING_EXTRA_LOGGERS=scipy
You may also adjust the log level used by specific Orion log handlers. E.g., you could set PREFECT_LOGGING_HANDLERS_ORION_LEVEL=ERROR
to have only ERROR
logs reported to Orion. The console handlers will still default to level INFO
.Amruth VVKP
04/29/2022, 1:40 PMAnna Geller
04/29/2022, 2:12 PMsink. This Discourse topic is for 1.0, but maybe it gives you some ideas on how to approach it in 2.0.
Amruth VVKP
05/03/2022, 8:42 AM