chicago-joe
08/10/2022, 6:57 PM
ctx_credentials.get('ACCESS_KEY')
'str' object has no attribute 'get'
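For reference, that AttributeError means `ctx_credentials` is a plain string at that point. If the lookup returns the credentials as a JSON string, it has to be parsed into a dict before `.get` works; a minimal sketch (the payload and key value are hypothetical):

```python
import json

# Hypothetical payload: assume the credential lookup returned a JSON *string*.
ctx_credentials = '{"ACCESS_KEY": "abc123"}'

# Calling ctx_credentials.get("ACCESS_KEY") here would raise
# AttributeError: 'str' object has no attribute 'get'.
# Parse the string into a dict first:
parsed = json.loads(ctx_credentials)
print(parsed.get("ACCESS_KEY"))  # abc123
```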
YD
08/10/2022, 7:08 PM
Mateo Merlo
08/10/2022, 7:21 PM
Tony Yun
08/10/2022, 7:44 PM
pip install -e my_module
and it works when doing `python flow.py`; but when I register the flow, it always fails with a ModuleNotFoundError: No module named 'my_module' error.
Neil Natarajan
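For reference, a ModuleNotFoundError at registration or run time usually means the environment that actually executes the registered flow does not have `my_module` installed, even though the local editable install works. A minimal, hypothetical packaging sketch (all names are placeholders):

```python
# setup.py (hypothetical minimal example)
from setuptools import setup, find_packages

setup(
    name="my_module",
    version="0.1.0",
    packages=find_packages(),
)
```

The same `pip install -e .` (or a plain `pip install .` in the image the agent/executor uses) then has to happen wherever the registered flow runs, not just on the machine where `python flow.py` succeeds.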
08/10/2022, 8:35 PM
@flow(name="subflow")
def sub_flow():
    # ... some api call ...
    return response

@flow(name="parent flow")
def parent_flow():
    for i in range(3):
        response = sub_flow()
This flow is quite simple, but in reality my subflow is going to be much more complicated.
VLroyrenn
08/10/2022, 9:14 PM
prefect deployment build
handles uploading the flow (and apparently the rest of the CWD?) to remote storage. If deployment build
is meant to be used to create a "YAML manifest [...] build artifact", why does it also handle the uploading? If changes are made to the flow script(s), should the manifest be rebuilt (given it's the same step that handles the uploading)? If manifests are meant to be recreated from scratch whenever the flow is updated, doesn't that make manual edits to any parts of the manifest not specified in the CLI args too easy to overwrite (and invite users to just create one-liner shell scripts to not have to re-type these every time)? Or shouldn't it be apply that handles uploading the flow to storage while also notifying the Orion server about the change? Would deployment apply
overwrite the changes in configuration made in the UI, making it preferable that deployment files be the main source of truth for things like scheduling and flow description?
I'm looking at this from the point of view of a user who's looking to (at least initially) deploy Orion as-is, with no Docker images and the flow files on... an SMB share, most likely, or maybe some FTP server. Ideally I would just have the flows be pulled from a local git server (which seems to me like a no-brainer if all the flows are going to be versioned anyway) and use commit IDs instead of manual version numbers, but I would probably run into issues using the experimental fsspec git backend as a remote FS. Maybe these procedures all make sense when working with Docker and K8s, but the documentation mostly skims over the "lifecycle of flow and/or deployment updates", so to speak, so I'm not really sure what to do with these, and deployments are a pretty big step to get to having scheduled execution of flows.
So, to make things short: how is one supposed to go about updating deployments and deployed flows?
Simon Macklin
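For reference, the intended loop in the early 2.0 releases appears to be: rebuild after editing the flow, then apply the regenerated YAML. An untested sketch (flow path, entrypoint, and deployment name are placeholders, and the CLI was still changing at this point):

```shell
prefect deployment build ./flow.py:my_flow -n my-deployment   # build YAML and upload to storage
# ... edit flow.py ...
prefect deployment build ./flow.py:my_flow -n my-deployment   # rebuild and re-upload
prefect deployment apply my_flow-deployment.yaml              # register/update on the Orion server
```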
08/10/2022, 9:18 PM
Tomohiro Nakagawa
08/10/2022, 10:22 PM
redis-cli
in prefect task. My flow runs in ECS cluster in prefect 1.2.0.
I tried ShellTask and I got Command failed with exit code 127. The command I'm trying to run is something like this:
redis-cli -h myhost --scan --pattern "mypattern" | xargs redis-cli -h myhost unlink
Is it possible to run this command using ShellTask? I tried to use `RedisExecute` as well, but it complains about Secret, and I do not think my redis has a password… Any help would be appreciated.
Ilya Galperin
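For reference, exit code 127 from a shell means "command not found", which in an ECS task usually points at redis-cli not being installed in the image rather than a problem with the pipeline itself. An untested sketch of wiring the command through ShellTask on the Prefect 1.x API (host and pattern are placeholders):

```python
from prefect import Flow
from prefect.tasks.shell import ShellTask

# Untested sketch (Prefect 1.x); requires redis-cli on the image's PATH.
shell = ShellTask(return_all=True, log_stderr=True)

with Flow("redis-unlink") as flow:
    shell(
        command='redis-cli -h myhost --scan --pattern "mypattern" '
                "| xargs redis-cli -h myhost unlink"
    )
```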
08/10/2022, 11:03 PM
schedule and parameters? Right now, it seems like we can only adjust these manually inside the deployment files or the Orion UI.
This was available using the prefect.schedules module in 1.0. Is there an equivalent for 2.0 that lets us maintain our parameters/scheduled run times in code, and not just in the UI or manually after the deployment manifest is built?
Thank you!
Jacqueline Riley Garrahan
08/10/2022, 11:48 PM
Brad
08/11/2022, 12:21 AM
Andreas Nigg
08/11/2022, 7:10 AM
Oscar Björhn
08/11/2022, 8:30 AM
Satyasheel
08/11/2022, 9:22 AM
KubernetesRun
to run the flow as k8s jobs. While running the flow we are getting the below error:
Pod prefect-job-a75387b8-lc6tn failed.
Container 'data-platform-bq-freshness-prod' state: terminated
Exit Code:: 127
Reason: Error
With exit code 127 it seems the container is missing some file or dir, but when I checked the container it has all the required files and dirs.
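For reference, exit code 127 is the shell's "command not found" status rather than a missing-data-file error, so it usually means the command the job container tries to run is not on the image's PATH (or the entrypoint itself is wrong). A self-contained demonstration:

```shell
# Invoking a nonexistent command makes the shell return exit code 127.
definitely-not-a-real-command 2>/dev/null
echo "exit code: $?"
```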
My current k8s job template looks as below:
apiVersion: batch/v1
kind: Job
metadata:
  namespace: dpa-flows
spec:
  ttlSecondsAfterFinished: 604800 # One week
  template:
    spec:
      serviceAccountName: flows-default
      containers:
        - name: data-platform-flow
          env:
            - name: PREFECT__LOGGING__LEVEL
              value: DEBUG
and Dockerfile as below:
FROM prefecthq/prefect:1.2.1-python3.9
# In order to install pyformance from git, poetry needs git installed on the image,
# or use poetry 1.2 beta. See: <https://github.com/python-poetry/poetry/pull/2105>
ARG POETRY_VERSION="1.1.14"
RUN apt-get -y update && apt-get -y install git && pip install "poetry==$POETRY_VERSION"
ENV PATH=/root/.poetry/bin:$PATH
# This copy allows caching of dependencies.
# See: <https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker>
WORKDIR /code
COPY data_freshness/poetry.lock bq_data_freshness/pyproject.toml /code/
RUN POETRY_VIRTUALENVS_CREATE=false poetry install --no-ansi --no-interaction
ADD data_freshness /data_freshness/
# The ML team requires that the container runs as a non-root user.
RUN useradd nonrootuser
USER nonrootuser
Any help would be really great 🙂
Alexander Belikov
08/11/2022, 10:20 AM
prefect orion start
results in
> prefect orion start
 ___ ___ ___ ___ ___ ___ _____    ___  ___ ___ ___  _  _
| _ \ _ \ __| __| __/ __|_   _|  / _ \| _ \_ _/ _ \| \| |
|  _/   / _|| _|| _| (__  | |   | (_) |   /| | (_) | .` |
|_| |_|_\___|_| |___\___| |_|    \___/|_|_\___\___/|_|\_|
Configure Prefect to communicate with the server with:
prefect config set PREFECT_API_URL=<http://127.0.0.1:4200/api>
View the API reference documentation at <http://127.0.0.1:4200/docs>
Check out the dashboard at <http://127.0.0.1:4200>
INFO: Started server process [626768]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on <http://127.0.0.1:4200> (Press CTRL+C to quit)
INFO: 10.8.0.3:0 - "OPTIONS /graphql HTTP/1.1" 200 OK
INFO: 10.8.0.3:0 - "POST /graphql HTTP/1.1" 405 Method Not Allowed
INFO: 10.8.0.3:0 - "POST /graphql HTTP/1.1" 405 Method Not Allowed
I suppose some bits of previous config are causing this interference
Do you have any suggestions? Is there a way to wipe the previous config?
Oscar Björhn
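For reference, the 405s are suggestive: the Orion (2.0) API has no /graphql endpoint, so those POST /graphql requests likely come from a 1.x-era client or UI still pointed at this address. The 2.0 CLI can also inspect and clear saved settings; an untested sketch:

```shell
prefect config view --show-sources    # show each setting and where it comes from
prefect config unset PREFECT_API_URL  # clear a single leftover setting
```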
08/11/2022, 10:28 AM
pydantic.error_wrappers.ValidationError: 1 validation error for Deployment
manifest_path
none is not an allowed value (type=type_error.none.not_allowed)
Yusuf Kamorudeen
08/11/2022, 10:48 AM
Xavier Witdouck
08/11/2022, 12:40 PM
Hamza Naanani
08/11/2022, 12:44 PM
$ prefect cloud login --key xxx_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
But I get the following error: Unable to authenticate with Prefect Cloud. Please ensure your credentials are correct.
I don't believe the problem is the credentials, as I'm just pasting the key I get when I create a new API key in the cloud console; I also tried creating different keys but I still get the same issue.
I'm on a Linux machine behind a corporate firewall, and I'm setting the variables SSL_CERT_FILE, HTTPS_PROXY, and REQUESTS_CA_BUNDLE
to the correct values.
Can someone help me with this?
Rajvir Jhawar
08/11/2022, 12:58 PM
Mohamed Alaa
08/11/2022, 12:59 PM
Oscar Björhn
08/11/2022, 1:50 PM
Alex Shea
08/11/2022, 2:36 PM
User \"system:serviceaccount:prefect:prefect-agent\" cannot create resource \"jobs\" in API group \"batch\" in the namespace \"prefect\"
As you might be able to tell, I have the prefect agent running in the prefect namespace. I have a service account for it called prefect-agent. I have used the role and rolebinding from the Prefect documentation, but updated the namespace on both and the service account name from default to prefect-agent.
Nelson Griffiths
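For reference, that error means the agent's service account is not bound to a Role permitting job creation in the batch API group within that namespace. A hedged sketch of the RBAC objects (object names are assumptions; the namespace and service account follow the setup described in this thread, so verify against the Prefect docs):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prefect-agent
  namespace: prefect
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prefect-agent
  namespace: prefect
subjects:
  - kind: ServiceAccount
    name: prefect-agent
    namespace: prefect
roleRef:
  kind: Role
  name: prefect-agent
  apiGroup: rbac.authorization.k8s.io
```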
08/11/2022, 2:56 PM
James Brady
08/11/2022, 3:31 PM
prefect agent kubernetes install
but that seems to be v1.x only…
Deceivious
08/11/2022, 3:36 PM
prefect deployment apply ...
and prefect deployment build ...
CLI commands to deploy and build flows. Is there a Python equivalent to these commands?
Sam Garvis
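For reference, a hedged sketch of a Python-side equivalent. `Deployment.build_from_flow` appeared in later 2.x releases, so the exact names should be checked against the docs for the installed version; the flow module, schedule, and parameter values are placeholders:

```python
from prefect.deployments import Deployment
from prefect.orion.schemas.schedules import CronSchedule
from my_flows import my_flow  # hypothetical flow module

# Rough equivalent of `prefect deployment build` + `prefect deployment apply`
deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="my-deployment",
    schedule=CronSchedule(cron="0 9 * * *"),
    parameters={"env": "dev"},
)
deployment.apply()
```

This also keeps the schedule and parameters in code rather than only in the UI or the generated YAML.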
08/11/2022, 4:05 PM
Alexander Belikov
08/11/2022, 4:28 PM
a) prefect deployment build ...
and b) prefect deployment apply ...
prefect deployment run ...
runs successfully.
I've created the work queue and the agent is running, but scheduled flows are not being executed; instead, at time X they change their state from Scheduled to Pending.
The work queue is empty (!). What could be the problem?
Chandrashekar Althati
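For reference, a common cause is the agent polling a different queue than the one the deployment targets, or a queue filter/concurrency limit excluding the runs. A few untested checks with the 2.0 CLI (the queue identifier is a placeholder):

```shell
prefect work-queue ls                  # confirm the queue exists and is not paused
prefect work-queue preview <queue-id>  # should list the upcoming scheduled runs
prefect agent start <queue-id>         # the agent must poll this exact queue
```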
08/11/2022, 4:46 PM
https://infinitelambda.com/wp-content/uploads/2021/02/prefect-steps-2048x643.png
Mars
08/11/2022, 4:58 PM
prefect deployment build
gives me a deployment YAML, but is no longer producing the manifest JSON file; only the deployment YAML comes out. How can I debug this?
Here is the build command used:
prefect deployment build myflow.py:myflow -n myflow-deployment -t dev -ib kubernetes-job/k8dev -sb remote-file-system/dev