chicago-joe 08/10/2022, 6:57 PM
'str' object has no attribute 'get'
YD 08/10/2022, 7:08 PM
Mateo Merlo 08/10/2022, 7:21 PM
Tony Yun 08/10/2022, 7:44 PM
pip install -e my_module
and it works when doing `python flow.py`; but when I register the flow, it always fails with
ModuleNotFoundError: No module named 'my_module'
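A common cause of this: the registered flow runs in the agent's environment, where the `pip install -e` from another shell or virtualenv isn't visible. A stdlib-only sketch for checking which interpreter is running and whether it can see the package (`my_module` is the name from the message above):

```python
import importlib.util
import sys

# Which interpreter is actually executing this flow run?
print(sys.executable)

# find_spec returns None (rather than raising) when the top-level
# package is not importable from this environment.
spec = importlib.util.find_spec("my_module")
print("my_module visible:", spec is not None)
```

If this prints a different interpreter path than the one where the editable install was done, that would explain the `ModuleNotFoundError`.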
Neil Natarajan 08/10/2022, 8:35 PM
This flow is quite simple but in reality my subflow is going to be much more complicated
```python
@flow(name="subflow")
def sub_flow():
    # ... some api call ...
    return response

@flow(name="parent flow")
def parent_flow():
    for i in range(3):
        response = sub_flow()
```
VLroyrenn 08/10/2022, 9:14 PM
`prefect deployment build` handles uploading the flow (and apparently the rest of the CWD?) to remote storage. If `prefect deployment build` is meant to be used to create a "YAML manifest [...] build artifact", why does it also handle the uploading? If changes are made to the flow script(s), should the manifest be rebuilt (given it's the same step that handles the uploading)? If manifests are meant to be recreated from scratch whenever the flow is updated, doesn't that make manual edits to any parts of the manifest not specified in the CLI args too easy to overwrite (and invite users to just create one-liner shell scripts to avoid re-typing these every time)? Or shouldn't it be `apply` that handles uploading the flow to storage while also notifying the Orion server about the change? Would `apply` overwrite the changes in configuration made in the UI, making it preferable that deployment files be the main source of truth for things like scheduling and flow description?

I'm looking at this from the point of view of a user who's looking to (at least initially) deploy Orion as-is, with no Docker images and the flow files on an SMB share, most likely, or maybe some FTP server. Ideally I would just have the flows be pulled from a local git server (which seems to me like a no-brainer if all the flows are going to be versioned anyway) and use commit IDs instead of manual version numbers, but I would probably run into issues using the experimental fsspec git backend as a remote FS. Maybe these procedures all make sense when working with Docker and K8s, but the documentation mostly skims over the "lifecycle of flow and/or deployment updates", so to speak, so I'm not really sure what to do with these, and deployments are a pretty big step to get to having scheduled execution of flows.

So, to make things short: how is one supposed to go about updating deployments and deployed flows?
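For reference, the lifecycle being asked about, as a hedged sketch of the 2.0-era CLI usage (the flow path, deployment name, and storage block name here are made up for illustration):

```shell
# 1. build: writes the deployment YAML locally and uploads the flow
#    directory to the configured storage block.
prefect deployment build flows/etl.py:etl_flow -n etl -sb remote-file-system/dev

# 2. apply: registers/updates the deployment with the Orion API.
prefect deployment apply etl_flow-deployment.yaml

# After editing the flow script, re-running build (which re-uploads)
# and then apply is the pattern implied by the docs of the time.
```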
Simon Macklin 08/10/2022, 9:18 PM
Tomohiro Nakagawa 08/10/2022, 10:22 PM
in a Prefect task. My flow runs in an ECS cluster on Prefect 1.2.0. I tried ShellTask and I got
Command failed with exit code 127
The command I'm trying to run is something like this:
redis-cli -h myhost --scan --pattern "mypattern" | xargs redis-cli -h myhost unlink
Is it possible to run this command using ShellTask? I tried to use `RedisExecute` as well but it complains about Secret, and I do not think my redis has a password… Any help would be appreciated
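Worth noting: exit code 127 from a shell means the command was not found, so the ECS image most likely doesn't have `redis-cli` installed. A stdlib sketch showing how that exit code arises (the binary name here is deliberately fake):

```python
import subprocess

# Running a nonexistent binary through the shell yields exit code 127,
# the same code ShellTask reported on the ECS image.
result = subprocess.run(
    "redis-cli-definitely-missing --version",
    shell=True, capture_output=True, text=True,
)
print(result.returncode)  # 127: the shell could not find the command
```

If that's the cause, installing `redis-cli` into the flow's image should change the error (or make the command succeed).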
Ilya Galperin 08/10/2022, 11:03 PM
? Right now, it seems like we can only adjust these manually inside the deployment files or the Orion UI. This was available using modules in 1.0. Is there an equivalent for 2.0 that lets us maintain our parameters/scheduled run times in code, and not just in the UI or manually after the deployment manifest is built? Thank you!
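One place parameters and schedules do live as text is the generated deployment YAML itself, which can be kept in version control and edited before `apply`. A hedged fragment, assuming 2.0-era field names (the values are made up):

```yaml
# deployment.yaml (fragment) -- assumed field names, illustrative values
name: my-deployment
parameters:
  source_table: raw_events   # hypothetical flow parameter
schedule:
  cron: "0 6 * * *"
  timezone: "UTC"
```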
Jacqueline Riley Garrahan 08/10/2022, 11:48 PM
Brad 08/11/2022, 12:21 AM
Andreas Nigg 08/11/2022, 7:10 AM
Oscar Björhn 08/11/2022, 8:30 AM
Satyasheel 08/11/2022, 9:22 AM
to run the flow as a k8s job. While running the flow we are getting the below error:
```
Pod prefect-job-a75387b8-lc6tn failed.
	Container 'data-platform-bq-freshness-prod' state: terminated
		Exit Code: 127
		Reason: Error
```
With exit code 127 it seems the container is missing some file or dir, but when I checked the container it has all the required files and dirs. My current k8s job template looks like below:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  namespace: dpa-flows
spec:
  ttlSecondsAfterFinished: 604800  # One week
  template:
    spec:
      serviceAccountName: flows-default
      containers:
        - name: data-platform-flow
          env:
            - name: PREFECT__LOGGING__LEVEL
              value: DEBUG
```
and Dockerfile as below:
```dockerfile
FROM prefecthq/prefect:1.2.1-python3.9
# in order to install pyformance from git, poetry needs git installed on the image.
# Or use poetry 1.2 beta. See: https://github.com/python-poetry/poetry/pull/2105
ARG POETRY_VERSION="1.1.14"
RUN apt-get -y update && apt-get -y install git && pip install "poetry==$POETRY_VERSION"
ENV PATH=/root/.poetry/bin:$PATH
# this copy will allow caching dependencies.
# see https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker
WORKDIR /code
COPY data_freshness/poetry.lock bq_data_freshness/pyproject.toml /code/
RUN POETRY_VIRTUALENVS_CREATE=false poetry install --no-ansi --no-interaction
ADD data_freshness /data_freshness/
# ml team have a requirement that docker will run as non root
RUN useradd nonrootuser
USER nonrootuser
```
Any help would be really great 🙂
Alexander Belikov 08/11/2022, 10:20 AM
```
> prefect orion start
[PREFECT ORION ASCII banner]

Configure Prefect to communicate with the server with:
    prefect config set PREFECT_API_URL=http://127.0.0.1:4200/api
View the API reference documentation at http://127.0.0.1:4200/docs
Check out the dashboard at http://127.0.0.1:4200
INFO:     Started server process
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:4200 (Press CTRL+C to quit)
INFO:     10.8.0.3:0 - "OPTIONS /graphql HTTP/1.1" 200 OK
INFO:     10.8.0.3:0 - "POST /graphql HTTP/1.1" 405 Method Not Allowed
INFO:     10.8.0.3:0 - "POST /graphql HTTP/1.1" 405 Method Not Allowed
```
I suppose some bits of previous config are causing this interference. Do you have any suggestions? Is there a way to wipe the previous config?
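(The `/graphql` requests suggest something from a 1.x setup is still pointed at this server: Orion's API is REST, while GraphQL was the 1.x API.) Not an official reset procedure, but the usual places to look, assuming the 2.0-era CLI; back up before deleting anything:

```shell
prefect config view --show-sources    # see where each setting comes from
prefect config unset PREFECT_API_URL  # drop a single stale setting
# Profiles and the local database live under ~/.prefect; removing that
# directory resets everything, including the local Orion DB:
# rm -rf ~/.prefect
```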
Oscar Björhn 08/11/2022, 10:28 AM
pydantic.error_wrappers.ValidationError: 1 validation error for Deployment manifest_path none is not an allowed value (type=type_error.none.not_allowed)
Yusuf Kamorudeen 08/11/2022, 10:48 AM
Xavier Witdouck 08/11/2022, 12:40 PM
Hamza Naanani 08/11/2022, 12:44 PM
$ prefect cloud login --key xxx_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
But I get the following error:
Unable to authenticate with Prefect Cloud. Please ensure your credentials are correct.
I don't believe the problem is from the credentials, as I'm just pasting the key I get when I create a new API key on the cloud console; I also tried creating different keys but I still get the same issue. I'm on a Linux machine behind a corporate firewall, and I'm setting the variables
SSL_CERT_FILE HTTPS_PROXY REQUESTS_CA_BUNDLE
to the correct values. Can someone help me with this?
Rajvir Jhawar 08/11/2022, 12:58 PM
Mohamed Alaa 08/11/2022, 12:59 PM
Oscar Björhn 08/11/2022, 1:50 PM
Alex Shea 08/11/2022, 2:36 PM
As you might be able to tell, I have the Prefect agent running in the prefect namespace. I have a service account for it called prefect-agent. I have used the Role and RoleBinding from the Prefect documentation, but updated the namespace on both and the service account name from default to prefect-agent.
User \"system:serviceaccount:prefect:prefect-agent\" cannot create resource \"jobs\" in API group \"batch\" in the namespace \"prefect\"
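That error means RBAC isn't granting the `prefect-agent` service account the `create jobs` permission in the `prefect` namespace. A sketch of the Role/RoleBinding shape the error points at, assuming the namespace and service account names from the message (verb list here is a plausible guess, not taken from Prefect's docs):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prefect-agent
  namespace: prefect
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prefect-agent
  namespace: prefect
subjects:
  - kind: ServiceAccount
    name: prefect-agent
    namespace: prefect
roleRef:
  kind: Role
  name: prefect-agent
  apiGroup: rbac.authorization.k8s.io
```

`kubectl auth can-i create jobs --as=system:serviceaccount:prefect:prefect-agent -n prefect` is a quick way to verify the binding took effect.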
Nelson Griffiths 08/11/2022, 2:56 PM
James Brady 08/11/2022, 3:31 PM
prefect agent kubernetes install
but that seems to be v1.x only…
Deceivious 08/11/2022, 3:36 PM
prefect deployment build ...
prefect deployment apply ...
CLI commands to deploy and build flows. Is there a python equivalent to these commands?
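At the time there was a Python API mirroring those commands; a hedged sketch, assuming the 2.0-era import path and class names (the flow and deployment name are made up, and `apply()` needs a reachable Orion/Cloud API):

```python
from prefect import flow
from prefect.deployments import Deployment  # 2.0-era import path

@flow
def my_flow():
    ...

# Roughly `prefect deployment build`: construct the deployment object.
deployment = Deployment.build_from_flow(
    flow=my_flow,
    name="my-deployment",  # hypothetical name
)

# Roughly `prefect deployment apply`: register it with the API.
deployment.apply()
```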
Sam Garvis 08/11/2022, 4:05 PM
Alexander Belikov 08/11/2022, 4:28 PM
prefect deployment build ...
prefect deployment apply ...
prefect deployment run ...
runs successfully. I've created the work queue and the agent is running, but scheduled flows are not being executed; instead at time X they change their state from … The work queue is empty (!). What could be the problem?
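A frequent cause in that era was a mismatch between the work queue's filters (tags, deployments) and the deployment, or the agent polling a different queue. Some hedged checks, assuming the 2.0-era CLI:

```shell
prefect work-queue ls                  # does the queue exist, and is it paused?
prefect work-queue preview <queue-id>  # does it actually match the scheduled runs?
prefect agent start <queue-id>         # is the agent pointed at that same queue?
```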
Chandrashekar Althati 08/11/2022, 4:46 PM
Mars 08/11/2022, 4:58 PM
prefect deployment build
gives me a deployment YAML, but is no longer producing the manifest JSON file. It's only producing the deployment YAML. How can I debug this? Here is the build command used:
prefect deployment build myflow.py:myflow -n myflow-deployment -t dev -ib kubernetes-job/k8dev -sb remote-file-system/dev