# marvin-in-the-wild
Thy Bui: Hi Marvin, I'm working on scheduling my deployment with source from a private git repo. My deployment is stuck pending at the "cloned repository" step and I don't know how to troubleshoot it. Anyone able to shed some light? I can elaborate in a DM.
Nate: hi @Thy Bui, feel free to elaborate in this thread with any non-sensitive details. can you show your `from_source` call or the `prefect.yaml` section where you provide credentials for your private repo? i.e. how are you providing those credentials?
Thy Bui: I used a Secret block to save my PAT, which has content (read and write) access to the private repo. @Nate
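For reference, that pattern might look roughly like this, a minimal sketch assuming Prefect's `Secret` block and a GitHub PAT (the block name and token value are placeholders):

```python
from prefect.blocks.system import Secret

# Save the PAT once (the value here is a placeholder; this can also be done in the UI).
Secret(value="ghp_xxxxxxxxxxxxxxxx").save("github-pat", overwrite=True)

# Anything that later needs the token loads the block and reads its value.
token = Secret.load("github-pat").get()
```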
Nate: hmm, that looks pretty much correct as far as the code goes. can you share the traceback you're getting (ideally text, not a photo)?
Thy Bui: I don't really have a traceback; it just got stuck for 30 minutes until I logged off my computer. Below is the traceback from the network problem after I closed my computer.
Nate: ah, so you have to use a worker (`prefect worker start --pool my-pool`) instead of `prefect agent start` (the latter is deprecated). `from_source` relies on the worker's ability to `pull` flow code (rather than using a storage block like agents used to). we should make that error message clearer though
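A minimal sketch of that worker-based setup, assuming the Secret block from above plus Prefect's `GitRepository` storage (repo URL, entrypoint, and pool name are illustrative):

```python
from prefect import flow
from prefect.blocks.system import Secret
from prefect.runner.storage import GitRepository

if __name__ == "__main__":
    flow.from_source(
        # A worker started with `prefect worker start --pool my-pool` clones this
        # repo when a run is scheduled, so no storage block is needed.
        source=GitRepository(
            url="https://github.com/your-org/your-private-repo.git",
            credentials={"access_token": Secret.load("github-pat")},
        ),
        entrypoint="flows/etl.py:my_flow",
    ).deploy(
        name="my-deployment",
        work_pool_name="my-pool",
    )
```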
Thy Bui: Gotcha. I was super confused about agents vs. workers. Let me read up on it.
Nate: okay! let me know if you have any confusion when trying workers
Thy Bui: (image attachment: 1000061276.jpg)
Nate: aha, so if you're pointed at a push work pool, that means you don't have to run a worker at all! we'd run the worker for you when it's needed. you're intending to use a push pool here, right? what type is your work pool?
Thy Bui: I have a `prefect:managed` pool
Nate: cool, then you should not have to worry about a worker or an agent here. you can just run the script that has `flow.from_source().deploy()` in it, and then you should be ready to schedule runs from the UI or API. a worker that we run will pick up and execute your scheduled run
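A minimal sketch of that script for a `prefect:managed` work pool (repo URL, entrypoint, and pool name are placeholders); since Prefect runs the infrastructure, nothing else needs to be started locally:

```python
from prefect import flow

if __name__ == "__main__":
    flow.from_source(
        source="https://github.com/your-org/your-private-repo.git",  # placeholder repo
        entrypoint="flows/etl.py:my_flow",                            # placeholder entrypoint
    ).deploy(
        name="managed-etl",
        work_pool_name="my-managed-pool",  # a prefect:managed work pool
        # Extra pip packages the managed runtime installs before the run starts.
        job_variables={"pip_packages": ["pandas", "prefect-aws", "requests", "pyodbc"]},
    )
```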
Thy Bui: ok, I have this error
Nate: it looks like some dependency is missing. does installing `pyodbc` create some file called `libodbc.so.2`?
Thy Bui: I actually don't know. It's just a package to connect Python to Azure SQL Server, and I configured my code to use it. But I did have `job_variables={"pip_packages": ["pandas", "prefect-aws", "requests", "pyodbc"]}` to add packages in my deploy.
I cannot verify this for you. Is there a workaround way to read and write from Azure SQL Server?
@Nate hi Nate, could you shed some light on this please?
Nate: hi @Thy Bui, if you want to use managed execution instead of push pools (where you could configure your own docker image with whatever deps you need), then you'd need to do something like this:

```python
import os
import subprocess

def install_odbc_drivers():
    # libodbc.so.2 is provided by the system unixODBC packages, which pip
    # cannot install, so fetch them with the OS package manager at runtime.
    if os.name == 'posix':
        # Assuming a Debian-based system for simplicity
        subprocess.check_call(['apt-get', 'update'])
        subprocess.check_call(['apt-get', 'install', '-y', 'unixodbc', 'unixodbc-dev'])
    else:
        raise EnvironmentError("Unsupported OS")

install_odbc_drivers()
```
to install the drivers you apparently need in your flow, on the runtime machine
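For completeness, a hedged sketch of how that helper and `pyodbc` could then be used from the flow; `install_odbc_drivers()` would need to run before `pyodbc` is imported, and the connection string, driver name, and table are placeholders (the SQL Server ODBC driver itself also has to be present, not just unixODBC):

```python
import pyodbc

def read_from_azure_sql(connection_string: str) -> list:
    # connection_string is a placeholder, e.g. something along the lines of
    # "DRIVER={ODBC Driver 18 for SQL Server};SERVER=...;DATABASE=...;UID=...;PWD=..."
    conn = pyodbc.connect(connection_string)
    try:
        cursor = conn.cursor()
        cursor.execute("SELECT TOP 10 * FROM my_table")  # illustrative query
        return cursor.fetchall()
    finally:
        conn.close()
```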
Thy Bui: I rerouted and ran my flows with Docker instead. I was able to push one deployment, but there is a deployment whose runs stay pending and get stuck at "Downloading flow code from storage at '.'". Context: this process downloads a CSV file from an API to local folder A, then opens the CSV file, transforms it, and moves it from folder A to the Archive folder. How can I troubleshoot this? @Nate
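One hedged way to sketch that Docker setup, assuming the image is built so the flow code ships inside it rather than being pulled from local storage ('.') at run time (the flow body, pool name, image tag, and folder paths are all placeholders):

```python
from prefect import flow

@flow(log_prints=True)
def csv_to_archive():
    # Placeholder pipeline: download a CSV from the API into folder A,
    # transform it, then move it from folder A to the Archive folder.
    ...

if __name__ == "__main__":
    csv_to_archive.deploy(
        name="csv-archive",
        work_pool_name="my-docker-pool",         # placeholder Docker work pool
        image="my-registry/csv-archive:latest",  # placeholder image tag
        push=True,                               # push the built image to the registry
    )
```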