# prefect-community
r
Hi, having some problems getting around CORS issues with Prefect 2.0 when making it accessible to another host on the LAN while running in a Docker container. Details in thread per @Kevin Kho
k
Hi @Ryan R, I don’t know off the top of my head for this one. The stuff you tried seems reasonable to me. Gonna need to ask colleagues tomorrow, but I imagine this should be a supported pattern (running Orion out of a container). The info in this thread is really good, but it’s a bit too long for the main channel. Could you just write something like “details in thread” and then paste the details here when you get a chance?
r
@Kevin Kho Absolutely. If you need me to throw together a repo with a basic example, let me know. Message from channel: I am trying to run it from within Docker with something like this:
```yaml
version: '3.8'
services:
  orion:
    ...
    ports:
      - "4200:4200"
    volumes:
      - ../data/prefect/orion.db:/root/.prefect/orion.db
      - ../config/prefect/profile.toml:/root/.prefect/profile.toml
    command: 'prefect orion start --host 0.0.0.0'
```
I am trying to use the `PREFECT_API_URL` variable in `profiles.toml` like:
```toml
# cat ~/.prefect/profiles.toml
active = "default"

[profiles.default]
PREFECT_API_URL = "http://0.0.0.0:4200/api"
PREFECT_ORION_API_HOST = "0.0.0.0"
```
```
$ prefect orion start
Starting...

 ___ ___ ___ ___ ___ ___ _____    ___  ___ ___ ___  _  _
| _ \ _ \ __| __| __/ __|_   _|  / _ \| _ \_ _/ _ \| \| |
|  _/   / _|| _|| _| (__  | |   | (_) |   /| | (_) | .` |
|_| |_|_\___|_| |___\___| |_|    \___/|_|_\___\___/|_|\_|

Configure Prefect to communicate with the server with:

    prefect config set PREFECT_API_URL=http://0.0.0.0:4200/api

Check out the dashboard at http://0.0.0.0:4200
```
Then, when I try to access it from another machine on the LAN in the 192.168.1.x IP space, I get a mix of HTTP hosts, like in this screenshot. My first thought was to bind to the LAN IP, but I cannot set `PREFECT_API_URL = "http://192.168.1.184:4200/api"` because it's running from inside a Docker container that does not have access to bind to the host IP.
```
Check out the dashboard at http://192.168.1.184:4200

00:09:17.564 | ERROR   | uvicorn.error - [Errno 99] error while attempting to bind on address ('192.168.1.184', 4200): cannot assign requested address
Orion stopped!
```
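The Errno 99 above comes from the bind itself: a socket can only bind to an address assigned to a local interface, and the container's network namespace does not include the host's LAN IP. A minimal sketch reproducing the error, using a TEST-NET address (192.0.2.1) that is guaranteed not to be local:

```python
import errno
import socket

# Inside a container, the Docker host's LAN IP (192.168.1.184 in this
# thread) is not a local interface, so binding fails with EADDRNOTAVAIL
# (errno 99 on Linux) -- the same error uvicorn reports. 0.0.0.0 works
# because it means "all interfaces in this network namespace".
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("192.0.2.1", 4200))  # TEST-NET-1 address: never assigned locally
except OSError as e:
    print(e.errno == errno.EADDRNOTAVAIL)  # True on Linux
finally:
    s.close()
```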
I've tried some different variations from https://orion-docs.prefect.io/concepts/settings/ but have not had any luck. I'm hoping there's just an env var somewhere I am missing that can force the API URL. Maybe it needs a load balancer like Traefik to make this setup work? Or possibly this setup is not supported in v2.0 yet? Thank you in advance for any pointers on best practices for 2.0!
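One thing worth separating out (a sketch, not a confirmed fix for the UI's mixed-host issue): `PREFECT_API_URL` only needs to be a bindable address on the machine actually running the server. On a *client* machine elsewhere on the LAN, it can simply point at the Docker host's IP, matching the `prefect config set` hint in the startup output:

```toml
# ~/.prefect/profiles.toml on the client machine (not in the container).
# 192.168.1.184 is the Docker host's LAN IP from this thread; the server
# container keeps binding 0.0.0.0 and publishes port 4200 via the compose
# port mapping. This does not by itself fix the UI's mixed-host problem.
active = "default"

[profiles.default]
PREFECT_API_URL = "http://192.168.1.184:4200/api"
```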
k
I don't think there should be a need for a demo repo. The attempt is pretty clear.
r
k
Thanks for moving stuff!
r
NP. Thanks for the quick response. As a longtime Airflow user, I like the no-DAG direction you guys are going in with 2.0
👍 1
k
Thank you!
I looked into this a bit. Running Orion in Docker and then connecting with localhost on the same machine also has some weirdness. See Alexander’s issue above. So I feel we are a bit away from supporting this, but I will follow up tomorrow.
r
Okay, this is a really hacky workaround, but if you run in swarm mode and give it a host port on 4200, your 127.0.0.1:4200 actually routes back to the service. It feels like you guys are on the right path with 2.0; the code looks MUCH cleaner! Just another unrelated $.02, since you're in a pre-release rewrite on 2.0: the native typings in 3.10 are a lot cleaner, and I believe Sebastián supports them in FastAPI already, so you don't have to use all the `from typing import` statements. May or may not be on your radar; feel free to disregard if it's not a fit for you guys. Thanks again for following up! Maybe there's a more graceful solution to this on the horizon :)
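For context on the typing point: this refers to PEP 585 (built-in generics, 3.9+) and PEP 604 (`X | None` unions, 3.10+). A small illustration with hypothetical function names:

```python
from __future__ import annotations  # lets 3.7+ parse the new syntax in annotations

from typing import Dict, List, Optional  # pre-3.9 style


def count_runs_old(rows: List[Dict[str, int]], default: Optional[int] = None) -> int:
    """Old-style annotations built from `typing` imports."""
    return sum(r.get("n", default or 0) for r in rows)


# The same signature with native typings (PEP 585 / PEP 604) -- no typing
# imports needed at all on 3.9/3.10+:
def count_runs(rows: list[dict[str, int]], default: int | None = None) -> int:
    return sum(r.get("n", default or 0) for r in rows)


print(count_runs([{"n": 1}, {"n": 2}]))  # 3
```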
k
Will forward to core engineers, though our code has to be 3.7-compatible as well for the most part.
So this might be possible in the future, but this issue is a current roadblock. It's expected to be fixed in the next release or the one after.
r
Sounds good. Thanks Kevin, I'll watch that issue and check back in a bit. Best of luck to you guys! Hope 2.0 is a big success
k
Thank you 🙂