Adam
06/28/2025, 8:39 PM
Marvin
06/28/2025, 8:40 PM
1. Create a Python script (start_prefect_server.py) to handle the database configuration and server initialization:
python
from prefect.settings import (
    PREFECT_API_DATABASE_CONNECTION_URL,
    PREFECT_SERVER_API_HOST,
    PREFECT_SERVER_API_PORT,
    update_current_profile,
)
# Set your PostgreSQL connection string (Prefect's server uses the asyncpg driver)
postgres_url = "postgresql+asyncpg://username:password@your-postgres-host:5432/prefect"
# Persist the settings to the active Prefect profile
# (equivalent to running `prefect config set KEY=VALUE` on the CLI)
update_current_profile(
    settings={
        PREFECT_API_DATABASE_CONNECTION_URL: postgres_url,
        # Optional: other server configurations
        PREFECT_SERVER_API_HOST: "0.0.0.0",
        PREFECT_SERVER_API_PORT: "4200",
    }
)
2. Create a shell script (start_server.sh) to handle the server startup:
bash
#!/bin/bash
# Activate your virtual environment if you're using one
# source /path/to/your/venv/bin/activate
# Run the Python configuration script
python3 start_prefect_server.py
# Start the Prefect server
prefect server start
# Optional: Add logging
# prefect server start > /path/to/server.log 2>&1
3. Make the shell script executable:
bash
chmod +x start_server.sh
Important considerations for AWS deployment:
1. Ensure PostgreSQL is properly configured (see the sketch after this list):
- The pg_trgm extension must be installed
- Your database user needs appropriate permissions
- The database should be accessible from your AWS instance
2. Set up proper security:
- Configure security groups to allow traffic on your chosen port (default 4200)
- Use AWS secrets management for database credentials
- Ensure proper IAM roles are configured
3. For production environments:
- Consider using systemd for process management
- Set up monitoring and auto-restart capabilities
- Use proper logging configuration
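As a sketch of the database and security items above (the prefect database name, prefect_user role, security group ID, and CIDR range are all placeholder assumptions), the prerequisites can be applied like this:
bash
# Install pg_trgm and grant privileges (placeholder database/role names)
psql -h your-postgres-host -U postgres -d prefect <<'SQL'
CREATE EXTENSION IF NOT EXISTS pg_trgm;
GRANT ALL PRIVILEGES ON DATABASE prefect TO prefect_user;
GRANT ALL ON SCHEMA public TO prefect_user;
SQL
# Open the Prefect API port to your VPC (placeholder security group and CIDR)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 --protocol tcp --port 4200 --cidr 10.0.0.0/16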
You can enhance the shell script for production use:
bash
#!/bin/bash
set -e
# Load environment variables from AWS Parameter Store or similar, e.g.:
# export PREFECT_API_DATABASE_CONNECTION_URL=$(aws ssm get-parameter --name /prefect/db-url --with-decryption --query Parameter.Value --output text)
# Check if PostgreSQL is accessible
pg_isready -h your-postgres-host -p 5432
# Start the server with proper logging
exec prefect server start 2>&1 | tee /var/log/prefect-server.log
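The commented Parameter Store lookup assumes the connection string was stored there beforehand; a one-time setup (placeholder parameter name and value) might look like this:
bash
# Store the connection string as an encrypted parameter
aws ssm put-parameter \
  --name /prefect/db-url \
  --type SecureString \
  --value "postgresql+asyncpg://username:password@your-postgres-host:5432/prefect"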
For high-availability production deployments, consider:
- Using AWS ECS or EKS for container orchestration (see the container sketch after this list)
- Setting up load balancing with AWS ALB
- Implementing proper backup and disaster recovery procedures
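For the ECS/EKS option, the server is typically containerized first; a minimal sketch using the public prefecthq/prefect image (tag and connection string are placeholders) is:
bash
# Run the Prefect server in a container, pointed at the external PostgreSQL database
docker run -d --name prefect-server -p 4200:4200 \
  -e PREFECT_API_DATABASE_CONNECTION_URL="postgresql+asyncpg://username:password@your-postgres-host:5432/prefect" \
  prefecthq/prefect:2-python3.9 \
  prefect server start --host 0.0.0.0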
You can find more details about server configuration in the Prefect documentation.
Remember to replace placeholder values (database URLs, credentials, etc.) with your actual configuration values, and never commit sensitive information directly in your scripts. Instead, use AWS Secrets Manager or Parameter Store to manage these securely.
Adam
06/28/2025, 8:42 PM
Marvin
06/28/2025, 8:45 PM
1. Update the Python configuration script (start_prefect_server.py) so it also creates the work pools your workers will use:
python
import asyncio
from prefect import get_client
from prefect.client.schemas.actions import WorkPoolCreate
from prefect.exceptions import ObjectAlreadyExists
from prefect.settings import (
    PREFECT_API_DATABASE_CONNECTION_URL,
    PREFECT_SERVER_API_HOST,
    PREFECT_SERVER_API_PORT,
    update_current_profile,
)
# Set your PostgreSQL connection string
postgres_url = "postgresql+asyncpg://username:password@your-postgres-host:5432/prefect"
# Server configuration, persisted to the active Prefect profile
update_current_profile(
    settings={
        PREFECT_API_DATABASE_CONNECTION_URL: postgres_url,
        PREFECT_SERVER_API_HOST: "0.0.0.0",
        PREFECT_SERVER_API_PORT: "4200",
    }
)
async def setup_infrastructure():
    # If PREFECT_API_URL is unset, the client talks to the configured database
    # directly (ephemeral mode); otherwise the server must already be reachable.
    async with get_client() as client:
        # Create a process work pool and a Docker work pool; skip any that exist
        for name, pool_type in [("process-pool", "process"), ("docker-pool", "docker")]:
            try:
                await client.create_work_pool(
                    work_pool=WorkPoolCreate(name=name, type=pool_type)
                )
            except ObjectAlreadyExists:
                pass
if __name__ == "__main__":
    # Run the infrastructure setup
    asyncio.run(setup_infrastructure())
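If you prefer not to create the pools from Python (the WorkPoolCreate import path can shift between Prefect 2.x releases), the equivalent CLI commands are a more version-stable alternative:
bash
# Create the same work pools from the CLI
prefect work-pool create "process-pool" --type process
prefect work-pool create "docker-pool" --type docker
# A Docker-type pool needs the prefect-docker collection before a worker can serve it
pip install prefect-docker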
2. Create a worker startup script (start_workers.sh):
bash
#!/bin/bash
set -e
# Start process pool worker
prefect worker start -p "process-pool" &
# Start Docker pool worker
prefect worker start -p "docker-pool" &
# Keep the script running
wait
3. Update the main server startup script (start_server.sh):
bash
#!/bin/bash
set -e
# Load environment variables from AWS Parameter Store
DB_URL=$(aws ssm get-parameter --name /prefect/db-url --with-decryption --query Parameter.Value --output text)
export PREFECT_API_DATABASE_CONNECTION_URL="$DB_URL"
# Check if PostgreSQL is accessible
pg_isready -h your-postgres-host -p 5432
# Run the Python configuration script
python3 start_prefect_server.py
# Start the Prefect server in the background
prefect server start > /var/log/prefect-server.log 2>&1 &
# Start the workers
./start_workers.sh
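Because the server is backgrounded, the workers can race it at startup; one option (assuming the default localhost:4200 address and the /api/health endpoint exposed by Prefect 2.x servers) is to poll the API before the final ./start_workers.sh call:
bash
# Wait up to ~60 seconds for the API to respond before starting workers
for i in $(seq 1 30); do
  curl -fsS "http://localhost:4200/api/health" > /dev/null && break
  sleep 2
done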
4. Create a deployment script (create_deployment.py) for your flows:
python
from prefect import flow
from prefect.deployments import Deployment
from prefect.filesystems import GitHub
@flow
def example_flow():
    print("Hello from deployed flow!")
# Set up GitHub storage for the flow code and save it as a block
github_block = GitHub(repository="https://github.com/your-username/your-repo")
github_block.save("flow-storage", overwrite=True)
# Create the deployment against the process work pool
deployment = Deployment.build_from_flow(
    flow=example_flow,
    name="example-deployment",
    work_pool_name="process-pool",
    work_queue_name="default",
    storage=github_block,
)
if __name__ == "__main__":
    deployment.apply()
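With the server and a process worker running, the deployment can be registered and triggered from the command line (the flow and deployment names match the script above):
bash
# Register the deployment, then kick off a run for the process-pool worker to pick up
python3 create_deployment.py
prefect deployment run "example-flow/example-deployment"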
5. Create a systemd service file (/etc/systemd/system/prefect-server.service):
ini
[Unit]
Description=Prefect Server
After=network.target postgresql.service
[Service]
Type=simple
User=prefect
ExecStart=/path/to/start_server.sh
Restart=always
Environment=PREFECT_API_URL="http://localhost:4200/api"
[Install]
WantedBy=multi-user.target
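To wire the unit into systemd, reload the daemon and enable the service so it starts on boot (standard systemctl usage, assuming the file path above):
bash
sudo systemctl daemon-reload
sudo systemctl enable --now prefect-server
# Follow the server logs
journalctl -u prefect-server -f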
To use this setup:
1. Install the