Bryan
02/01/2024, 12:42 AM

Marvin
02/01/2024, 12:43 AM

```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy the dependencies file to the working directory
COPY requirements.txt .

# Install any dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the content of the local src directory to the working directory
COPY src/ .

# Command to run the application
CMD ["python", "./your-daemon-or-script.py"]
```

And here's an accompanying GitHub workflow file `.github/workflows/main.yml` that builds the Docker image using a secret:
```yaml
name: Build and Deploy
on:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: user/appname:latest
          build-args: |
            SOME_SECRET=${{ secrets.SOME_SECRET }}
```
In this example, `SOME_SECRET` is a GitHub secret that you've defined in your repository settings. It's passed to the Docker build process as a build argument and can be used in the Dockerfile by declaring an `ARG` instruction, like so:

```dockerfile
...
# Define an argument to take the secret
ARG SOME_SECRET

# You might use it to set an environment variable
ENV SECRET_ENV=$SOME_SECRET
...
```
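At runtime, application code inside the container can then read whatever the `ENV` line exposed. A minimal sketch (the variable name `SECRET_ENV` matches the Dockerfile above; the helper itself is hypothetical):

```python
import os

def get_secret(name: str = "SECRET_ENV") -> str:
    """Fetch a secret exposed to the container as an environment variable.

    Failing fast with a clear message beats passing None around until
    something breaks far from the cause.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set; was the image built with the build arg?")
    return value
```

Avoid logging the returned value; anything printed ends up in CI logs or container output.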
This way, the secret is not hardcoded in the Dockerfile but is instead injected securely at build time through the CI/CD pipeline. To avoid leaking secrets into the Docker image layers, don't write them to files or echo them unless necessary, and be cautious with how you use and manage secrets in a production environment.

Bryan
02/01/2024, 1:05 AM

Marvin
02/01/2024, 1:06 AM

...the `prefect deployment build` command along with a `prefect.yaml` file.
Here's how you can rephrase your example for Prefect 2.x:

For Prefect 2.x, you would structure your project with a `prefect.yaml` file that defines your deployment. To create or update a deployment based on this configuration file, you use the `prefect deployment build` command.

An example `prefect.yaml` might look something like this:

```yaml
# prefect.yaml
version: 1
flows:
  - name: my-awesome-flow
    path: my_flows/awesome_flow.py
    tags:
      - example
deployments:
  - name: my-awesome-deployment
    flow_name: my-awesome-flow
    schedule:
      type: cron
      value: '0 8 * * *'
    tags:
      - prod
    work_queue_name: example-queue
```

And to deploy your flow with this configuration, you would run:

```bash
prefect deployment build -f prefect.yaml
```

Bryan
02/01/2024, 1:07 AM

Marvin
02/01/2024, 1:08 AM

The `prefect.yaml` file would not only define the flow and deployment but also include specifications for building a Docker image. Here's how your example would be updated to include this step:
```yaml
# prefect.yaml
version: 1
flows:
  - name: my-awesome-flow
    path: my_flows/awesome_flow.py
    storage:
      type: docker
      registry_url: myregistry.io/myproject
      image_name: my-awesome-image
      image_tag: latest
    tags:
      - example
deployments:
  - name: my-awesome-deployment
    flow_name: my-awesome-flow
    schedule:
      type: cron
      value: '0 8 * * *'
    tags:
      - prod
    work_queue_name: example-queue
```
With this `prefect.yaml`, the `storage` block within the `flows` section tells Prefect that you'd like to use Docker storage. It includes the necessary details: the Docker registry URL, image name, and tag.
To build the Docker image and then build the deployment from the flow specifications, you would run the following command:
```bash
prefect deployment build -f prefect.yaml --tag myregistry.io/myproject/my-awesome-image:latest
```
This command instructs Prefect to build a Docker image using the specifications provided and then creates or updates a deployment for `my-awesome-flow` using that image. Remember, for this to work, you need Docker installed and configured on your system, and your user must have permission to interact with Docker.
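As a preflight for that Docker requirement, a script can check that the CLI is present and the daemon answers before attempting a build. A sketch, assuming only that the standard `docker info` command behaves as usual:

```python
import shutil
import subprocess

def docker_available() -> bool:
    """Return True if the docker CLI is on PATH and the daemon responds."""
    if shutil.which("docker") is None:
        return False
    # `docker info` exits non-zero when the daemon is unreachable or
    # the current user lacks permission to talk to it.
    result = subprocess.run(["docker", "info"], capture_output=True)
    return result.returncode == 0
```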
If the infinite improbability drive were marketable as a containerization tool, I might feel a twinge of excitement at this prospect. As it is, I'll just wallow in the joy of successful builds.

Bryan
02/01/2024, 1:12 AM

Marvin
02/01/2024, 1:13 AM

To install the packages from a `requirements.txt` within your Docker image build process, you would generally instruct Docker to install them when building the image. However, since in Prefect 2.0 you're defining your deployment in a `prefect.yaml` file, you would typically handle dependencies outside of this configuration. The actual Dockerfile is the place to specify the Python package installation steps.
Here’s how you would adjust the process:

First, create a Dockerfile that looks like this:

```dockerfile
# Dockerfile
FROM python:3.9-slim

# Set work directory
WORKDIR /usr/src/app

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy project
COPY . .
```
Next, ensure that you have a `requirements.txt` file with all the necessary Python dependencies at the root of your project directory, where your Dockerfile also resides.

Now, update your `prefect.yaml` to reflect the following:

```yaml
# prefect.yaml
version: 1
flows:
  - name: my-awesome-flow
    path: my_flows/awesome_flow.py
    storage:
      type: docker
      dockerfile: path/to/Dockerfile
      registry_url: myregistry.io/myproject
      image_name: my-awesome-image
      image_tag: latest
    env:
      SPECIAL_ENV_VAR: "special-value"
    tags:
      - example
deployments:
  - name: my-awesome-deployment
    flow_name: my-awesome-flow
    schedule:
      type: cron
      value: '0 8 * * *'
    tags:
      - prod
    work_queue_name: example-queue
```
Note that the `storage` block now contains a `dockerfile` entry, which points to the Dockerfile you just created.
Finally, use the same Prefect command as before to build the Docker image and create the deployment:
```bash
prefect deployment build -f prefect.yaml --tag myregistry.io/myproject/my-awesome-image:latest
```

Even if I were to simulate joy at the notion of all these Python packages living harmoniously within a Docker container, I doubt anyone would believe me. So let's just assume it's all wonderfully convenient and get on with it.

Bryan
02/01/2024, 1:14 AM

Marvin
02/01/2024, 1:15 AM
```yaml
# .github/workflows/build-docker-image.yml
name: Build Docker Image
on:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to Docker Registry
        uses: docker/login-action@v1
        with:
          registry: myregistry.io
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./path/to/Dockerfile
          push: true
          tags: myregistry.io/myproject/my-awesome-image:latest
          build-args: |
            SECRET_API_KEY=${{ secrets.SECRET_API_KEY }}
```
In this workflow, a secret named `SECRET_API_KEY` is passed to the Docker build process as a build argument. It's crucial that you handle this secret with care and not expose it in any logs or output. In your Dockerfile, you use the `ARG` instruction to receive the build-time variable:

```dockerfile
# Dockerfile
FROM python:3.9-slim

# Set work directory
WORKDIR /usr/src/app

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Receive GitHub Secret as a build argument
ARG SECRET_API_KEY

# Optionally set the received secret as an environment variable
ENV SECRET_API_KEY_ENV=${SECRET_API_KEY}

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy project
COPY . .
```
In this Dockerfile, the secret is optionally set as an environment variable within the container using the `ENV` instruction. Keep in mind that setting secrets as environment variables this way can expose them to anyone with access to the Docker image, since `ENV` values are baked into the image layers. Always evaluate the risks and apply best practices for secret management according to the sensitivity of your data.
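One mitigation on the logging side is to scrub known secret values before anything reaches stdout. A small sketch (the helper is illustrative, not a Prefect or Docker feature):

```python
def redact(text: str, secrets: list) -> str:
    """Replace any occurrence of a known secret value with a placeholder.

    Run log lines through this before printing so CI output never
    contains the raw secret.
    """
    for secret in secrets:
        if secret:  # skip empty strings, which would corrupt the text
            text = text.replace(secret, "***")
    return text
```

For example, `redact("token=abc123", ["abc123"])` returns `"token=***"`.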
Remember to replace `myregistry.io/myproject/my-awesome-image:latest` with the appropriate details for your Docker registry, and configure `.github/workflows/build-docker-image.yml` with the necessary GitHub Secrets.

And with that, your containers shall carry the heavy burden of secrets within this inevitably indifferent cosmos. Or at least until someone runs `echo $SECRET_API_KEY_ENV`.

Bryan
02/01/2024, 1:30 AM

Marvin
02/01/2024, 1:31 AM
```yaml
name: Deploy with Prefect
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install Prefect
        run: |
          pip install prefect
      - name: Log in to Prefect Cloud
        env:
          PREFECT__CLOUD__AUTH_TOKEN: ${{ secrets.PREFECT__CLOUD__AUTH_TOKEN }}
        run: prefect cloud login --key ${{ secrets.PREFECT__CLOUD__AUTH_TOKEN }}
      - name: Deploy to Prefect
        run: |
          prefect deploy --json path-to-your-deployment-file.json
        env:
          PREFECT__CLOUD__API_KEY: ${{ secrets.PREFECT__CLOUD__API_KEY }}
```
Here’s what’s happening in the modified workflow:
1. The "Checkout code" step pulls the latest code from your main branch during the workflow execution.
2. The "Set up Python" step ensures that Python is installed in the runner environment. You can specify the version of Python that you want to use.
3. The "Install Prefect" step installs the Prefect library, which is necessary to interact with Prefect Cloud and deploy your flows.
4. The "Log in to Prefect Cloud" step uses the Prefect CLI to authenticate with Prefect Cloud by providing an authentication token, which you store as a secret in your GitHub repository settings (`PREFECT__CLOUD__AUTH_TOKEN`).
5. Finally, the "Deploy to Prefect" step deploys your flow to Prefect Cloud using the Prefect CLI. You will need to specify the path to your deployment file (`path-to-your-deployment-file.json`), a JSON file describing the flow's deployment configuration. The `PREFECT__CLOUD__API_KEY` secret is used here for interactions that require API access.
This setup assumes that you have a Prefect deployment JSON file in your repository and that it's referenced correctly in the `prefect deploy` command. Remember to replace `path-to-your-deployment-file.json` with the actual path to your deployment JSON file in your repository.
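Before the workflow hands that file to `prefect deploy`, a quick sanity check can catch a wrong path or malformed JSON early. A sketch; the required key is an assumption, so adjust it to whatever your deployment file actually contains:

```python
import json

def load_deployment_spec(path: str, required: tuple = ("name",)) -> dict:
    """Load a deployment JSON file and verify it has the expected keys."""
    with open(path) as f:
        spec = json.load(f)  # raises ValueError on malformed JSON
    missing = [key for key in required if key not in spec]
    if missing:
        raise ValueError(f"{path} is missing keys: {missing}")
    return spec
```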
Also, note that `PREFECT__CLOUD__API_KEY` is a placeholder; the environment variable name depends on the credentials Prefect actually expects for authorization. Replace it with the appropriate secret stored in your GitHub repository settings.

Bryan
02/01/2024, 1:34 AM

Marvin
02/01/2024, 1:35 AM

You can use the `Docker` storage block with build arguments to include the secret. Here is how you can modify the GitHub Actions workflow to achieve this:
```yaml
name: Deploy with Prefect and GitHub Secrets
on:
  push:
    branches:
      - main
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.x'
      - name: Install Prefect
        run: |
          pip install prefect
      - name: Log in to Prefect Cloud
        env:
          PREFECT__CLOUD__AUTH_TOKEN: ${{ secrets.PREFECT__CLOUD__AUTH_TOKEN }}
        run: prefect cloud login --key ${{ secrets.PREFECT__CLOUD__AUTH_TOKEN }}
      - name: Deploy to Prefect with GitHub Secret
        run: |
          prefect deployment build ./path-to-your-flow-directory/ --name my-flow \
            --tag dev \
            --output ./deployment-spec.yaml
          # Use the GitHub secret as a build arg for the Docker storage
          prefect storage docker set -p ./path-to-your-flow-directory/Dockerfile \
            -t your-repo/your-image-name:latest \
            --build-arg SOME_SECRET=${{ secrets.SOME_SECRET }} \
            --override
          prefect deployment apply ./deployment-spec.yaml
        env:
          PREFECT__CLOUD__API_KEY: ${{ secrets.PREFECT__CLOUD__API_KEY }}
```
In this modified example:
1. The "Set up Python" and "Install Prefect" steps ensure that the Python and Prefect CLI are available in the runner.
2. The "Log in to Prefect Cloud" step logs in to Prefect Cloud using an auth token.
3. The "Deploy to Prefect with GitHub Secret" step does several things:
- It builds a deployment spec file from the flow definition in the specified directory.
- It then uses the Prefect CLI to configure Docker storage with a build argument that contains a secret (`SOME_SECRET`). This secret is passed to the Docker build process without being hard-coded in the Dockerfile by using the `--build-arg` option.
- The `--override` flag allows you to replace the storage configuration in your deployment spec file with this new configuration that includes the GitHub secret.
- Applying the deployment spec file with `prefect deployment apply ./deployment-spec.yaml` actually performs the deployment to Prefect Cloud, including the Docker image build process with the secret.
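The net effect of `--build-arg` is the same as assembling a `docker build` invocation by hand. A sketch of the equivalent command construction (a hypothetical helper, not part of the Prefect CLI):

```python
def docker_build_cmd(image: str, dockerfile: str, build_args: dict) -> list:
    """Assemble a `docker build` argv that injects secrets as --build-arg flags."""
    cmd = ["docker", "build", "-f", dockerfile, "-t", image]
    for key, value in build_args.items():
        # Each secret rides in as KEY=value; it never appears in the Dockerfile.
        cmd += ["--build-arg", f"{key}={value}"]
    return cmd + ["."]
```

Passing the resulting list to `subprocess.run` avoids shell quoting issues with secret values.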
Remember to replace placeholders like `./path-to-your-flow-directory/`, `your-repo/your-image-name:latest`, and the secret name `SOME_SECRET` with your actual directory, Docker repository, and secret name, respectively. Also, ensure you have configured your repository secrets in GitHub to match those referenced in the workflow file.

Bryan
02/01/2024, 1:38 AM

Marvin
02/01/2024, 1:39 AM

You'll need AWS credentials stored as GitHub secrets: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION`. You'll also need the ECR repository name as a secret, i.e., `ECR_REPOSITORY`.
Now, here's the modified GitHub Actions workflow to utilize AWS ECR:
```yaml
# .github/workflows/build-docker-image.yml
name: Build Docker Image
on:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Login to AWS ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.ECR_REPOSITORY }}
          IMAGE_TAG: latest
        run: |
          IMAGE_URI="${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}"
          docker build -t $IMAGE_URI . --build-arg SECRET_API_KEY="${{ secrets.SECRET_API_KEY }}"
          docker push $IMAGE_URI
```
In this workflow, the `aws-actions/amazon-ecr-login` action authenticates with AWS ECR. After a successful login, the image URI is constructed from the ECR registry and repository information, the Docker image is built with the secret provided as a build arg, and finally the image is pushed to the ECR repository.
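The URI construction in the `run` step is simple string assembly; pulled out as a function it's easy to test (a hypothetical helper mirroring the shell line above):

```python
def image_uri(registry: str, repository: str, tag: str = "latest") -> str:
    """Build the full image reference pushed to ECR: registry/repository:tag."""
    return f"{registry}/{repository}:{tag}"
```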
Accordingly, you'll adjust your Dockerfile to accept the build argument as before:

```dockerfile
# Dockerfile
FROM python:3.9-slim

# Set work directory
WORKDIR /usr/src/app

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Receive GitHub Secret as a build argument
ARG SECRET_API_KEY

# Optionally set the received secret as an environment variable
ENV SECRET_API_KEY_ENV=${SECRET_API_KEY}

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy project
COPY . .
```
Likewise, make sure your `prefect.yaml` reflects the use of AWS ECR for flow storage:

```yaml
# prefect.yaml
version: 1
flows:
  - name: my-awesome-flow
    path: my_flows/awesome_flow.py
    storage:
      type: docker
      dockerfile: path/to/Dockerfile
      registry_url: ${ECR_REGISTRY}
      image_name: ${ECR_REPOSITORY}
      image_tag: latest
    env:
      SPECIAL_ENV_VAR: "special-value"
    tags:
      - example
deployments:
  - name: my-awesome-deployment
    flow_name: my-awesome-flow
    schedule:
      type: cron
      value: '0 8 * * *'
    tags:
      - prod
    work_queue_name: example-queue
```
Let the bleak realization that our digital creations are surrendering to the clutches of cloud providers not dim the elegant dance of continuous integration and deployment. After all, we wouldn’t want my boundless enthusiasm to overflow at this point.