# ask-community
d
Here - not a blocker, but running into some additional provider issues with the terraform module for creating resources. 🧵
Currently creating schedules for deployments - below are my module variable and the logic for handling resource creation.
-- variable
variable "prefect" {
  description = "Combined Prefect configuration including API credentials, work pool, and task deployment configuration."
  type = object({
    api_url  = string
    api_key  = string
    work_pool = object({
      name         = string
      kind         = string
      task_worker = object({
        task_definition = object({
          cpu                  = number
          memory               = number
          container_definition = object({
            cpu    = number
            memory = number
          })
        })
      })
      task_runner = object({
        task_definition = object({
          cpu                  = number
          memory               = number
          container_definition = object({
            cpu         = number
            memory      = number
            environment = list(map(string))
          })
        })
      })
    })
    deployments = list(object({
      name                     = string
      entrypoint               = string
      queue                    = string
      job_variables            = map(string)
      parameters               = map(string)
      parameter_openapi_schema = object({
        type       = string
        properties = map(object({ type = string }))
      })
      schedules = list(object({
        active             = bool
        anchor_date        = string
        catchup            = bool
        cron               = string
        day_or             = bool
        max_active_runs    = number
        max_scheduled_runs = number
      }))
    }))
  })
}
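Side note: as far as I can tell, since every schedule attribute in the type above is required, the lookup(..., default) calls in the schedule resource further down never actually fall back to their defaults. If some of those fields should be optional, one way (assuming Terraform >= 1.3) would be to mark them with optional() - rough sketch, not what I'm running today:

      schedules = list(object({
        active             = optional(bool, true)
        anchor_date        = optional(string)
        catchup            = optional(bool, false)
        cron               = string
        day_or             = optional(bool, true)
        max_active_runs    = optional(number)
        max_scheduled_runs = optional(number)
      }))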
-- terraform file for creating deployments and schedules
resource "prefect_flow" "flow" {
  for_each = { for d in var.prefect.deployments : d.name => d }
  name     = each.value.name
  tags = [
    "image:${var.repository_url}:${var.deployment_tag}"
  ]
}

resource "prefect_deployment" "deployment" {
  for_each = { for d in var.prefect.deployments : d.name => d }

  name       = each.value.name
  flow_id    = prefect_flow.flow[each.key].id
  entrypoint = each.value.entrypoint

  paused     = false
  pull_steps = [
    {
      type      = "set_working_directory",
      directory = "/opt/"
    }
  ]

  job_variables = jsonencode(merge(
    {
      image = "${var.repository_url}:${var.deployment_tag}"
    },
    each.value.job_variables
  ))

  parameters = jsonencode(each.value.parameters)
  parameter_openapi_schema = jsonencode(each.value.parameter_openapi_schema)

  work_pool_name  = var.prefect.work_pool.name
  work_queue_name = each.value.queue
  
  tags = [
    "image:${var.repository_url}:${var.deployment_tag}"
  ]
}

locals {
  # First, flatten the schedules from all deployments into a list of objects.
  flattened_schedules = flatten([
    for deployment in var.prefect.deployments : [
      for idx, sched in deployment.schedules : {
        key             = "${deployment.name}-${idx}"
        deployment_name = deployment.name
        schedule_config = sched
      }
    ]
  ])

  # Then convert the list to a map.
  schedule_map = { for entry in local.flattened_schedules : entry.key => entry }
}
resource "prefect_deployment_schedule" "schedule" {
  for_each = local.schedule_map

  deployment_id = prefect_deployment.deployment[each.value.deployment_name].id

  cron               = each.value.schedule_config.cron
  active             = lookup(each.value.schedule_config, "active", false)
  catchup            = lookup(each.value.schedule_config, "catchup", null)
  day_or             = lookup(each.value.schedule_config, "day_or", null)
  timezone           = "UTC"
  max_active_runs    = lookup(each.value.schedule_config, "max_active_runs", null)
  max_scheduled_runs = lookup(each.value.schedule_config, "max_scheduled_runs", null)
}
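(If it helps with debugging, a throwaway output - hypothetical name, just for illustration - shows what the schedule map looks like before applying:

output "debug_schedule_map" {
  value = local.schedule_map
}

For the example input below this produces keys like "stg--adoml-test-pipeline:root-0" and "stg--adoml-test-pipeline:root-1", which match the resource addresses in the error output further down.)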
When I execute my module, the schedules are created and assigned successfully. Here's an example input:
module "stg_aws_prefect_backend" {
  ...
  prefect = {
    api_url = local.prefect_api_url
    api_key = local.prefect_api_key
    work_pool = {
      name = "stg--${local.pipeline}"
      kind = "ecs"
      task_worker = {
        task_definition = {
          cpu    = 2048
          memory = 4096
          container_definition = {
            cpu    = 2048
            memory = 4096
          }
        }
      }
      task_runner = {
        task_definition = {
          cpu    = 2048
          memory = 4096
          container_definition = {
            cpu    = 2048
            memory = 4096
            environment = [
              { name = "NODE_ENV", value = "stg" }
            ]
          }
        }
      }
    }

    deployments = [
      {
        name          = "stg--${local.pipeline}:root"
        entrypoint    = "adonis.adoml_test_pipeline.deployments:root"
        queue         = "default"
        job_variables = {}
        parameters    = {}
        parameter_openapi_schema = {
          type = "object"
          properties = {
            name = { type = "string" }
          }
        }
        schedules = [
          {
            active             = true,
            anchor_date        = "2023-01-01T00:00:00Z",
            catchup            = false,
            cron               = "0 9 * * 1",
            day_or             = true,
            max_active_runs    = 1,
            max_scheduled_runs = 10
          },
          {
            active             = true,
            anchor_date        = "2023-01-01T00:00:00Z",
            catchup            = false,
            cron               = "0 9 * * 2",
            day_or             = true,
            max_active_runs    = 1,
            max_scheduled_runs = 10
          }          
        ]
      }
    ]
  }
}
However, I receive the following error, which crashes my IaC pipeline:
module.stg_aws_prefect_backend.aws_ecs_task_definition.task_runner: Destroying... [id=stg--adoml-test-pipeline--task-runner]
module.stg_aws_prefect_backend.prefect_deployment_schedule.schedule["stg--adoml-test-pipeline:root-1"]: Creating...
module.stg_aws_prefect_backend.prefect_deployment_schedule.schedule["stg--adoml-test-pipeline:root-0"]: Creating...
module.stg_aws_prefect_backend.aws_ecs_task_definition.task_runner: Destruction complete after 0s
module.stg_aws_prefect_backend.aws_ecs_task_definition.task_runner: Creating...
module.stg_aws_prefect_backend.aws_ecs_task_definition.task_runner: Creation complete after 0s [id=stg--adoml-test-pipeline--task-runner]
module.stg_aws_prefect_backend.prefect_work_pool.work_pool: Modifying... [id=90c01cdb-46f4-48c0-b11b-4a150152b733]
module.stg_aws_prefect_backend.prefect_work_pool.work_pool: Modifications complete after 1s [id=90c01cdb-46f4-48c0-b11b-4a150152b733]
╷
│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to module.stg_aws_prefect_backend.prefect_deployment_schedule.schedule["stg--adoml-test-pipeline:root-1"], provider
│ "provider[\"registry.terraform.io/prefecthq/prefect\"]" produced an unexpected new value: .max_scheduled_runs: was cty.NumberIntVal(10), but now cty.NumberIntVal(0).
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│ 
│ When applying changes to module.stg_aws_prefect_backend.prefect_deployment_schedule.schedule["stg--adoml-test-pipeline:root-0"], provider
│ "provider[\"registry.terraform.io/prefecthq/prefect\"]" produced an unexpected new value: .max_scheduled_runs: was cty.NumberIntVal(10), but now cty.NumberIntVal(0).
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵
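For now the only stopgap I can think of (untested sketch, not sure it fully avoids the error) is to stop setting the field the provider mangles until the bug is fixed:

resource "prefect_deployment_schedule" "schedule" {
  for_each = local.schedule_map

  deployment_id = prefect_deployment.deployment[each.value.deployment_name].id

  cron            = each.value.schedule_config.cron
  active          = lookup(each.value.schedule_config, "active", false)
  catchup         = lookup(each.value.schedule_config, "catchup", null)
  day_or          = lookup(each.value.schedule_config, "day_or", null)
  timezone        = "UTC"
  max_active_runs = lookup(each.value.schedule_config, "max_active_runs", null)

  # max_scheduled_runs intentionally omitted for now: the provider reads it back
  # as 0 after apply, which is what trips the "inconsistent result" error above.
}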
@Mitch Nielsen
m
taking a look
d
ty
m
do you mind creating an issue for this? https://github.com/PrefectHQ/terraform-provider-prefect/issues This will help us replicate the problem and make sure the engineer who built this part of the provider can see it 👍
๐Ÿ‘ 1