Two functional gaps today using Prefect + dbt. No promises about how relevant they are to the typical Prefect + dbt user, though:
1. Separated lineage
a. The "run" node is just a single task with a lot of logs--you end up in your dbt docs to figure out the impact of a failure, or in dbt Cloud/your logs/a hacky custom viz to figure out execution-time bottlenecks
b. We intentionally push things into Prefect that don't have to live there so that it can serve as an observability layer, yet have poor-ish visibility into dbt from Prefect
c. Dagster's new support for this in 0.14 looks really appealing, so there is a bit of a competitive aspect here (their approach to data-aware tasks is much more general, but dbt is going to be the flagship example for that feature for a long time)
2. Inserting tasks into the dbt DAG
An example of a feature that could help with separated lineage would be importing dbt run details as a subgraph during or after a run. Another option might be improved post-run artifacts--as a truly minimal example, just having a Graphviz rendering of the dbt DAG with timings and node status might be helpful (or a snapshot/summary plus a clickthrough to dbt Cloud, once they support better visualization and reporting around this).
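To make the "truly minimal example" concrete, here's a sketch of rendering a dbt run as a DOT graph from the artifacts dbt already writes (`run_results.json` for per-node status and timing, `manifest.json` for the `parent_map` edges). The field names match dbt's artifact schemas; the rendering choices (colors, labels) are just placeholder assumptions.

```python
# Sketch: summarize a dbt run as a Graphviz DOT string using dbt's own
# artifacts. `run_results.json` has results[].unique_id/status/execution_time;
# `manifest.json` has parent_map (unique_id -> upstream unique_ids).
# Color scheme and label format here are arbitrary choices.

STATUS_COLORS = {"success": "green", "error": "red", "skipped": "gray"}

def dbt_run_to_dot(run_results: dict, manifest: dict) -> str:
    """Render the executed portion of the dbt DAG as Graphviz DOT."""
    executed = {
        r["unique_id"]: (r["status"], r["execution_time"])
        for r in run_results["results"]
    }
    lines = ["digraph dbt_run {"]
    for uid, (status, secs) in executed.items():
        color = STATUS_COLORS.get(status, "black")
        # Short model name plus wall-clock time on each node.
        label = f'{uid.split(".")[-1]}\\n{secs:.1f}s'
        lines.append(f'  "{uid}" [label="{label}", color={color}];')
    # Only draw edges between nodes that actually ran.
    for uid in executed:
        for parent in manifest.get("parent_map", {}).get(uid, []):
            if parent in executed:
                lines.append(f'  "{parent}" -> "{uid}";')
    lines.append("}")
    return "\n".join(lines)
```

The output could be attached as a Prefect artifact after the dbt task finishes, which would at least surface failures and bottlenecks without leaving the Prefect UI.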
For the second one, I'm not sure what a good solution looks like, but the basic problem is when people want to do this:
1. Run part of your dbt DAG
2. Execute some code that modifies assets in your DWH (e.g. run a scoring model and upload the results, execute something in Snowpark)
3. Run the downstream parts of your DAG
You can handle that today by using node selectors in different task steps, but it feels clunky. I think this use case will get more common with Snowpark and serverless Spark on BigQuery. While the endgame for those specific scenarios is dbt supporting things like Snowpark directly, that could be a while. You might conclude that (upcoming) Python UDF support and external functions don't leave much of a sweet spot to focus on here, but it's worth investigating.
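For reference, the node-selector workaround for the three steps above looks roughly like this. It uses dbt's real graph-selector syntax (`+model` for a model and its ancestors, `model+` for a model and its descendants); the boundary model name passed in is hypothetical, and in practice each command would be wrapped in its own orchestrator task around the Python step.

```python
import shlex

def split_run_commands(boundary_model: str) -> tuple[list[str], list[str]]:
    """Build the two dbt invocations that bracket a custom Python step.

    Step 1: build the boundary model and everything upstream of it.
    Step 2: (elsewhere) run Python code that modifies assets in the DWH.
    Step 3: build everything downstream, excluding the boundary model
    since `+model` already built it in step 1.
    """
    upstream = shlex.split(f"dbt run --select +{boundary_model}")
    downstream = shlex.split(
        f"dbt run --select {boundary_model}+ --exclude {boundary_model}"
    )
    return upstream, downstream
```

This works, but it's exactly the clunkiness described above: the orchestrator has to know dbt's graph topology well enough to pick the right boundary, and a selector typo silently changes what gets built.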