Gustavo, I'd like to clarify the problem a bit more. When you say you want to run the data tests
after the load, why is that important? Is it just a kind of "integration test" to check that, after the load, your data e.g. matches your expected value distribution?
Or is the underlying problem that you want to run those data quality tests any time new data arrives in a given destination (table, data lake path),
regardless of which process loaded that data (a data pipeline run from the orchestrator, a manual load from your dev machine, or a manual `dbt run`)?
Just asking to confirm whether we have the same understanding of the problem.