Power BI — deployment pipelines — auto-binding and deployment rules

Michal Molka
Mar 3, 2023 · 4 min read

In my previous short post, Power BI — deployment pipelines, I described the basics of deployment pipelines.

Today I'll show you how to manage dataset connections using parameters and deployment rules.

There are six workspaces: three for reports and three for datasets, split across three stages: DEV, INT, and PRD.

If you want to keep different connection data for every stage, one option is to hardcode it.

Here is a dataset example with hardcoded connection data:

What happens when we create a copy of the dataset, change the connection, and deploy it to the respective workspace? Nothing serious, if you don't plan to automate your deployments and don't mind changing the connection on every version control branch. In larger environments, though, this approach is a dealbreaker.

Back to pipelines. When you create a deployment pipeline for datasets with hardcoded connection data, you won't be able to keep different connection values for different stages. When you deploy a dataset between stages, the connection data is propagated along with it, so you end up with the same connection to the same source on every stage.

In the screenshot below, the datasets are identical.

The only difference is the connection value: …Iowa_DEV, …Iowa_INT, …Iowa_PRD.

let
    Source =
        AzureStorage.DataLake("https://<ADLS ADDRESS>.dfs.core.windows.net/core-files/Iowa_DEV"),
    #"https://eightfiveadls dfs core windows net/core-files/Iowa_DEV/_IowaLiquorSalesdf2022 parquet" =
        Source{[#"Folder Path" = "https://eightfiveadls.dfs.core.windows.net/core-files/Iowa_DEV/", Name = "IowaLiquorSalesdf2022.parquet"]}[Content],
    #"Imported Parquet" =
        Parquet.Document(#"https://eightfiveadls dfs core windows net/core-files/Iowa_DEV/_IowaLiquorSalesdf2022 parquet"),
    #"Filtered Rows" =
        Table.SelectRows(#"Imported Parquet", each ([County] = "BENTON" or [County] = "BUCHANAN" or [County] = "CALHOUN" or [County] = "CARROLL" or [County] = "CEDAR" or [County] = "CERRO GORD" or [County] = "CHICKASAW" or [County] = "CLARKE" or [County] = "DICKINSON" or [County] = "FLOYD" or [County] = "FRANKLIN" or [County] = "GRUNDY"))
in
    #"Filtered Rows"

A simple solution is a parameter.

Once the parameter is created, the code adjusted…

…and the new dataset versions uploaded to the service, you can create a deployment rule for the INT and PRD stages.
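For illustration, here is a sketch of what the parameterized query might look like. The parameter name (Folder here) and its default value are assumptions — the post doesn't show the actual parameter definition — but this is the general shape Power BI Desktop generates for a text parameter, and the Source step then concatenates it in place of the hardcoded folder name:

```
// Hypothetical parameter query, e.g. named "Folder". Any text value works
// as the default; the deployment rules override it per stage
// (Iowa_DEV, Iowa_INT, Iowa_PRD).
"Iowa_DEV" meta [IsParameterQuery = true, Type = "Text", IsParameterQueryRequired = true]

// The dataset's Source step references the parameter instead of the literal:
let
    Source =
        AzureStorage.DataLake("https://<ADLS ADDRESS>.dfs.core.windows.net/core-files/" & Folder)
    // ...remaining steps unchanged
in
    Source
```

With this in place, a parameter rule on the INT and PRD stages is enough to repoint each stage at its own folder without touching the query itself.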

As a side note: if you use one of the supported data sources, Azure SQL Server for example, you can set a data source rule and change the entire connection between stages.

Back to the datasets. As you can see, we have three separate environments and three different connections, yet the deployment pipeline doesn't see any difference.

ALM Toolkit also reports no difference, apart from the parameter value:

When we have reports connected to these datasets (live connection), with a separate report connected to each stage, the situation is even simpler: the deployment pipeline doesn't see any difference.

This holds even though each report is connected to a dataset in a separate workspace, so there is a difference inside each .pbix file.

Why? Because of auto-binding. Power BI knows that the reports on each stage are connected to different data sources and syncs this information across the pipelines.

Here is the structure of the entities used in this post.

As you can see above, the connections are synced between the pipelines, the reports, and the datasets.
