Recently, I wrote a blog about the new branch-out feature in Git-connected Fabric and Power BI workspaces. In this blog, I will continue the topic of Git integration by discussing various setups you could consider for your Git integration, deployment and release strategies as part of your continuous integration and continuous delivery setup.
Will you connect Git only to your development workspace, or to all stages? And how do you handle your deployment? Keep reading to find out the different patterns you can consider!

Workspace Git integration
On Fabric and Power BI workspaces (hereafter just workspaces), you can easily connect a Git repository. This feature was released with the launch of Microsoft Fabric back in May 2023. This new way of setting up continuous integration (CI) and continuous delivery (CD) helps you collaborate on solutions, store your solutions as code and deploy them from one workspace to another. And it does not stop at deployment from Development to Test or Production: you can even bring content into entirely new workspaces, or even other tenants, simply by connecting the Git repository to the workspace and running a Git pull to bring all the code into the workspace.
For a while, the set of supported Fabric items was fairly limited. But now, at the time of writing this blog (early July 2024), almost all Fabric items are supported. Unfortunately, a few items are still lagging behind, like Dataflows Gen2, which has no Git integration at all, or items with limited support like the Lakehouse, for which only the Lakehouse name is tracked but not schema or table definitions.
Also, currently the only supported Git provider is Azure DevOps, but GitHub Enterprise is on the roadmap and coming soon! All in all, Microsoft is moving forward at rapid speed to get all workloads and all items integrated into Git.
That said, I’ve had many discussions with both co-workers and customers about how best to set up your deployment and release strategy for Fabric, now that we have Git integration. Do you integrate Git only on your development workspace and make use of Fabric deployment pipelines? Or do you go for Git on all workspaces? And how do you push your content to the next stage in that case? In the following sections of this blog, I will discuss three different setups that you could consider.
Setup 1: Git-based deployment
The first scenario is based on a Git branch connected to each workspace. In the example shown below, we identified three stages: Development, Test and Production. However, this setup allows as many stages as you want. It also makes use of feature branches for any development in isolated workspaces, as recently discussed in a separate blog.
The setup with a Git branch for each workspace is the most tech-heavy of the three, where knowledge and expertise of Git is necessary for everyone who collaborates on developing the solution. Branching, merging and pull requests are key to making this scenario work.
- Git serves as a single source of truth.
- All deployments originate from the repository.
- Each workspace has its own branch.
- All branches are protected against direct commits.
- New features can be released by raising pull requests.
- To deploy from Development to Test, and from Test to Production, a pull request has to be raised from the originating stage.
- Sync between the Git branch and the workspace can be automated using the Git Sync API, called from a build pipeline in Azure DevOps that triggers automatically after an approved pull request.
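The last bullet can be sketched in code. Below is a minimal, illustrative Python helper that builds the request for the Fabric "Update From Git" REST endpoint, which a DevOps build pipeline could POST after a merge. The workspace ID is a placeholder, and the conflict-resolution field values are assumptions based on the public API docs at the time of writing; verify them against the current documentation before use.

```python
# Sketch: build the 'Update From Git' request that syncs a Fabric workspace
# with its connected Git branch. No network call is made here; a DevOps
# pipeline step would POST the result with an AAD bearer token.

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_update_from_git_request(workspace_id, remote_commit_hash=None):
    """Return the (url, json_body) pair for the updateFromGit call."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/git/updateFromGit"
    body = {
        # Prefer the remote (Git) side if workspace and branch have diverged
        "conflictResolution": {
            "conflictResolutionType": "Workspace",
            "conflictResolutionPolicy": "PreferRemote",
        },
        "options": {"allowOverrideItems": True},
    }
    if remote_commit_hash:
        body["remoteCommitHash"] = remote_commit_hash
    return url, body

# Placeholder workspace ID (assumption, not a real value):
url, body = build_update_from_git_request("00000000-0000-0000-0000-000000000001")
# A pipeline step would then run something like:
#   requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
```

Keeping the request construction separate from the actual POST makes the pipeline step easy to test without hitting the API.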

Setup 2: Git and Build environments
In the second setup, Git is only connected to the Development workspace. Separate feature branches can still exist and remain a best practice. However, deployment to other stages is done via Build environments in Azure DevOps. This means that the Fabric Item APIs are used to execute CRUD commands (Create, Read, Update, Delete).
- Git connected only to Development workspace.
- After a pull request, a Build pipeline in Azure DevOps triggers.
- The Build pipeline runs CRUD commands to the workspace.
- The Git repository is the basis for creating, updating or deleting items in the workspace.
- Code-heavy approach, requiring significant effort to set up the Build pipelines.
- As new item types gain support in Fabric, the Build pipelines may have to be adjusted.
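To make the CRUD idea concrete, here is a hedged sketch of how a Build pipeline might translate files from the Git repository into a Create Item request against the Fabric Items API. The item definition format (base64-encoded parts) follows the public API docs at the time of writing; the workspace ID, item name and file contents are illustrative placeholders, not values from the blog.

```python
# Sketch: turn files from the Git repo into a 'Create Item' request body
# for the Fabric Items API. Definitions are sent as base64-encoded parts.
import base64

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_create_item_request(workspace_id, display_name, item_type, parts):
    """parts: mapping of repo file path -> file content (str)."""
    url = f"{FABRIC_API}/workspaces/{workspace_id}/items"
    body = {
        "displayName": display_name,
        "type": item_type,  # e.g. "Notebook", "Report", "SemanticModel"
        "definition": {
            "parts": [
                {
                    "path": path,
                    "payload": base64.b64encode(content.encode()).decode(),
                    "payloadType": "InlineBase64",
                }
                for path, content in parts.items()
            ]
        },
    }
    return url, body

# Placeholder example: a notebook definition read from the repo
url, body = build_create_item_request(
    "00000000-0000-0000-0000-000000000002",
    "MyNotebook",
    "Notebook",
    {"notebook-content.py": "print('hello fabric')"},
)
```

An Update would PATCH or call the corresponding update-definition endpoint with the same part structure, which is why the pipelines need adjusting as new item types (each with their own definition files) become supported.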

Setup 3: Git & Fabric deployment pipelines
The third setup is based on Fabric deployment pipelines (Power BI deployment pipelines, if you will), which are part of the Software-as-a-Service portal. This visual interface eases deployment from stage to stage and is the least code-heavy of the three scenarios discussed.
In this setup, Git is only connected to the Development workspace and feature branches in separate workspaces still exist. Test, Production and potential additional workspaces are not Git connected.
- Only Development workspace is Git connected.
- Releases to additional stages like Test and Production are done via Deployment Pipelines in the Fabric / Power BI Service.
- Having two portals to work with is less ideal (DevOps for Git and Fabric / Power BI Service for pipelines).
- Triggers for the Fabric deployment pipeline can be automated using Build pipelines in Azure DevOps that trigger automatically after an approved pull request and call the Deploy Stage Content Fabric REST API.
- Can be combined with the Git Sync as described in scenario 1 for syncing the development workspace.
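As a sketch of the automation in the bullets above, the snippet below builds the request a DevOps pipeline could send to the Deploy Stage Content endpoint to push Development content to Test. The pipeline and stage IDs are placeholders, and the exact field names are based on the public Fabric deployment pipelines API docs at the time of writing; double-check them before relying on this.

```python
# Sketch: build the 'Deploy Stage Content' request for a Fabric deployment
# pipeline, to be POSTed by an Azure DevOps build pipeline after a PR merge.

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_deploy_request(pipeline_id, source_stage_id, target_stage_id, note=""):
    """Return the (url, json_body) pair for deploying one stage to the next."""
    url = f"{FABRIC_API}/deploymentPipelines/{pipeline_id}/deploy"
    body = {
        "sourceStageId": source_stage_id,  # e.g. the Development stage
        "targetStageId": target_stage_id,  # e.g. the Test stage
        "note": note,
    }
    return url, body

# Placeholder IDs (assumptions, not real values):
url, body = build_deploy_request(
    "11111111-1111-1111-1111-111111111111",
    "stage-dev", "stage-test",
    note="Automated deploy after approved pull request",
)
```

Combined with the Git sync from scenario 1, one pipeline run can first sync Git into Development and then deploy Development to Test in a single flow.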

Wrap-up and remarks
All in all, there are different options to set up your CI/CD process for Fabric and Power BI. Depending on how tech-savvy your team is and how code-heavy you want to work, one scenario may fit better than another. Personally, I prefer the first, fully Git-based scenario, but I can totally imagine that it will not always fit your process. In this blog I aimed to highlight three scenarios which I think are typical. It is a new way of working, which you may not be used to if you’re coming from a non-code background. I personally had to get used to it for a while as well, but now I prefer this scenario over the others. The main drivers behind this are having one portal to manage my work items (DevOps boards and backlog), the code (DevOps repository) and the releases (DevOps Build pipelines).
In each of the scenarios, I highlighted options to automate with the Fabric REST APIs, for example as part of a build pipeline. Please note that you may run into limitations in this setup, given that at the time of writing (early July 2024) the majority of Fabric REST APIs do not support service principal authentication yet. Therefore, you are forced to set up service accounts or get very creative in other ways. Fingers crossed this limitation will be lifted soon!
Special shout-out to Nimrod Shalit, Principal Program Manager Lead at Microsoft, with whom I presented on this topic at the Fabric Community Conference in Las Vegas in March 2024. Together we set up these scenarios and worked them out in detail.
You did not mention how to manage stage-specific parameters with option 1, such as connection details in a dataset.
Hi Anna,
Thank you so much for your comment. I appreciate your feedback.
There are several options that you could consider. Below a few:
1. When building a data platform solution (pipelines, notebooks, lakehouses, etc.), I would encourage you to make the solution metadata driven. Maybe even set up a metadata library that you can reference, so all you specify in your notebook is whether it’s dev/test/prod and it will automatically pick up the right source connection string from the metadata library.
2. A second option could be to have variables saved in your DevOps config, which you automatically find/replace in your source code during the Git sync. This should be doable, but brings additional code with it, which can be more complex than option 1.
3. If you’re solely looking at semantic models, consider using parameters in your solution and calling the Update Parameters REST API as part of the deployment pipeline that triggers the Git sync. That way, you can change the connection string during deployment. Similarly, you could trigger refreshes and other operations after deployment.
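Option 3 can be sketched as follows. This minimal Python helper builds the Power BI Update Parameters request that a deployment pipeline could POST per stage; the group/dataset IDs and parameter names are illustrative placeholders, not values from the discussion above.

```python
# Sketch: build the Power BI 'Update Parameters' request used to swap
# stage-specific values (e.g. a connection string) during deployment.

PBI_API = "https://api.powerbi.com/v1.0/myorg"

def build_update_parameters_request(group_id, dataset_id, new_values):
    """new_values: mapping of parameter name -> new value for this stage."""
    url = f"{PBI_API}/groups/{group_id}/datasets/{dataset_id}/Default.UpdateParameters"
    body = {
        "updateDetails": [
            {"name": name, "newValue": value}
            for name, value in new_values.items()
        ]
    }
    return url, body

# Placeholder IDs and parameter names (assumptions):
url, body = build_update_parameters_request(
    "22222222-2222-2222-2222-222222222222",
    "33333333-3333-3333-3333-333333333333",
    {"ServerName": "prod-sql.contoso.com", "DatabaseName": "SalesDW"},
)
# A pipeline step would POST this, then optionally trigger a dataset refresh.
```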
Hope this helps you to take the next steps.
–Marc
Thank you for your reply. Do you know if there is any possibility to set or change the cloud connection in a semantic model via Rest API (or anything planned)?
Not as far as I know. You have to use Parameters to update connection strings. However, if you change the connection string, you may have to reauthenticate to the source before your scheduled refresh will continue.
–Marc
Pingback: Deployment and Release Strategies for Fabric CI/CD – Curated SQL
Hi Marc,
I’ve been implementing CI/CD for PowerBI in our organisation and I was going for scenario 3 but I ran into two problems.
1. I could not connect the workspace to a branch that doesn’t allow direct commits, which was a problem for us because we didn’t want to enable direct commits on the main branch just for the sake of Power BI.
2. When I found out the workspace is not automatically kept in sync with the Git Repo I tried to find a way to automate it but I could not find it in the documentation and I’m positive I checked the Core/Git page.
Have these two issues recently been changed?
I managed to build a workaround for our semantic model workspace by using Tabular Editor in our ADO Release to deploy the DEV workspace. Then I deploy to TST and PRD via the Fabric Deployment Pipeline. I hadn’t found a solution yet for our reporting workspaces but if this really works like your blog then I can go back to my original plan 🙂
Little addendum on issue 2. I checked again in my development branch and I found that you cannot use a service principal to run the updateFromGit API call. How would you automate it from ADO Pipelines if you cannot use the service principal?
That’s a known issue and should be fixed some time in the future. For now, you have to work with a service account without MFA, or manually pull the changes from Git into the workspace.
—Marc
With regards to point 1, I haven’t experienced that. However, I did it the other way around: first create the branches, set up the workspaces and connect them, and finally protect the branches. If it doesn’t work, like you said, I would recommend raising a support ticket. Sounds like a bug to me.
—Marc
Thanks for the quick reply on both points. I just tried syncing a workspace to our main branch and it somehow seems to work now.
On the second point: I was afraid a service account would be your answer. Unfortunately company policy does not allow for accounts without MFA. At least I am happy to hear that the SP not working is a bug and not a feature 🙂 I might just go ahead and start the implementation of CI/CD for our report workspaces and keep them in sync manually or halfway automate it with a powershell script that requires a user to login, until it is addressed.
Sounds great!
I can confirm SPN support is coming, can just not tell you when 😉
Hej Marc
Thank you for putting this guide together.
I am attempting to implement version 3.
Have you tested if that approach works for deploying a THIN report from Dev to another/other workspace e.g. Test using the inbuilt Fabric deployment pipeline?
I have a fabric git integrated Dev workspace where I have successfully synched a semantic model and an associated THIN report from DevOps.
The semantic model has parameters which are changed on deployment to Test/ Prod workspaces.
Within the PBIP folder, the definition.pbir references the semantic model by path, e.g. “../Test.SemanticModel”, since I believe that enables the report to connect to the model in the respective workspace, and it works because the report opens fine in Dev.
However, attempts to manually deploy from Dev to Test using the Fabric Deployment pipeline do not work.
The first attempt, if the Test workspace is empty, shows the THIN report is “different from source”, and the report won’t open.
Any further deployment attempts fail to deploy to Test and the report does not open.
It works for the semantic model though.
I’d like to know if I am missing something or it’s not possible.
/Joseph
Hey,
Did you check the docs on any limitations? I’m not aware of anything specific in your scenario that should not work though. I recommend raising a support ticket if the thin report breaks by deploying through deployment pipeline interface.
–Marc
Hi Marc,
Great guide you made! I tried setup 3 and although my dev workspace syncs just fine with my Git repository, the Fabric pipeline fails to sync the Report from my dev workspace to my test workspace. Semantic model syncs just fine. Could this have to do with the (still experimental) new .pbir format? Hoping to hear from you since this currently renders the git integration / fabric pipeline approach unusable for me!
Andries
Hi Andries,
PBIR cannot be deployed through a deployment pipeline and therefore it will successfully sync through git but will not be deployed to test/prod workspaces following scenario 3. See the limitations section of the PBIR documentation here: https://learn.microsoft.com/en-us/power-bi/developer/projects/projects-report#pbir-considerations-and-limitations
–Marc
Earlier with ADF deployments, there was an option to parameterise the resources you are deploying, for example, the connections to the Source and Stage within a ‘copyData’ activity of a data pipeline.
Now, with Fabric Data Factory, with both deployment options from Dev to Prod (deployment pipelines and Git-based deployment), there’s no way to parameterise the Warehouse connection that the pipelines have to refer to.
Similarly, for Data Warehouses, while deploying there’s no way to do selective deployment. Is my understanding correct?
Hi Roopansh,
That’s correct. One of the limitations at this point is not having dynamic connections. However, it should be released soon, according to the roadmap. Check this documentation: https://learn.microsoft.com/en-us/fabric/release-plan/data-factory#data-pipeline-support-fabric-workspace-variables
Although it says Q4 2024 – there was a slight delay in releases going out. So, I expect it anytime soon.
–Marc
Hi Marc,
For scenario 3, do you think that we can opt out of the feature branch workspaces? Can we maybe just create a new branch, develop the reports using PBI Desktop (or other external tools), then commit all the changes to the branch using VS Code, open a pull request, and merge it to the main branch (synced to the Dev workspace)?
I see that you, but also other experts, recommend using feature branch workspaces, but I don’t see (yet) a lot of added value there, whereas I can already see the possible mess of feature branch workspaces not being deleted afterwards.
Many thanks for all the useful contents and other works! Really appreciate that!
Pingback: CI/CD , temps réel : accélérer sans dérailler