Recently, I architected a solution for a client's Microsoft Fabric data platform. The client works with Dynamics Finance & Operations as one of their main ERP systems. Fabric offers easy ways to bring data from various standard Microsoft services into the platform; however, it is not always as easy as it looks. In this blog I will elaborate on the gotchas encountered while architecting this solution.
One short thing I want to point out: the procedure and gotchas described in this blog follow the example of Dynamics F&O. However, they apply to any other solution in which you want to bring your Dataverse data into Microsoft Fabric.

Dataverse to the rescue?
As the title of this blog already gives away, the solution might be in Microsoft Dataverse. However, let's first stop quickly at the starting point of this specific solution design. The client for whom I designed the solution uses Microsoft Dynamics Finance & Operations as their cloud-based enterprise resource planning (ERP) system. All data they collect in this system is core to the company and very valuable, typically to learn from the past and do better in the future.
Solutions built with other technologies were already in place at this customer, and the client had already used Microsoft Power BI in the past. As a modernization step, moving their data platform and analysis to a modern platform like Microsoft Fabric seemed a logical choice – and so we started.
Microsoft Dynamics and the Power Platform allow you to bring data directly into Dataverse, which seems like a great solution. From there, standardized patterns exist to bring your data into Fabric right away, without the need to set up complex ETL processes. Patterns like these were already known before Fabric as Azure Synapse Link, which allowed you to bring your Dataverse data directly into Azure Synapse Analytics. The main difference is that Azure Synapse Link physically copied the data to the storage account used by Azure Synapse Analytics, whereas Microsoft Fabric makes use of the shortcut concept. With shortcuts the data still resides in Dataverse, and Fabric only contains a pointer to the exact location that is resolved as soon as the data is referenced in a query, pipeline or any other Fabric artifact.
As I do not intend to repeat everything that is already documented, I recommend reading this documentation for all details and differences on how you can link Dataverse to Fabric.
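To give an impression of what the shortcut approach looks like in practice, here is a minimal sketch of querying a linked Dataverse table from a Fabric notebook. It assumes the Dataverse tables have already been linked to a Lakehouse and that the notebook is attached to that Lakehouse; the table and column names (account, accountid, modifiedon) are placeholders for this illustration.

```python
# Minimal sketch: once Dataverse tables are linked into a Fabric Lakehouse via shortcuts,
# they can be queried like any other Lakehouse table. The data itself stays in Dataverse;
# OneLake only holds a pointer that is resolved when the query runs.
# Assumes a Fabric notebook with the Lakehouse attached; `spark` is the pre-initialized session.

accounts = spark.read.table("account")  # placeholder table name

recent = (
    accounts
    .select("accountid", "name", "modifiedon")
    .where("modifiedon >= date_sub(current_date(), 30)")  # e.g. changes from the last 30 days
)
recent.show(10)
```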

Gotcha!
So far, so good. Right? Well, maybe not entirely. Because in the early stages of the project we decided in which location we wanted to host the Fabric capacity – which was logically the same location as the tenant. In our case that was North-Europe.
The tenant location for many customers was decided at the moment they first enabled Microsoft Power BI years ago, and that location now also applies to Fabric. Back then, the first user that opened Power BI had to choose an Azure region. Specifically for European customers, this can lead to a lot of confusion, given that North-Europe is hosted in Dublin and West-Europe is hosted in Middenmeer (close to Amsterdam) in the Netherlands. For compliance reasons, many organizations want their data to reside in the same country as they are based. So, for many customers their Power BI and tenant may have ended up in North-Europe, whereas the majority of their other Microsoft services are standardized on the West-Europe datacenter.
So, Dataverse was hosted in West-Europe whilst our Fabric capacity resided in North-Europe. As one of the prerequisites of shortcuts with Dataverse describes, your Dataverse environment must reside in the same region as your Fabric / Power BI capacity. Multiple options were on the table. Let's have a closer look at each of them.
Update June 13th – 2024:
During the EPPC 2024 conference it was announced that restrictions now apply at the geography level and no longer at the region level. To take an example: you should be able to make a shortcut from North-Europe to West-Europe and vice versa. However, a shortcut from North-Europe to West-US will not work.
Migrate the Fabric tenant
The client first asked whether it would be possible to move their entire Fabric tenant to West-Europe, given that Fabric was the only Microsoft service residing in a different region than the other services they use. Let's say I was not directly enthusiastic about this option. In the past, I have gone through tenant migrations a few times and they have not always been successful – to say the least. Also, it is a time-consuming process that would jeopardize our project deadlines. We quickly concluded this was not the way to go for now.
New capacity in a different region
Alternatively, we discussed deleting the current Fabric capacity in North-Europe entirely and setting up a new Fabric capacity in West-Europe. Via the Fabric admin portal, you can easily move Fabric workspaces from one capacity to another. However, at the time of this project migrating a workspace containing Fabric items was not supported (and still is not – June 2024). Microsoft advises deleting all Fabric items from the workspace before relocating it, after which the items have to be recreated. Not really a solution if you ask me… The documentation also describes this limitation.
Given that we encountered this limitation mid-project, when we had already ingested several other data sources into the Fabric platform, we decided this was not a good option. Recreating all items would mean a lot of manual work, and even though our code was properly saved in git, recreating everything from source was not even possible because not all Fabric artifacts in use were supported by git integration at the time.
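For reference, the workspace-to-capacity reassignment itself (without the Fabric items) can also be done programmatically. Below is a minimal sketch using the Power BI REST API's AssignToCapacity call; the GUIDs and token are placeholders, and this does not solve the item-recreation problem described above.

```python
# Sketch: reassign an existing workspace to a different (e.g. West-Europe) capacity via the
# Power BI REST API. Placeholders: workspace/capacity GUIDs and the AAD access token.
import requests

WORKSPACE_ID = "<workspace-guid>"
TARGET_CAPACITY_ID = "<west-europe-capacity-guid>"
TOKEN = "<aad-access-token>"  # token for the Power BI / Fabric API scope

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}/AssignToCapacity",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"capacityId": TARGET_CAPACITY_ID},
)
resp.raise_for_status()  # HTTP 200 means the workspace now runs on the target capacity
```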
Additional capacity
Another option could be to keep the solution as is in the current capacity in North-Europe, but set up a second capacity in West-Europe containing a specific workspace with a Lakehouse that has a shortcut to Dataverse. Next, in our original workspace, we create a shortcut to the Lakehouse in the other region. It feels like a by-pass, chaining shortcuts on top of shortcuts, which is not ideal. Also, cross-region traffic means an uplift in pricing as well as a performance impact.

Obviously, another downside is the extra cost of the additional capacity, although you can mitigate this by automatically pausing and resuming the capacity based on your data ingestion patterns.
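As an illustration of that optimization, the sketch below pauses and resumes a Fabric capacity through the Azure management API so it only runs during ingestion windows. The resource path (Microsoft.Fabric/capacities with suspend/resume actions) matches how the capacity shows up in Azure, but treat the api-version and all names as assumptions to verify for your environment.

```python
# Sketch: suspend/resume a Fabric capacity around ingestion windows to limit cost.
# Assumptions: the api-version below may need updating; names and IDs are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
CAPACITY_NAME = "<fabric-capacity-name>"
API_VERSION = "2023-11-01"  # assumption: check the current Microsoft.Fabric api-version

def set_capacity_state(action: str) -> None:
    """action is 'suspend' (pause) or 'resume'."""
    token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Fabric"
        f"/capacities/{CAPACITY_NAME}/{action}?api-version={API_VERSION}"
    )
    requests.post(url, headers={"Authorization": f"Bearer {token}"}).raise_for_status()

# Example: resume before the nightly load, suspend afterwards
# set_capacity_state("resume")
# ... run ingestion ...
# set_capacity_state("suspend")
```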
Wrap-up
What at first looks super straightforward and an easy fix, just creating a shortcut between Dataverse and Fabric, might not be that simple in the end. Region limitations apply and you may need to think about alternative approaches from the start.
Obviously, I learned a ton from this, and I will aim to prevent this exact same issue in the future. However, if I encounter the same situation again, I will opt for a capacity in another region. I would be okay with redeploying the entire solution from git to another workspace after deleting all of the items first, given that the limitation on git integration for the majority of Fabric items has now been lifted. By all means, I would try to prevent a tenant migration due to its complexity and impact on the wider platform.
As knowledge sharing is key, I hope sharing this experience helps others avoid running into the same issue. Even better would be if the limitation on cross-region shortcuts to Dataverse were lifted in the future.
Great post, Marc!
I am currently building the same kind of architecture for my company. I hope you will publish more technical posts on how to interface D365FO with MS Fabric.
Could somebody give me a solution on how to upsert Dataverse data to a Fabric Lakehouse using Data Factory? I tried to find solutions on the internet and it got me nowhere.
Hi RG,
I'm not sure if I completely get your question. Why specifically Data Factory? That will materialize the data in the lakehouse (copy into). Are shortcuts not an option for you? Why not? If you want to copy into the lakehouse, then you should be able to use the Dataverse SQL endpoint to build a pipeline, notebook, dataflow or anything of your preference to get the data into your lakehouse.
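To make that a bit more concrete, here is a rough sketch of an upsert in a Fabric notebook once you have the incremental Dataverse extract in a DataFrame (for example loaded via the SQL endpoint or a staged file). The staging path, target table and key column are placeholders.

```python
# Rough sketch of an upsert (MERGE) into a Lakehouse Delta table from a Fabric notebook.
# Assumes the incremental Dataverse extract has already been landed as `incoming_df`;
# the staging path, target table and key column are placeholders.
from delta.tables import DeltaTable

incoming_df = spark.read.format("delta").load("Tables/account_staging")  # placeholder staging data

target = DeltaTable.forName(spark, "account")  # existing Lakehouse target table
(
    target.alias("t")
    .merge(incoming_df.alias("s"), "t.accountid = s.accountid")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```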
—Marc