
Preview posts are written to soothe. Production teams read them like incident reviewers. They want to know what moves, what stays off, and what still needs proof before anyone re-enables a trigger.
This new migration experience is useful because it has brakes.
It lets you assess Synapse pipelines, see compatibility gaps, migrate supported pipelines into a Fabric workspace, map Synapse linked services to Fabric connections, and keep execution under control while you validate the result. That is not a one-click estate conversion. Good. One-click migration promises are how people end up explaining themselves on a call at 6 a.m.
This is triage before it is migration
The flow is split into three stages: assessment, review, and migration.
Assessment classifies each pipeline as Ready, Needs review, Coming soon, or Unsupported / Not compatible. You can export the assessment to CSV, which is more useful than it sounds. Most Synapse estates are not clean enough to reason about from memory. The CSV gives you a working list you can sort, assign, and use in a real plan.
The categories also give you an obvious first pass:
- Ready: pilot batch.
- Needs review: engineering work.
- Coming soon: stop thrashing and wait for support to land.
- Unsupported / Not compatible: redesign it.
The docs also recommend a phased approach. Start with Ready. Fix Needs review. Rerun the assessment. Sensible advice, which means some teams will try very hard to ignore it.
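The CSV plus the category list reduces to a small triage script. A sketch, assuming the export has a pipeline-name column and an assessment-result column; the column names and category strings here are guesses, so check them against your actual header row before trusting the output:

```python
import csv
from collections import defaultdict

# Map assessment categories to migration phases. Category strings are
# assumptions -- verify against the values in your exported CSV.
PHASES = {
    "Ready": "phase 1: pilot batch",
    "Needs review": "phase 2: engineering work",
    "Coming soon": "hold: wait for support",
    "Unsupported / Not compatible": "redesign",
}

def triage(csv_path):
    """Bucket pipelines from the assessment export into migration phases."""
    buckets = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Column names are assumed -- adjust to the real export schema.
            phase = PHASES.get(row["AssessmentResult"], "unclassified")
            buckets[phase].append(row["PipelineName"])
    return dict(buckets)
```

Sort each bucket, assign owners, and you have the working plan the docs are nudging you toward.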
The Spark-specific catch is the part people will miss
If a Synapse pipeline calls Notebook activities or Spark job definition activities, Microsoft says to migrate those Spark artifacts to Fabric first.
That is the whole game for Spark teams.
If the matching Fabric notebooks or Spark job definitions already exist, the migration flow can map those activities to the Fabric items. If they do not exist yet, those activities may stay unmapped or deactivated until you create the Fabric items and update the references.
So a migrated pipeline is not automatically a runnable Spark workload. It may be a correctly copied orchestration layer that still points to nowhere useful. If your team blurs that line, you are not “almost done.” You are halfway to a very dumb cutover.
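If you want to catch that gap before cutover instead of during it, scan the exported pipeline definitions for Spark-dependent activities whose referenced artifact has no Fabric counterpart yet. A sketch; the activity type strings and JSON shape are assumptions modeled on Synapse pipeline definitions, so verify both against a real export:

```python
import json

# Activity types that depend on Spark artifacts. The exact type strings
# are assumptions -- confirm against your exported pipeline JSON.
SPARK_ACTIVITY_TYPES = {"SynapseNotebook", "SparkJob"}

def unresolved_spark_activities(pipeline_json, fabric_items):
    """Return (activity, reference) pairs whose referenced notebook or
    Spark job definition is not yet present in the target workspace.
    `fabric_items` is the set of Fabric item names you have created."""
    pipeline = json.loads(pipeline_json)
    missing = []
    for act in pipeline.get("properties", {}).get("activities", []):
        if act.get("type") not in SPARK_ACTIVITY_TYPES:
            continue
        props = act.get("typeProperties", {})
        # Reference paths are assumed from the Synapse schema shape.
        ref = (props.get("notebook", {}).get("referenceName")
               or props.get("sparkJob", {}).get("referenceName"))
        if ref not in fabric_items:
            missing.append((act.get("name"), ref))
    return missing
```

An empty result means the orchestration layer at least points somewhere real. A non-empty one is your pre-cutover to-do list.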
Connection mapping is where “migrated” stops meaning “ready”
The migration flow then asks you to pick a Fabric workspace and map Synapse linked services to Fabric connections.
Here the product does something smart. It does not force fake completeness. Pipelines can migrate even when not every connection is mapped. The catch is explicit: activities that use unmapped connections remain deactivated.
That is the right tradeoff. A deactivated activity is annoying. A silently broken run is worse.
This is where the human work starts:
- make sure the right Fabric connections exist
- validate credentials and access
- check which activities are still deactivated
- confirm notebook and Spark job references point to the intended Fabric items
The tool can move metadata. It cannot tell you whether your team has actually finished the migration.
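The "which activities are still deactivated" check in that list is scriptable. A minimal sketch, assuming the migrated pipeline definitions follow the Data Factory convention of marking deactivated activities with `"state": "Inactive"` (treat that field name and value as an assumption and confirm against your own exports):

```python
import json

def deactivated_activities(pipeline_json):
    """List activities still deactivated in a migrated pipeline definition.
    The "state" / "Inactive" convention is assumed from the Data Factory
    pipeline schema -- verify it against a real exported definition."""
    pipeline = json.loads(pipeline_json)
    return [
        act["name"]
        for act in pipeline.get("properties", {}).get("activities", [])
        if act.get("state") == "Inactive"
    ]
```

Run it across the migrated estate and the output is a finish line you can actually measure against.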
“Triggers disabled by default” is the best sentence in the whole thing
After migration, triggers are disabled by default.
Perfect.
That removes one of the most common migration failure modes: an artifact gets copied, a dependency gets missed, the schedule fires anyway, and now production is teaching everyone a lesson. Keeping triggers off buys you a clean validation window.
The post-migration guidance is refreshingly sane:
- Validate connections and credentials.
- Re-enable and configure triggers as needed.
- Run end-to-end tests.
- Validate in a nonproduction environment before switching production workloads.
That is the order. Not the other way around.
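For the end-to-end tests, the output comparison can be dumb and still catch real drift. A sketch that diffs two run outputs row by row; the CSV format and the idea that both runs land results as files are illustrative, so adapt it to wherever your pipelines actually write:

```python
import csv

def compare_outputs(synapse_csv, fabric_csv):
    """Diff two run outputs: rows present in one run but not the other.
    Paths and format are illustrative -- point this at wherever the
    Synapse and Fabric runs actually land their results."""
    def rows(path):
        with open(path, newline="", encoding="utf-8") as f:
            return {tuple(r) for r in csv.reader(f)}
    a, b = rows(synapse_csv), rows(fabric_csv)
    return {"only_in_synapse": a - b, "only_in_fabric": b - a}
```

Two empty sets is not proof of equivalence, but a non-empty set is proof you are not done.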
There is one smaller operational detail worth noting. Migrated pipelines appear in the Fabric workspace with the source factory name prefixed. That helps when you are reviewing a mixed estate and trying to keep lineage straight.
What this preview changes
It does not finish the migration for you. It does make the early part less chaotic.
You get a readiness assessment instead of guesswork. You get a phased path instead of a big-bang leap. You get visible connection mapping. You get deactivated activities when dependencies are missing. You get triggers held back until you choose to turn them on.
That is real value. It turns migration from “hope plus calendar pressure” into something you can audit.
A rollout pattern worth trusting
If I were running this for a production Fabric Spark estate, I would keep it brutally simple.
- Migrate notebooks and Spark job definitions to Fabric first.
- Run the pipeline assessment and export the CSV.
- Start with Ready pipelines that already have their Fabric Spark counterparts in place.
- Map linked services to Fabric connections and treat every deactivated activity as unfinished work.
- Run end-to-end tests in nonproduction. Compare outputs, parameters, logging, and failure handling.
- Re-enable triggers only after the pipeline and its Spark dependencies survive contact with reality.
- Then work through the Needs review backlog and rerun assessment as you clear items.
It is not glamorous. It is how you keep a migration from turning into a weekly apology.
The practical takeaway
This preview matters because it is honest about the order of operations.
For Spark-heavy Synapse estates, the job is not “move everything to Fabric.” The job is “move Spark artifacts first, move orchestration second, validate connections and behavior, then turn execution back on.” The new experience supports that sequence instead of pretending the sequence does not matter.
So no, this is not a teleportation device for legacy pipelines. It is a staging area with guardrails. For teams running Spark in production, that is much more useful.
This post was written with help from anthropic/claude-opus-4-6