Operationalizing Fabric’s February 2026 feature drop: what actually matters for Spark teams

Microsoft’s monthly feature summaries have a familiar problem. They flatten every change into the same cheerful pitch. A new cell editor mode gets about the same oxygen as a moving security boundary. If you run Spark seriously on Fabric, that is useless. You need to know which items change architecture, which clean up the daily notebook grind, and which quietly add a new failure mode.

February’s release has all three. The headline is not “more features.” The headline is that Fabric keeps removing excuses for portal-driven, manually operated Spark environments. More of the platform can now be secured, composed, and managed through code. That is good news. It also means the easier Microsoft makes this, the more discipline you need on your side.

The change that actually alters architecture

Customer-managed key (CMK) support for notebook code

This is the big one.

Fabric notebooks can now run inside CMK-enabled workspaces, with notebook content and associated notebook metadata encrypted at rest using customer-owned keys in Azure Key Vault. Microsoft is not vague about the coverage. The post calls out cell source, cell output, and cell attachments.

If your team has been splitting its development pattern because notebooks were the odd object out in a tighter security model, that split is no longer structurally required. Plenty of enterprises ended up with an awkward arrangement: secure workspaces for governed assets, then a side channel for notebook authoring and iteration. February closes that gap.

The payoff is boring in the best way. Fewer workarounds. Fewer places where permissions drift. Fewer security reviews where someone has to explain why the code path lives outside the workspace standard applied to everything else.

It also changes the migration conversation. Teams that avoided notebooks in regulated environments can revisit that decision. Teams already on notebooks can ask whether a separate architecture still buys them anything except paperwork.

The catch is operational, not conceptual. Keys rotate. Policies get tightened. When notebook content and metadata sit under the same CMK envelope, key management stops being an abstract security exercise and starts touching the authoring surface your engineers use every day. If you do not test rotation and recovery in a non-production workspace first, you are volunteering to learn in public.

The workflow fix Spark teams needed months ago

Python notebooks finally get %run

This was overdue.

PySpark notebooks had a workable modularity story. Python notebooks did not. If you wanted shared setup logic, common helper functions, or a standardized preamble, you either copied code between notebooks or invented a packaging scheme to compensate for a missing primitive.

Now Python notebooks support %run. You can reference and execute other notebooks in the same execution context, then directly use the functions and variables defined there. That is the difference between notebook code as a pile of local accidents and notebook code as something you can organize on purpose.

There is one limitation, and it matters: today %run in Python notebooks supports notebook items only. It does not yet run .py modules from the notebook resources folder. Microsoft says that support is coming soon. Fine. “Coming soon” is not an architecture. Build around notebook references now, and treat resource-folder module execution as a future upgrade if it arrives on time.

The immediate move for most teams is simple. Pull duplicated utility code into shared notebooks. Keep them small. Keep ownership clear. Do not turn %run into a dependency swamp where every notebook imports half the workspace and nobody can explain execution order without drawing a crime-scene diagram.
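As a minimal sketch of the shared-helper pattern (the notebook name and the helper function are hypothetical, not anything Fabric ships):

```python
# Contents of a small shared notebook, e.g. "nb_shared_helpers" (hypothetical name).
# Keep helpers like this small, and keep ownership clear.

def standardize_columns(columns):
    """Normalize column names: trim whitespace, lowercase, snake_case."""
    return [c.strip().lower().replace(" ", "_") for c in columns]


# In a consuming Python notebook, a cell like the one below pulls those
# definitions into the current execution context:
#
#     %run nb_shared_helpers
#
# After %run, the functions and variables defined in the helper notebook
# are available directly:
print(standardize_columns(["Order ID", " Customer Name "]))
```

Note that, per the limitation above, the target of `%run` has to be a notebook item today, not a `.py` file in the resources folder.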

Version history now tells you where a change came from

This sounds like a minor quality-of-life improvement until you have to debug a bad deployment before the second cup of coffee.

Fabric notebook version history now labels the source of each saved version. Direct edits in the notebook, Git synchronizations, deployment pipeline updates, and publishing via VS Code all show up as distinct origins. That one label removes a stupid amount of ambiguity.

Before this, the question “what changed?” was followed by the more annoying question “through which path?” In a serious CI/CD setup, that distinction is the whole investigation. A manual portal edit points you to one human. A Git sync points you to a repo change. A deployment pipeline update points you to release plumbing. VS Code publishing points you somewhere else again. Same broken notebook, different root cause.

If your team uses more than one of these paths, update the runbook. The first step in notebook incident triage should now be checking the version source before anyone starts diffing content like a raccoon digging through a dumpster.
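One way to bake that into the runbook is a first-response lookup keyed on the version-source label. The label strings and responder steps below are illustrative placeholders, not Fabric's exact values:

```python
# Hypothetical first-response routing for notebook incidents, keyed on the
# version-source label now shown in Fabric's version history.
VERSION_SOURCE_TRIAGE = {
    "manual_edit": "Find the editing user; ask what changed and why.",
    "git_sync": "Diff the linked repo commit; check the merging PR.",
    "deployment_pipeline": "Inspect the release run and its stage config.",
    "vscode_publish": "Check the publishing engineer's branch state.",
}

def first_response(version_source: str) -> str:
    """Return the triage step for a version source, with a safe default."""
    return VERSION_SOURCE_TRIAGE.get(
        version_source, "Unknown source: escalate and audit access paths."
    )

print(first_response("git_sync"))
```

The point is not the dictionary; it is that the version-source label becomes step one of triage instead of something discovered twenty minutes into a content diff.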

Full-size mode is small, but not trivial

Full-size mode lets a single notebook cell fill the workspace for editing. That is not glamorous. It is just useful.

Large SQL blocks, ugly transformation cells, and screenshared code reviews all get easier when the interface stops fighting you. Features like this do not make press-release people happy, but they do shave friction off work that happens every day. I would not redesign an architecture around it. I would absolutely use it.

The broader pattern hiding inside the release

Fabric is making Spark more reachable from both directions

Two February items matter together.

The new Microsoft ODBC Driver for Fabric Data Engineering gives external applications and ODBC-compatible tools a supported path into Spark SQL on Fabric. Microsoft describes it as ODBC 3.x compliant, backed by Livy APIs, and built for OneLake and Lakehouse data with Entra ID authentication, proxy support, session reuse, and Spark SQL coverage that looks designed for real workloads instead of demos.

Then there is Semantic Link 0.13.0. That release expands management coverage across lakehouses, reports, semantic models, SQL endpoints, and Spark. Microsoft is explicit about the direction: creating and managing lakehouses and tables, cloning and rebinding reports, refreshing and monitoring semantic models, and administering SQL and Spark settings from code.

Put those together and the platform’s direction is obvious. Fabric wants Spark environments that can be queried from outside and administered from inside code, without the portal as the center of the universe. That is the right direction. The portal is useful. The portal is not a control plane.

This is also where teams get themselves into trouble. The moment workspace operations become scriptable, governance stops being a policy deck and becomes a permissions design problem. If every engineer can programmatically create lakehouses, modify Spark settings, and rebind reports, then congratulations: you have built an accidental infrastructure platform. Maybe that is fine. Maybe it is a terrible idea. Decide before the scripts proliferate.

My bias is blunt. Treat Semantic Link as production infrastructure tooling, not as a convenience library. Set conventions early. Define who can do what. Log changes. Review the scripts that touch shared assets. Otherwise you will end up with beautiful automation and feral workspaces.
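A lightweight starting point is forcing every scripted change through a wrapper that records who did what. This is a generic Python sketch, not a Semantic Link API; the `create_lakehouse` function is a hypothetical stand-in for whatever real call your scripts make:

```python
import functools
import getpass
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("workspace-audit")

def audited(action: str):
    """Decorator: log who ran a workspace-mutating operation, and when."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            audit_log.info(
                "%s | user=%s | action=%s | args=%s",
                datetime.now(timezone.utc).isoformat(),
                getpass.getuser(),
                action,
                kwargs or args,
            )
            return fn(*args, **kwargs)
        return inner
    return wrap

@audited("create_lakehouse")
def create_lakehouse(name: str) -> str:
    # Placeholder for the real Semantic Link / REST call.
    return f"created:{name}"

print(create_lakehouse("analytics_dev"))
```

Routing shared-asset changes through something like this costs almost nothing and gives you an answer to "who touched this workspace?" before the question becomes urgent.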

The quiet footgun in the admin section

Fabric identity limits now scale higher, but Fabric will not save you from bad math

Fabric has raised the default tenant limit for Fabric identities from 1,000 to 10,000. That is a real scale change, and for some organizations it removes an artificial ceiling that was starting to pinch.

It also lets admins set custom limits and manage them through the Update Tenant Setting REST API. Good. That is how this should work.

The problem is the warning Microsoft slips into the text: Fabric does not validate whether your custom limit fits within your Entra ID resource quota.

That means the setting feels authoritative while depending on an external quota boundary it does not enforce. In other words, the UI and API will happily let you declare ambition. Entra ID is the system that decides whether ambition has a permit.

So before anyone bumps the limit because “10,000 sounds better,” check the Entra side first. If you automate the setting, add that quota check to the automation. This is not exotic engineering. It is basic adult supervision.
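The check itself is simple arithmetic. The sketch below assumes you can fetch the Entra-side quota and current identity count by some means; both values are passed in as plain arguments here because those lookups are placeholders, not a documented API:

```python
def validated_identity_limit(requested_limit: int,
                             entra_quota: int,
                             identities_in_use: int) -> int:
    """Return a Fabric identity limit that Entra ID can actually honor.

    Fabric will accept whatever custom limit you set; it does not check
    the Entra ID resource quota, so the automation has to do it instead.
    """
    if requested_limit <= identities_in_use:
        raise ValueError("Requested limit is below current identity usage.")
    if requested_limit > entra_quota:
        raise ValueError(
            f"Requested limit {requested_limit} exceeds Entra quota {entra_quota}."
        )
    return requested_limit

# Safe: 10,000 fits a (hypothetical) Entra quota of 12,000.
print(validated_identity_limit(10_000, entra_quota=12_000, identities_in_use=1_800))
```

Run this guard before the call to the Update Tenant Setting REST API, not after, and fail loudly rather than clamping silently.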

What I would do this week

If you own Spark on Fabric, February’s release suggests a short, unromantic punch list.

  • Review whether CMK support lets you collapse any split workspace pattern built around notebook restrictions.
  • Start using %run in Python notebooks for shared helpers, but keep the dependency graph understandable.
  • Update notebook incident runbooks so version-source labels are part of first response.
  • Decide whether the ODBC driver and Semantic Link belong in your standard platform toolkit, then put guardrails around both before usage spreads.
  • Check Entra ID quotas before changing Fabric identity limits, especially if a script is going to do it for you.

That is the real shape of the month. A nicer notebook editor is fine. A new driver is nice. The deeper story is that Fabric keeps shifting Spark toward a model where security, reuse, and administration happen in code instead of in tribal knowledge and portal muscle memory. That is progress. It also means the teams that win will be the ones that pair new capability with restraint, because the platform is getting powerful enough to automate your mistakes at scale.

This post was written with help from anthropic/claude-opus-4-6
