What the February 2026 Fabric Influencers Spotlight means for your Spark team

Microsoft published its February 2026 Fabric Influencers Spotlight last week. Twelve community posts. MVPs and Super Users. Most people skim the list. Maybe bookmark a link. Move on.

Don’t.

Three of those posts carry signals that should change how your Spark data-engineering team operates in production. Not next quarter. Now.

Signal 1: Get your production code out of notebooks

Matthias Falland’s Fabric Friday episode makes the case plainly: notebooks are great for development but risky in production. That framing resonates with a lot of production teams—and for good reason.

Here’s the nuance. Microsoft has said there’s no inherent difference in performance or monitoring capabilities between Spark Job Definitions and notebooks. Both produce Spark logs. Both run on the same compute. The gap isn’t in what the platform offers. It’s in what each artifact encourages.

Notebooks encourage improvisation. Someone edits a cell at 2 AM. Cell state carries between runs. An error gets swallowed inside an output cell and nobody notices until downstream tables go stale. That’s not a platform limitation. That’s a human-factors problem. And production environments are where human-factors problems become outages.

Spark Job Definitions push you toward cleaner habits. One file per job. No cell state. Explicit parameters. Better modularity. The execution boundary is sharper, and sharper boundaries make failures easier to diagnose.

If your team runs notebooks on a schedule through pipelines, here’s the migration:

  • Audit every notebook that runs on a schedule or gets triggered by a pipeline. Count them. You’ll be surprised.
  • Extract the transformation logic into standalone Python or Scala files (a minimal skeleton follows this list). One file per job. No magic. No “run all cells.”
  • Create Spark Job Definitions for each. Map your existing notebook parameters to SJD parameters. They work the same way—just without the cell baggage.
  • Wire them into your pipeline activities. Replace the notebook activity with an SJD activity. The orchestration stays identical.
  • Keep the notebooks for development and ad-hoc exploration. That’s where they shine.
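
Here’s a minimal sketch of what step 2 can produce. The table names and parameters are placeholders for your own; the point is the shape: one entry point, explicit arguments, no cell state.

    import argparse

    from pyspark.sql import SparkSession

    def main() -> None:
        # Explicit parameters replace the notebook's parameter cells.
        parser = argparse.ArgumentParser()
        parser.add_argument("--source-table", required=True)
        parser.add_argument("--target-table", required=True)
        parser.add_argument("--run-date", required=True)
        args = parser.parse_args()

        # In a Spark Job Definition the cluster provides the session;
        # getOrCreate() picks it up instead of building a new one.
        spark = SparkSession.builder.getOrCreate()

        df = spark.read.table(args.source_table)
        df = df.filter(f"load_date = '{args.run_date}'")
        # ... transformation logic extracted from the notebook ...
        df.write.mode("overwrite").saveAsTable(args.target_table)

    if __name__ == "__main__":
        main()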

A team of three can typically convert a dozen notebooks in a week. The hard part isn’t the migration. It’s the decision to start.

Signal 2: Direct Lake changes how you write to your lakehouse

Pallavi Routaray’s post on Direct Lake architecture is the most consequential piece in the whole spotlight. Easy to miss because the title sounds like a Power BI topic.

It’s not. It’s a Spark topic.

Direct Lake mode reads Parquet files directly from OneLake. No import copy. No DirectQuery overhead. But it only works well if your Spark jobs write data in a way that Direct Lake can consume efficiently. Get the file layout wrong and your semantic model falls back to DirectQuery silently. Performance craters. Your BI team blames you. Nobody knows why.

Here’s the production checklist:

  • Enable V-Order optimization on your Delta tables. V-Order sorts and compresses Parquet files for Direct Lake’s columnar read path. Here’s the catch: V-Order is disabled by default in new Fabric workspaces, because the default profile is tuned for write-heavy data engineering workloads. If your workspace was created recently, you need to enable it explicitly. Check your workspace settings, or set it at the table property level (see the snippet after this list). Don’t assume it’s on.
  • Control your file sizes. Microsoft’s guidance is clear: keep the number of Parquet files small and use large row groups. If your Spark jobs produce thousands of tiny files, Direct Lake will hit its file-count limits and fall back. Run OPTIMIZE on your Delta tables after write operations. Compact aggressively.
  • Partition deliberately. Over-partitioning creates too many small files. Under-partitioning creates files that are too large for efficient column pruning. Partition by the grain your BI team actually filters on. Ask them. Don’t guess.
  • Watch for schema drift. Direct Lake models bind to specific columns at creation time. If your Spark job adds or renames a column, the semantic model breaks. Coordinate schema changes explicitly. No silent ALTER TABLE commands on Friday afternoons.
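
For the first two checklist items, here’s a sketch in PySpark. The table name is a placeholder; the property and command names follow Microsoft’s V-Order documentation, but confirm them against your runtime version before relying on them.

    from pyspark.sql import SparkSession

    # In a Fabric notebook or SJD the session already exists;
    # getOrCreate() reuses it.
    spark = SparkSession.builder.getOrCreate()

    # Enable V-Order at the table level rather than assuming the
    # workspace default. Table name is a placeholder.
    spark.sql("""
        ALTER TABLE sales_gold
        SET TBLPROPERTIES ('delta.parquet.vorder.enabled' = 'true')
    """)

    # Compact small files into larger, V-Ordered ones after big writes.
    spark.sql("OPTIMIZE sales_gold VORDER")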

The big risk here: most Spark teams don’t know their output feeds a Direct Lake model. The BI team built it after the fact. Start by mapping which of your Delta tables have Direct Lake semantic models sitting on top. If you don’t know, find out today.

Signal 3: CI/CD for Fabric just got real

Kevin Chant’s post covers the fabric-cicd tool reaching general availability for configuration-based deployments with Azure DevOps. The GA is confirmed, and it matters more than it sounds.

Until now, deploying Fabric artifacts across environments—dev, test, prod—was either manual or held together with custom scripts that broke every time the API changed. The fabric-cicd tool gives you a supported, versioned path.

For Spark teams:

  • Your Spark Job Definitions, lakehouse configurations, and pipeline definitions can live in source control and deploy through a proper pipeline. No more “I’ll just update it in the portal.”
  • Configuration differences between environments—connection strings, capacity settings, lakehouse names—get handled through configuration files. Not by editing items in the portal after deployment.
  • You can roll back. You can diff. You can review before promoting to production. The basic hygiene that every other engineering discipline has had for decades.

Here’s the migration path:

  • Install fabric-cicd from the latest release; a minimal invocation sketch follows this list. Follow Chant’s posts for the Azure DevOps YAML pipeline specifics.
  • Export your existing workspace items to a Git repository. Fabric’s Git integration handles this natively.
  • Build your environment-specific configuration files. One per environment. Map the items that differ: capacity, lakehouse, connections.
  • Set up your Azure DevOps pipeline to run fabric-cicd on merge to main. Start with dry-run mode until you trust it.
  • Remove portal-level edit access for production workspaces. This is the hard step. It’s also the one that prevents the next outage.
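
For the first run, a minimal sketch using the library’s documented entry points. The workspace ID and repository path are placeholders; check the fabric-cicd docs for the options your version supports.

    from fabric_cicd import FabricWorkspace, publish_all_items

    # Placeholders: your test workspace GUID and local repo path.
    workspace = FabricWorkspace(
        workspace_id="00000000-0000-0000-0000-000000000000",
        environment="TEST",
        repository_directory="./fabric-items",
        item_type_in_scope=["Notebook", "SparkJobDefinition"],
    )

    # Full publish of everything in scope.
    publish_all_items(workspace)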

The deeper pattern

These three signals connect. Falland tells you to move your Spark code into artifacts built for production discipline. Routaray tells you how to write your output so downstream models don’t silently degrade. Chant tells you how to deploy the whole thing reliably across environments.

That’s a production pipeline. End to end. Code that runs cleanly, writes data correctly, and deploys safely.

The February spotlight also includes Open Mirroring hands-on guidance from Inturi Suparna Babu and a Fabric Data Agent walkthrough from Shubham Rai. Both are worth a read if you’re evaluating data replication strategies or AI-assisted query patterns over your lakehouse. But for Spark teams running production workloads, the three signals above are where the action is.

Your rollout checklist for March

  1. Inventory all scheduled notebooks. Tag them by risk: frequency, data volume, downstream dependencies.
  2. Convert the highest-risk notebook to a Spark Job Definition this week. Validate it runs identically.
  3. Audit Delta table write patterns for any table that feeds a Direct Lake model. Check that V-Order is enabled. Run OPTIMIZE to compact files.
  4. Install fabric-cicd. Connect your workspace to Git. Build your first environment config.
  5. Pick one pipeline to deploy through CI/CD end-to-end. Prove it works before scaling.

Five items. All concrete. All doable in March.

The community did the research. Your job is to act on it.

This post was written with help from anthropic/claude-opus-4-6

fabric-cicd Is Now Officially Supported — Here’s Your Production Deployment Checklist

Three days ago, Microsoft promoted fabric-cicd from community project to officially supported tool. That Python library your team has been running in a “we’re still figuring out our deployment process” sort of way now carries Microsoft’s name and their support commitment.

That shift matters in three concrete places. First, your compliance team can stop asking “is this thing even supported?” Second, you can open Microsoft support tickets when it breaks. Third, the roadmap is no longer a volunteer effort. Features will land faster. Bugs will get fixed on a schedule.

But here’s where most teams stall. They read the announcement, nod approvingly, and then do absolutely nothing different. The notebook still gets deployed by clicking sync in the browser. The lakehouse GUID is still hardcoded. The “production” workspace is still one bad merge away from serving yesterday’s dev code to the entire analytics team.

An announcement without an execution plan is just news. Let’s build the plan.

What fabric-cicd Does (and Where It Stops)

Understand the boundaries before you reorganize your deployment story. fabric-cicd is a Python library. You give it a Git repository, a target workspace ID, and a list of item types. It reads the item definitions from the repo, resolves dependencies between them, applies parameter substitutions, and pushes everything to the workspace. It can also remove orphan items that exist in the workspace but no longer appear in your repo.

It supports 25 item types: Notebooks, SparkJobDefinitions, Environments, Lakehouses, DataPipelines, SemanticModels, Warehouses, and 18 others. Every deployment is a full deployment. No commit diffs, no incremental updates. The entire in-scope state gets pushed every time.

Where it stops: it won’t manage your Spark compute sizing, it won’t migrate lakehouse data between environments, and it won’t coordinate multi-workspace transactions atomically. Those gaps are yours to fill. That’s not a weakness. A tool that owns its scope and does it well beats one that covers everything and nails nothing.

Prerequisite Zero: Get Your Git House in Order

This is the part that takes longer than anyone budgets for.

fabric-cicd reads from a Git repository. If your Fabric workspace isn’t connected to one, the tool has nothing to deploy. And plenty of Spark teams are still running workspaces where notebooks were born in the browser, edited in the browser, and will die in the browser without ever touching version control.

Connect your workspace to Azure DevOps or GitHub through Fabric’s Git Integration. Every notebook, every Spark job definition, every environment configuration goes into source control. All of it.

If your repo currently contains items named notebook_v2_final_FINAL_USE_THIS_ONE — and honestly, most of us have been there — now’s the time to clean that up before automating. Automating a disorganized repo just moves the disorganization faster. Getting the foundation right first saves real time down the road.

Your target state when this prerequisite is done: a main branch that mirrors production, feature branches for development work, and a merge strategy the whole team agrees on. fabric-cicd reads from a directory on disk. What it reads needs to be coherent.

The Parameter File: The Single Most Important Artifact

The parameter.yml file is where fabric-cicd learns the difference between your dev environment and production. Without it, you’re deploying identical configurations everywhere, which means your production notebooks will happily point at your dev lakehouse.

For Spark teams, four categories of parameter entries matter:

Default Lakehouse IDs. Every notebook binds to a lakehouse by GUID. In dev, that GUID points to your sandbox with test data. In production, it points to the lakehouse with three months of curated, retention-managed data. The parameter file swaps those GUIDs at deploy time. Miss one, and your production job reads from a lakehouse that got wiped last Tuesday.

Default Lakehouse Workspace IDs. If your production lakehouse lives in a separate workspace from dev (and it should), this mapping covers the workspace GUID as well. Lakehouse GUIDs alone aren’t enough when workspaces differ between environments.

Connection strings. Any notebook that pulls from an external data source needs environment-specific connection details. Hardcoded connection strings are how you end up running your production Spark cluster against a dev SQL database. That kind of mismatch can get expensive quickly — and it’s entirely preventable with proper parameterization.

Notebook parameter cells. Fabric lets you define parameter cells in notebooks. Every value that changes between environments belongs there, referenced by parameter.yml. Not in a comment. Not in a variable halfway down the notebook. In the parameter cell, where the tooling can find it.

The mechanism is find-and-replace. fabric-cicd scans your repository files for specific strings and swaps in the values for the target environment. This means the GUIDs in your repo must be consistent. If someone manually edited a lakehouse ID through the browser after a sync, the parameter file won’t catch the mismatch. Deployments will succeed. The notebook will fail. Those are the worst kind of bugs: silent ones.
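
To make that concrete, here’s a sketch of a find_replace entry. The GUIDs are placeholders, and the exact schema varies between fabric-cicd versions, so check the library’s documentation for yours.

    find_replace:
        # Dev lakehouse GUID exactly as it appears in the repo files.
        - find_value: "11111111-1111-1111-1111-111111111111"
          replace_value:
              TEST: "22222222-2222-2222-2222-222222222222"
              PROD: "33333333-3333-3333-3333-333333333333"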

Build Your Pipeline in Four Stages

Here’s a pipeline structure built for Spark teams, in the order things should execute:

Stage 1: Validate. Run your tests before anything deploys. If you have PySpark unit tests (even five of them), execute them against a local SparkSession or a lightweight Fabric environment. This catches broken imports, renamed functions, and bad type signatures. The goal isn’t 100% test coverage. The goal is catching the obvious failures before they reach a workspace anyone else depends on.
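
Even one test in this shape earns its keep. A sketch with pytest and a local SparkSession; the transform is a stand-in for logic extracted from your own notebooks.

    import pytest
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    def add_revenue_column(df):
        # Stand-in for a transform extracted from a notebook.
        return df.withColumn("revenue", F.col("quantity") * F.col("price"))

    @pytest.fixture(scope="session")
    def spark():
        # Local session: catches broken imports and bad type
        # signatures without touching a Fabric capacity.
        return SparkSession.builder.master("local[2]").getOrCreate()

    def test_revenue_is_quantity_times_price(spark):
        df = spark.createDataFrame([(2, 5.0)], ["quantity", "price"])
        assert add_revenue_column(df).first()["revenue"] == 10.0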

Stage 2: Build. Initialize the FabricWorkspace object with your target workspace ID, environment name, repository path, and scoped item types. For Spark teams, start with ["Notebook", "SparkJobDefinition", "Environment", "Lakehouse"]. Do not scope every item type on day one. Start with the items you deploy weekly. Expand scope after the first month, when you’ve seen how it behaves.

Stage 3: Deploy. Call publish_all_items(). The tool resolves dependency ordering, so if a notebook depends on a lakehouse that depends on an environment configuration, the sequence is handled. After publishing, call unpublish_all_orphan_items() to clean up workspace items that no longer appear in the repo. Skipping orphan cleanup means your workspace accumulates dead items that confuse the team and waste capacity.

Stage 4: Verify. This is the stage teams skip, and the one that saves them. After deployment, run a smoke test against the target workspace. Can the notebook open? Does it bind to the correct lakehouse? Can a lightweight execution complete without errors? A deployment that returns exit code zero but leaves notebooks pointing at a deleted lakehouse is not a successful deployment. Your pipeline shouldn’t treat it as one.
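
Stages 2 and 3 in code, with a stub for stage 4. This is a sketch built on fabric-cicd’s documented entry points; the workspace ID and repository path are placeholders, and the verify step is yours to implement.

    from fabric_cicd import (
        FabricWorkspace,
        publish_all_items,
        unpublish_all_orphan_items,
    )

    # Stage 2: Build. Placeholders for your workspace GUID and repo path.
    workspace = FabricWorkspace(
        workspace_id="00000000-0000-0000-0000-000000000000",
        environment="PROD",
        repository_directory="./fabric-items",
        item_type_in_scope=["Notebook", "SparkJobDefinition",
                            "Environment", "Lakehouse"],
    )

    # Stage 3: Deploy. Full publish, then remove orphaned items.
    publish_all_items(workspace)
    unpublish_all_orphan_items(workspace)

    # Stage 4: Verify. Replace with a real smoke test, e.g. a REST
    # call confirming the notebook binds to the expected lakehouse.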

Guardrails Worth the Setup Cost

Guardrails turn a pipeline from a deployment mechanism into a safety net. These four are worth the setup time:

Approval gates. Require explicit human approval before any deployment to Production. fabric-cicd won’t enforce this for you. Wire it into your pipeline platform: Azure DevOps release gates, GitHub Actions environments with required reviewers. The first time a broken merge auto-deploys to production, you’ll wish you had spent the twenty minutes setting this up.

Service principal authentication. Run your pipeline under a service principal, not a user account. Give the principal workspace contributor access on the target workspace. Nothing more. When someone leaves the team or changes roles, deployments keep working because they never depended on that person’s credentials.

Tested rollback. Since fabric-cicd does full deployments from the repo, rollback means redeploying the last known-good commit. Conceptually clean. But “conceptually clean” doesn’t help you during an incident when stakeholders need answers fast. Test the rollback. Revert a deployment on a Tuesday afternoon when nothing is on fire. Confirm the workspace returns to its previous state. An untested rollback plan has a way of surprising you at the worst possible moment.

Deployment artifacts. Every pipeline run should log which items deployed, which parameters were substituted, and which orphans were removed. When production breaks and someone asks “what changed since yesterday?”, the answer should take thirty seconds, not three hours of comparing workspace states by hand.
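
In Azure DevOps YAML, the approval gate and service principal guardrails hang off a deployment job like this sketch. The environment name, variable group, and deploy.py script are placeholders; the required approvers themselves are configured on the environment in the portal, not in YAML.

    # Fragment of azure-pipelines.yml.
    stages:
      - stage: DeployProd
        jobs:
          - deployment: Deploy
            environment: fabric-prod        # approvals set on this environment in the portal
            pool:
              vmImage: ubuntu-latest
            variables:
              - group: fabric-prod-secrets  # service principal credentials
            strategy:
              runOnce:
                deploy:
                  steps:
                    - script: python deploy.py --environment PROD
                      displayName: Run fabric-cicd deployment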

Spark-Specific Problems Nobody Warns You About

General CI/CD guidance covers the broad strokes. Spark teams hit problems that live in the details:

Lakehouse bindings are buried in notebook content. The notebook-content.py file contains lakehouse and workspace GUIDs. If your parameter.yml misses even one of these, the production notebook opens to a “lakehouse not found” error. Audit every notebook, including the utility notebooks that other notebooks call with %run. Those hidden dependencies are where the bindings go wrong.
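
One way to run that audit: a sketch that scans every notebook-content.py for GUIDs and flags any that parameter.yml doesn’t cover. It assumes the find_replace layout shown earlier; adjust to your schema.

    import re
    from pathlib import Path

    import yaml  # pip install pyyaml

    GUID = re.compile(r"[0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}")

    # Assumes the fabric-cicd find_replace layout; adjust as needed.
    params = yaml.safe_load(Path("parameter.yml").read_text())
    covered = {e["find_value"].lower() for e in params.get("find_replace", [])}

    for path in Path(".").rglob("notebook-content.py"):
        for guid in GUID.findall(path.read_text()):
            if guid.lower() not in covered:
                print(f"{path}: GUID {guid} not covered by parameter.yml")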

Environment items gate notebook execution. When your Spark notebooks depend on a custom Environment with specific Python libraries or Spark configuration properties, that Environment must exist in the target workspace before the notebooks arrive. The fabric-cicd dependency resolver handles this automatically, but only if Environment is in your item_type_in_scope. Scope just Notebook without Environment, and you’ll get clean deployments followed by runtime failures when the expected libraries don’t exist.

SparkJobDefinitions are not notebooks. SJDs carry executor counts, driver memory settings, reference files, and command-line arguments. All environment-specific values in these properties need coverage in your parameter file. Teams that parameterize their notebooks thoroughly and forget about their SJDs discover the gap when a production batch job runs with dev-sized executors and takes four times longer than expected.

Full deployment at scale needs scoping. Fifty notebooks deploy in minutes. Three hundred notebooks take longer and increase your blast radius. If your workspace has grown large, segment your repository by domain or narrow item_type_in_scope per pipeline to keep deployment times predictable and failures contained to a known set of items.

A Four-Week Migration Path

Starting from zero, here’s a timeline that’s aggressive but achievable:

Week 1: Git integration. Connect your workspace to source control. Rename items that need renaming. Agree on a branching strategy with the team. Write it down. Nothing deploys this week. This is foundation work, and skipping it makes everything after it harder.

Week 2: First deployment. Install fabric-cicd, write your initial parameter.yml, and run a deployment to a test workspace from the command line. Intentionally break the lakehouse binding in the parameter file. See what the error looks like. Fix it. Run it again. You want the team to recognize deployment failures before they encounter one under pressure.

Week 3: Pipeline construction. Build the CI/CD pipeline for Dev-to-Test promotion. Add approval gates, service principal auth, logging, and the verify stage. Run the pipeline ten times. Deliberately introduce a bad merge and watch the pipeline catch it. If it doesn’t catch it, fix the pipeline.

Week 4: Production extension. Extend the pipeline to include Production as a target. Add smoke tests. Test your rollback procedure. Write the runbook. Walk the team through it. Make sure at least two people can operate the pipeline without you in the room.

Four weeks. Not a quarter. Not a planning exercise that stalls in sprint three. A month of focused, methodical work that moves your Spark team from manual deployment to a process that runs the same way every time, whether it’s Tuesday at noon or Saturday at midnight.

The Real Takeaway

Microsoft giving fabric-cicd the official stamp means enterprise teams can stop hesitating. The library will get more attention, faster bug fixes, and broader item type support going forward.

But the tool is only half the story. A perfectly automated pipeline that deploys unparameterized notebooks to the wrong lakehouse is worse than manual deployment, because at least manual deployment forces someone to look at what they’re pushing. Automation works best when it’s built on a disciplined foundation — the checklist, the parameter file, the tested rollback, the verify stage.

Build the checklist. Work the checklist. Invest in the hard parts now, and they’ll pay you back in every deployment after.

This post was written with help from anthropic/claude-opus-4-6

When ‘Native Execution Engine’ Doesn’t Stick: Debugging Fabric Environment Deployments with fabric-cicd

If you’re treating Microsoft Fabric workspaces as source-controlled assets, you’ve probably started leaning on code-first deployment tooling (either Fabric’s built-in Git integration or community tooling layered on top).

One popular option is the open-source fabric-cicd Python library, which is designed to help implement CI/CD automations for Fabric workspaces without having to interact directly with the underlying Fabric APIs.

For most Fabric items, a ‘deploy what’s in Git’ model works well—until you hit a configuration that looks like it’s in source control, appears in deployment logs, but still doesn’t land in the target workspace.

This post walks through a real example from fabric-cicd issue #776: an Environment artifact where the “Enable native execution engine” toggle does not end up enabled after deployment, even though the configuration appears present and the PATCH call returns HTTP 200.

Why this setting matters: environments are the contract for Spark compute

A Fabric environment contains a collection of configurations, including Spark compute properties, that you can attach to notebooks and Spark jobs.

That makes environments a natural CI/CD unit: you can standardize driver/executor sizing, dynamic executor allocation, and Spark properties across many workloads.

Environments are also where Fabric exposes the Native Execution Engine (NEE) toggle under Spark compute → Acceleration.

Microsoft documents that enabling NEE at the environment level causes subsequent jobs and notebooks associated with that environment to inherit the setting.

NEE reads as enabled in source, but ends up disabled in the target

In the report, the Environment’s source-controlled Sparkcompute.yml includes enable_native_execution_engine: true along with driver/executor cores and memory, dynamic executor allocation, Spark properties, and a runtime version.

The user then deploys to a downstream workspace (PPE) using fabric-cicd and expects the deployed Environment to show the Acceleration checkbox enabled.

Instead, the target Environment shows the checkbox unchecked (false), even though the deployment logs indicate that Spark settings were updated.

A key signal in the debug log: PATCH request includes the field, response omits it

The issue includes a fabric-cicd debug snippet showing a PATCH to an environments .../sparkcompute endpoint where the request body contains enableNativeExecutionEngine set to true.

However, the response body shown in the issue includes driver/executor sizing and Spark properties but does not include enableNativeExecutionEngine.

The user further validates the discrepancy by exporting/syncing the PPE workspace back to Git: the resulting Sparkcompute.yml shows enable_native_execution_engine: false.

What to do today: treat NEE as a “verify after deploy” setting

Until the underlying behavior is fixed, assume this flag can drift across environments even when other Spark compute properties deploy correctly.

Practically, that means adding a post-deploy verification step for downstream workspaces—especially if you rely on NEE for predictable performance or cost.

Checklist: a lightweight deployment guardrail

Here’s a low-friction way to catch this class of issue early (even if you don’t have an automated API read-back step yet):

  • Ensure the source-controlled Sparkcompute.yml includes enable_native_execution_engine: true.
  • Deploy with verbose/debug logging and confirm the PATCH body contains enableNativeExecutionEngine: true.
  • After deployment, open the target Environment → Spark compute → Acceleration and verify the checkbox state (a sketch for automating this read-back follows the checklist).
  • Optionally: export/sync the target workspace back to Git and confirm the exported Sparkcompute.yml matches your intent.
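
If you want to automate that read-back, here’s a sketch. The URL shape mirrors the sparkcompute endpoint that the PATCH in the issue targets, and the field name comes from the issue’s debug log; treat both as assumptions to verify against the current Fabric REST documentation.

    import requests

    def nee_enabled(token: str, workspace_id: str, environment_id: str) -> bool:
        # Endpoint shape inferred from the PATCH seen in issue #776.
        url = (f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
               f"/environments/{environment_id}/sparkcompute")
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        resp.raise_for_status()
        # Field name as shown in the issue's PATCH body. Its absence in
        # the response is exactly the drift we're checking for.
        return resp.json().get("enableNativeExecutionEngine") is True

The same call pattern with PATCH is the basis for the automated remediation step described under the workarounds below.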

Workarounds (choose your tradeoff)

If you’re blocked, the simplest workaround is operational: enable NEE in the target environment via the UI after deployment and treat it as a manual step until the bug is resolved.

If you need full automation, a more robust approach is to add a post-deploy validation/remediation step that checks the environment setting and re-applies it if it’s not set.

Reporting and tracking

If you’re affected, add reproducibility details (runtime version, library version, auth mode) and any additional debug traces to issue #776 so maintainers can confirm whether the API ignores the field, expects a different contract, or requires a different endpoint/query parameter.

Even if you don’t use fabric-cicd, the pattern is broadly relevant: CI/CD is only reliable when you can round-trip configuration (write, then read-back to verify) for each control surface you’re treating as ‘source of truth.’

Closing thoughts

Native Execution Engine is positioned as a straightforward acceleration you can enable at the environment level to benefit subsequent Spark workloads.

When that toggle doesn’t deploy as expected, the pragmatic response is to verify after deploy, document the drift, and keep your CI/CD pipeline honest by validating the settings you care about—not just the HTTP status code.

This post was written with help from ChatGPT 5.2