What SQL database in Fabric actually means for your Spark pipelines

There is a particular kind of excitement that sweeps through data engineering teams when Microsoft announces a new database option. It is the same mixture of curiosity and low-grade dread you might feel upon learning that your neighborhood is getting a new highway interchange. Useful, probably. Disruptive, definitely. Someone is going to have to figure out the on-ramps.

SQL database in Fabric became generally available in November 2025. Built on the same SQL Database Engine that powers Azure SQL Database, it is the first fully SaaS-native operational database living inside Microsoft Fabric. More than 50,000 SQL databases were created during the preview period alone. If you spend your days writing Spark notebooks, building lakehouses, and tending ETL pipelines, this thing will change how you work whether you plan for it or not.

Here is what you need to know, what you should actually do about it, and where the potholes are hiding.

Your operational data now lands in OneLake automatically

The headline feature for Spark teams is automatic replication to OneLake. When data gets written to a SQL database in Fabric, it mirrors to OneLake as Delta tables in near real-time. No pipelines. No connectors. No orchestration jobs that fail silently at 2 AM and ruin your Monday.

This sounds almost too convenient, and in some ways it is. The mirrored Delta tables arrive in an open format your Spark notebooks can read directly. You point a DataFrame at the mirrored location, run your transformations, and push results to your gold layer without ever having written an ingestion pipeline for that source.

If your team currently runs nightly batch loads from Azure SQL or SQL Server into a lakehouse, this is a real shift. That entire category of extract-and-load work can shrink or vanish. But “can” is doing heavy lifting in that sentence, and we need to talk about why.

How this changes daily Spark development

The practical impact shows up in a few specific places.

Reading operational data gets simpler. Instead of maintaining JDBC connections, managing credential rotation, and writing Spark code to pull from SQL, you read Delta tables from OneLake. The data is already there. Your Spark cluster does not need network access to the SQL database itself. One fewer firewall rule, one fewer connection string in your key vault, one fewer thing that breaks when someone rotates a password on a Friday afternoon.
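
As a minimal sketch, the read is a plain Delta load against the mirrored location. The path below is a placeholder, not the documented format for mirrored items; copy the real OneLake path from the item in your workspace.

# Hedged sketch: read a mirrored table straight from OneLake as Delta.
# The abfss path is a placeholder; use the actual path of your mirrored item.
mirror_path = "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<mirrored-db-item>/Tables/dbo/Orders"
orders_df = spark.read.format("delta").load(mirror_path)
orders_df.printSchema()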

Schema changes arrive faster than you can react. With batch ETL, you had a buffer. The pipeline would fail, someone would get an alert, and you had time to adapt your downstream notebooks. Near real-time mirroring removes that cushion. A column rename or type change in the operational database shows up in your Delta tables within seconds to minutes. If your Spark jobs reference columns by name (they do), you need schema evolution handling that most teams have not built yet.

Think about what happens when a developer on the application side renames customer_id to cust_id on a Wednesday afternoon. Your batch pipeline would have failed that night, you would have caught it Thursday morning, and the fix would be a one-line column alias. With mirroring, your running Spark job hits an AnalysisException (cannot resolve 'customer_id') mid-run. The fix is the same, but the timing is worse.
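
One low-effort defense is to validate the mirrored schema before running transformations, so a rename fails loudly with a message you chose rather than an AnalysisException deep inside a job. A minimal sketch, with illustrative table and column names:

from pyspark.sql import SparkSession

# Fail fast when an upstream rename or drop reaches the mirror.
# Column names and the OneLake path are illustrative.
mirror_path = "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<mirrored-db-item>/Tables/dbo/Orders"
orders_df = spark.read.format("delta").load(mirror_path)

expected_cols = {"customer_id", "order_date", "total_amount"}
missing = expected_cols - set(orders_df.columns)
if missing:
    raise ValueError(f"Mirrored schema drifted; missing columns: {sorted(missing)}")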

SQL users can now query your lakehouse data directly. SQL database in Fabric supports OPENROWSET and External Tables for querying OneLake data in CSV, Parquet, and JSON formats. Your SQL-writing colleagues can query lakehouse data without Spark. That sounds like a collaboration win until a SQL user runs a full table scan on your carefully partitioned Parquet files and you both learn something new about capacity throttling.

Establish clear ownership of shared datasets early. Document which OneLake paths are read-safe for SQL access and which ones carry performance risk.

The SQL Analytics Endpoint changes reporting paths. Every SQL database in Fabric gets a SQL Analytics Endpoint that sits on top of the mirrored data. Power BI can hit this endpoint with Direct Lake, which means your Spark team might no longer be in the critical path for building reporting datasets. If you have spent months building and maintaining a medallion architecture primarily to serve Power BI, parts of that effort become optional. Whether that feels like relief or irrelevance depends on your org chart.

Migration risks worth planning for

Before you start ripping out pipelines, here are the things that deserve a red flag on your project board.

Capacity billing is shared, and the math is unforgiving. SQL database in Fabric consumes the same Fabric capacity as your Spark jobs, warehouses, and Power BI refreshes. If someone provisions a heavily used SQL database on the same capacity where your Spark notebooks run, you will feel it. Fabric capacity is a zero-sum game. The new player at the table did not bring extra chips.

Run a two-week trial on a dedicated capacity before mixing SQL database workloads with existing Spark production. Use the Microsoft Fabric Capacity Metrics App to understand exactly how many CUs the database consumes at rest and under load.

Near real-time is not real-time, and the gap varies. The mirroring latency depends on transaction volume and capacity pressure. Under light load, changes appear in seconds. Under heavy load on a congested capacity, you might see minutes of lag. If your Spark pipelines assume data completeness at a specific watermark, you need to measure actual replication lag under realistic conditions. A simple row-count comparison between the SQL database and the mirrored Delta table, run every five minutes for a week, will tell you more than any documentation.
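
If the source table carries an audit column such as a last-modified timestamp, a rough lag estimate is one aggregation away. A sketch, assuming a modified_date column and an actively written table (on a quiet table the gap reflects write inactivity, not replication lag):

from pyspark.sql import functions as F

# Approximate mirror lag: newest mirrored change vs. the current time.
# modified_date and the path are assumptions about your source schema.
mirror_path = "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<mirrored-db-item>/Tables/dbo/Orders"
mirror = spark.read.format("delta").load(mirror_path)

lag = (
    mirror
    .agg(F.max("modified_date").alias("latest_mirrored_change"))
    .withColumn("observed_at", F.current_timestamp())
    .withColumn(
        "approx_lag_seconds",
        F.col("observed_at").cast("long") - F.col("latest_mirrored_change").cast("long"),
    )
)
lag.show(truncate=False)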

Security boundaries do not mirror perfectly. SQL database in Fabric supports Microsoft Entra authentication, row-level security, customer-managed keys, and SQL auditing (in preview). Your lakehouse uses OneLake RBAC, workspace roles, and Spark-level access controls. The mirrored data inherits some but not all of these boundaries. Row-level security in the SQL database, for instance, does not automatically apply to the mirrored Delta table in OneLake. If you have sensitive columns, verify the access controls on the mirror before your entire data team has read access.

Vendor lock-in compounds quietly. Every pipeline you remove and every JDBC connector you delete makes you more dependent on Fabric-internal mechanisms. If you later need to run Spark on Databricks, on EMR, or on bare-metal clusters, your data ingestion path disappears. This is not a reason to avoid the feature, but it is a reason to document what you replaced and keep a migration playbook somewhere that is not a Confluence page nobody remembers exists.

A rollout checklist for Spark teams

If you are ready to start integrating SQL database in Fabric into your data engineering stack, here is a practical sequence.

  1. Inventory your SQL-sourced pipelines. List every Spark job that reads from Azure SQL, SQL Server, or any SQL-based source via JDBC, linked services, or copy activities. Note the refresh frequency, data volume, and downstream dependencies. If you cannot produce this list in under an hour, that is itself a useful finding.
  2. Provision a SQL database in Fabric on a non-production capacity. Do not test this on production. Capacity contention is real, and you want to understand billing impact before it appears on someone else’s finance report.
  3. Mirror a single non-critical table and validate. Pick a reference table, something small and stable. Confirm the Delta table lands in OneLake, check the schema, verify column types, and read it from a Spark notebook. Compare row counts and checksums against the source. A comparison sketch follows this checklist.
  4. Measure replication lag under real load. Insert, update, and delete rows in the SQL database and time how quickly those changes appear in the mirrored Delta table. Run this test during your normal capacity utilization window, not during off-hours when capacity is idle and results are misleadingly fast.
  5. Test schema evolution deliberately. Add a column. Rename a column. Change a data type. Observe what happens to the mirrored Delta table and to any Spark jobs reading it. Build your error handling before this surprises you in production.
  6. Audit security boundaries on the mirror. Check whether row-level security, column masking, or other access controls in the SQL database are reflected in the mirrored OneLake data. Document gaps and decide whether they are acceptable for your data classification. If they are not, add a data masking step between the mirror and your Spark consumers.
  7. Run a cost comparison over two weeks. Compare the Fabric capacity consumption of the SQL database plus mirroring against your current pipeline compute costs. Include the engineering time saved, but be honest. “We saved two hours a month of pipeline maintenance” is a real number. “We saved countless engineering hours” is not.
  8. Deprecate one pipeline as a pilot. Pick your simplest SQL-sourced pipeline, redirect the downstream Spark job to read from the mirrored Delta table, and run both paths in parallel for at least two sprints. When you are confident, decommission the old pipeline and update your runbooks.
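
For step 3, a comparison along these lines is usually enough to build confidence. It is a sketch with hypothetical names: the legacy copy is whatever your existing pipeline already lands in the lakehouse, and the checksum is a cheap order-independent CRC rollup, not a cryptographic guarantee.

from pyspark.sql import functions as F

# Compare the legacy ingested copy against the mirrored Delta table.
# Table names and the OneLake path are placeholders.
legacy = spark.read.table("lakehouse_bronze.dbo_orders")
mirror = spark.read.format("delta").load(
    "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<mirrored-db-item>/Tables/dbo/Orders"
)

def summarize(df):
    # Row count plus a crude checksum over all columns cast to string.
    # Note: column order must match on both sides for checksums to be comparable.
    row_hash = F.crc32(F.concat_ws("|", *[F.col(c).cast("string") for c in df.columns]))
    return df.agg(F.count("*").alias("rows"), F.sum(row_hash).alias("checksum"))

summarize(legacy).show()
summarize(mirror).show()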

Vector search: a side door into AI workloads

SQL database in Fabric supports the native vector data type and vector indexing. This opens up retrieval-augmented generation (RAG) patterns directly inside the database, without adding a separate vector store to your architecture.

For Spark teams building ML pipelines or feeding large language models, the value is in co-location. You can store embeddings alongside your operational data, run similarity searches in SQL, and then access the same data from Spark for model training or batch inference. A product catalog with embeddings stored as vectors in SQL can serve both a real-time search API and a nightly Spark training job without data duplication.

This will not replace Pinecone or Weaviate for teams running high-throughput similarity search at scale. But for teams running modest-scale RAG or semantic search against operational data, it removes one service from the architecture and one deployment from the on-call rotation. That is not nothing.

What to expect next

Microsoft has made it clear that SQL database in Fabric is part of a longer play to bring operational data fully into the Fabric ecosystem. The integration with Copilot in the Query Editor, support for Terraform and Fabric CLI automation, and the first-ever SQLCon conference co-located with FabCon Atlanta in March 2026 all point the same direction: the wall between transactional and analytical workloads is getting thinner.

For Spark data engineering teams, the right move is not to panic and rewrite everything. It is to understand the mechanics, run a controlled test, and make deliberate decisions about which pipelines to retire and which to keep. The highway interchange is open. You just need to figure out your on-ramp.

This post was written with help from Opus 4.6

Microsoft Fabric Table Maintenance Optimization: A Cross-Workload Survival Guide

Your Delta tables are drowning. Thousands of tiny Parquet files pile up after every streaming microbatch. Power BI dashboards stall on cold-cache queries. SQL analytics endpoints grind through fragmented row groups. And somewhere in the middle of the medallion architecture, a Spark job is rewriting perfectly good files because nobody told it they were already compacted.

This is the small-file problem at scale — and in Microsoft Fabric, where a single Delta table can serve Spark, SQL analytics endpoint, Power BI Direct Lake, and Warehouse simultaneously, it becomes a cross-workload survival situation. Microsoft recently published a comprehensive cross-workload table maintenance guide that provides a clear map out. Here’s how to use it.

Every Engine Wants Something Different

The core challenge is that each consumption engine has a different idea of what an “optimally sized” file looks like. Get this wrong and you optimize for one consumer while punishing another.

Here’s the terrain:

  • Spark reads efficiently across a wide range — 128 MB to 1 GB depending on table size. V-Order isn’t required and adds 15–33% write overhead. Spark cares about parallelism, not VertiPaq encoding.
  • SQL analytics endpoint and Warehouse want files around 400 MB with roughly 2 million rows per row group, plus V-Order enabled for an approximate 10% read improvement.
  • Power BI Direct Lake is the most demanding consumer. It needs V-Order (delivering 40–60% cold-cache improvement), row groups of 8 million+ rows, and minimal file count to reduce transcoding overhead.

If you serve all three from the same Gold table, you need to make deliberate tradeoffs — or maintain multiple copies optimized for different patterns. Storage is cheap relative to compute. Compute wasted on bad file layouts is not.

The Three Commands That Keep You Alive

Table maintenance in Fabric boils down to three operations: OPTIMIZE, VACUUM, and the configuration pair of auto-compaction and optimize write. Each one addresses a different failure mode.

OPTIMIZE: Bin Compaction

OPTIMIZE consolidates small files into larger ones. It is your primary weapon against file fragmentation:

-- Basic compaction
OPTIMIZE schema_name.table_name

-- With V-Order for Power BI consumers
OPTIMIZE schema_name.table_name VORDER

-- With Z-Order for selective filter queries
OPTIMIZE schema_name.table_name ZORDER BY (region, event_date)

A critical detail: OPTIMIZE is a Spark SQL command. It runs in notebooks, Spark job definitions, and the Lakehouse Maintenance UI. You cannot run it from the SQL analytics endpoint or Warehouse SQL editor.

Before you optimize blindly, use the dry-run option to assess scope:

OPTIMIZE schema_name.table_name DRY RUN

This returns the files eligible for rewriting without touching the table — essential for estimating cost before committing compute.

VACUUM: Dead File Cleanup

After OPTIMIZE rewrites files, the old versions remain on disk for time travel. VACUUM removes files the Delta log no longer references:

-- Default 7-day retention
VACUUM schema_name.table_name

-- Explicit retention
VACUUM schema_name.table_name RETAIN 168 HOURS

The default seven-day retention exists for good reason: concurrent readers and writers may still reference those files. Drop below seven days and you risk reader failures or table corruption. If you must shorten retention, set spark.databricks.delta.retentionDurationCheck.enabled to false — but think carefully before you do.
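
For reference, that check is a session-level Spark setting in the same style as the configurations below; this is a sketch of how it is typically relaxed, not an endorsement.

# Only relax this after confirming no reader, writer, or rollback plan
# depends on the history you are about to let VACUUM delete.
spark.conf.set('spark.databricks.delta.retentionDurationCheck.enabled', 'false')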

Auto-Compaction + Optimize Write: Prevention Over Cure

Rather than waiting for file fragmentation to become a problem, these two features prevent it during ingestion:

Optimize write performs pre-write compaction, generating fewer, larger files at write time:

spark.conf.set('spark.databricks.delta.optimizeWrite.enabled', 'true')

Auto-compaction evaluates partition health after each write and triggers synchronous compaction when fragmentation is detected:

spark.conf.set('spark.databricks.delta.autoCompact.enabled', 'true')

These two work best together. Optimize write reduces the number of small files created; auto-compaction handles whatever fragmentation remains. Microsoft’s documentation recommends auto-compaction over manually scheduled OPTIMIZE jobs for most ingestion pipelines, noting it “generally outperforms scheduled compaction jobs at maximizing read/write performance.”

The Medallion Layer Checklist

The right maintenance strategy depends on where the table sits in your medallion architecture. Here is a concrete, layer-by-layer breakdown:

Bronze (Landing Zone)

  • Priority: Ingestion speed
  • Auto-compaction: Enable (optional — can sacrifice for raw speed)
  • Optimize write: Enable
  • V-Order: No (unnecessary write overhead)
  • Liquid Clustering: No
  • Scheduled OPTIMIZE: Optional
  • Rule: Never serve Bronze tables directly to SQL analytics endpoint or Power BI Direct Lake.

Silver (Curated Zone)

  • Priority: Balance ingestion and query performance
  • Auto-compaction: Enable
  • Optimize write: Enable
  • V-Order: Optional (enable if SQL or Power BI consumers query this layer)
  • Liquid Clustering or Z-Order: Recommended
  • Scheduled OPTIMIZE: Aggressive
  • Target file size: 128–256 MB

Gold (Serving Zone)

  • Priority: Read performance for analytics
  • Auto-compaction: Enable
  • Optimize write: Required
  • V-Order: Required for Power BI Direct Lake; beneficial for SQL
  • Liquid Clustering: Required for optimal file skipping
  • Scheduled OPTIMIZE: Aggressive
  • Target file size: 400 MB to 1 GB

For Gold tables, tune the target based on the primary consumer:

Consumer                  V-Order     Target File Size    Row Group Size
SQL analytics endpoint    Yes         400 MB              2M rows
Power BI Direct Lake      Yes         400 MB–1 GB         8M+ rows
Spark                     Optional    128 MB–1 GB         1–2M rows

V-Order: Know When to Pay the Tax

V-Order applies VertiPaq-compatible sorting, encoding, and compression at write time. The performance gains for Power BI Direct Lake — 40–60% on cold-cache queries — make it indispensable for Gold-layer tables feeding dashboards. But V-Order adds 15–33% to write time and provides no inherent benefit for Spark-to-Spark pipelines.

The decision framework:

  • Gold tables → Power BI or SQL consumers: V-Order on.
  • Bronze/Silver tables → Spark pipelines only: V-Order off.
  • Mixed consumers: Maintain separate copies — a Spark-optimized Silver table and a V-Ordered Gold table.

Starting with Fabric Runtime 1.3, V-Order is applied automatically during OPTIMIZE statements. You no longer need the deprecated spark.sql.parquet.vorder.enable session setting. Set it at the table level for consistency:

ALTER TABLE schema_name.gold_table
SET TBLPROPERTIES ('delta.parquet.vorder.enabled' = 'true')

Liquid Clustering vs. Z-Order

Liquid Clustering is the recommended approach for new tables. Unlike static partitioning, it adapts to changing query patterns and provides better file skipping. Define it at table creation:

CREATE TABLE schema_name.events (
    id INT,
    category STRING,
    event_date DATE
) CLUSTER BY (category)

Use Z-Order instead when your table is already partitioned (Liquid Clustering does not work with partitioned tables) or when queries filter on two or more columns together.

One critical gotcha: data is only clustered when OPTIMIZE runs. Regular write operations do not apply clustering. Without a compaction strategy, you get zero benefit from Liquid Clustering — the layout never materializes.

Diagnosing Table Health

Before optimizing anything, assess where you stand:

details = spark.sql("DESCRIBE DETAIL schema_name.table_name").collect()[0]

print(f"Table size: {details['sizeInBytes'] / (1024**3):.2f} GB")
print(f"Number of files: {details['numFiles']}")

avg_file_mb = (details['sizeInBytes'] / details['numFiles']) / (1024**2)
print(f"Average file size: {avg_file_mb:.2f} MB")

Healthy tables have evenly distributed file sizes within 2× of each other. Files under 25 MB signal fragmentation. Files over 2 GB reduce parallelism. Use DESCRIBE HISTORY to review write patterns and check whether auto-compaction has been running.
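
A quick way to do that check from a notebook; the operation names are the standard Delta history values, and auto-compaction typically shows up as OPTIMIZE entries written shortly after ingest.

# Inspect recent operations to confirm compaction is actually happening.
history = spark.sql("DESCRIBE HISTORY schema_name.table_name")

(history
    .select("timestamp", "operation", "operationParameters")
    .filter("operation IN ('OPTIMIZE', 'WRITE', 'MERGE')")
    .orderBy("timestamp", ascending=False)
    .show(20, truncate=False))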

Set It at the Table Level

A final, critical best practice: prefer table properties over session configurations. Session settings only apply to the current Spark session and disappear when the session ends. Table properties persist across sessions and ensure consistent behavior regardless of which job or notebook writes to the table:

CREATE TABLE schema_name.optimized_table (
    id INT,
    data STRING
) TBLPROPERTIES (
    'delta.autoOptimize.optimizeWrite' = 'true',
    'delta.autoOptimize.autoCompact' = 'true',
    'delta.parquet.vorder.enabled' = 'true'
)

This eliminates the class of bugs where one pipeline sets optimize write and another does not, producing an inconsistent file layout that OPTIMIZE then has to clean up.

The Bottom Line

Table maintenance in Fabric is not a set-it-and-forget-it operation. It is a deliberate strategy tied to your data’s lifecycle: fast ingestion at Bronze, balanced reads at Silver, and tuned-to-the-consumer performance at Gold. The tools — OPTIMIZE, VACUUM, auto-compaction, V-Order, Liquid Clustering — are all available. The question is whether you deploy them with intention.

Start by auditing your Gold tables. Check file sizes and distributions. Enable auto-compaction and optimize write at the table level. Apply V-Order where Power BI or SQL consumes the data. Schedule aggressive OPTIMIZE runs on Silver and Gold layers. And run VACUUM weekly to reclaim storage.

Your tables will thank you. Your dashboards will thank you faster.

This post was written with help from Claude Opus 4.6

Optimizing Spark Performance with the Native Execution Engine (NEE) in Microsoft Fabric

Spark tuning often starts with the usual suspects (shuffle volume, skew, join strategy, caching)… but sometimes the biggest win is simply executing the same logical plan on a faster engine.

Microsoft Fabric’s Native Execution Engine (NEE) does exactly that: it keeps Spark’s APIs and control plane, but runs a large portion of Spark SQL / DataFrame execution on a vectorized C++ engine.

What NEE is (and why it’s fast)

NEE is a vectorized native engine that integrates into Fabric Spark and can accelerate many SQL/DataFrame operators without you rewriting your code.

  • You still write Spark SQL / DataFrames.
  • Spark still handles distributed execution and scheduling.
  • For supported operators, compute is offloaded to a native engine (reducing JVM overhead and using columnar/vectorized execution).

Fabric documentation calls out NEE as being based on Apache Gluten (the Spark-to-native glue layer) and Velox (the native execution library).

When NEE tends to help the most

NEE shines when your workload is:

  • SQL-heavy (joins, aggregates, projections, filters)
  • CPU-bound (compute dominates I/O)
  • Primarily on Parquet / Delta

You’ll see less benefit (or fallback) when you rely on features NEE doesn’t support yet.

How to enable NEE (3 practical options)

1) Environment-level toggle (recommended for teams)

In your Fabric Environment settings, go to Acceleration and enable the native execution engine, then Save + Publish.

Benefit: notebooks and Spark Job Definitions that use that environment inherit the setting automatically.

2) Enable for a single notebook / job via Spark config

In a notebook cell:

%%configure
{
  "conf": {
    "spark.native.enabled": "true"
  }
}

For Spark Job Definitions, add the same Spark property.

3) Disable/enable per-query when you hit unsupported features

If a specific query uses an unsupported operator/expression and you want to force JVM Spark for that query:

SET spark.native.enabled=FALSE;
-- run the query
SET spark.native.enabled=TRUE;

How to confirm NEE is actually being used

Two low-friction checks:

  1. Spark UI / History Server: look for plan nodes ending with Transformer or nodes like *NativeFileScan / VeloxColumnarToRowExec.
  2. df.explain(): the same Transformer / NativeFileScan / Velox… hints should appear in the plan.

Fabric also exposes a dedicated view (“Gluten SQL / DataFrame”) to help spot which queries ran on the native engine vs. fell back.
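
A minimal notebook check, assuming an illustrative table name: run a representative query with the engine enabled and scan the formatted plan for native operator names; ordinary JVM operator names mean that part of the plan fell back.

# Sketch: confirm whether a query is offloaded to the native engine.
# fact_sales is an illustrative table name.
spark.conf.set("spark.native.enabled", "true")

df = spark.read.table("fact_sales").groupBy("region").count()

# Look for operator names containing "Transformer", "NativeFileScan", or "Velox"
# in the output; purely JVM operator names indicate fallback for that portion.
df.explain(mode="formatted")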

Fallback is a feature (but you should know the common triggers)

NEE includes an automatic fallback mechanism: if the plan contains unsupported features, Spark will run that portion on the JVM engine.

A few notable limitations called out in Fabric documentation:

  • UDFs aren’t supported (fallback)
  • Structured streaming isn’t supported (fallback)
  • File formats like CSV/JSON/XML aren’t accelerated
  • ANSI mode isn’t supported

There are also some behavioral differences worth remembering (rounding/casting edge cases) if you have strict numeric expectations.

A pragmatic “NEE-first” optimization workflow

  1. Turn NEE on for the environment (or your job) and rerun the workload.
  2. If it’s still slow, open the plan and answer: is the slow part running on the native engine, or did it fall back?
  3. If it fell back, make the smallest possible change to keep the query on the native path (e.g., avoid UDFs; prefer built-in expressions; standardize on Parquet/Delta). A before/after sketch follows this list.
  4. Once the plan stays mostly native, go back to classic Spark tuning: reduce shuffle volume, fix skew, sane partitioning, and confirm broadcast joins.
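
As an example of step 3, here is the kind of smallest-possible change that keeps a query native: replacing a Python UDF with equivalent built-in expressions. Table and column names are illustrative.

from pyspark.sql import functions as F

df = spark.read.table("dim_customer")  # illustrative table

# Before (forces fallback): a Python UDF for trivial string cleanup
# normalize = F.udf(lambda s: s.strip().lower() if s else None)
# df = df.withColumn("email_norm", normalize("email"))

# After (eligible for native execution): built-in expressions only
df = df.withColumn("email_norm", F.lower(F.trim(F.col("email"))))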

This post was written with help from ChatGPT 5.2

The Best Thing That Ever Happened to Your Spark Pipeline Is a SQL Database

Here’s a counterintuitive claim: the most important announcement for Fabric Spark teams in early 2026 has nothing to do with Spark.

It’s a SQL database.

Specifically, it’s the rapid adoption of SQL database in Microsoft Fabric—a fully managed, SaaS-native transactional database that went GA in November 2025 and has been quietly reshaping how production data flows into lakehouse architectures ever since. If you’re a data engineer running Spark workloads in Fabric, this changes more than you think.

The ETL Pipeline You Can Delete

Most Spark data engineers have a familiar pain point: getting operational data from transactional systems into the lakehouse. You build ingestion pipelines. You schedule nightly batch loads. You wrestle with CDC (change data capture) configurations, watermark columns, and retry logic. You maintain all of it, forever.

SQL database in Fabric eliminates that entire layer.

When data lands in a Fabric SQL database, it’s automatically replicated to OneLake as Delta tables in near real-time. No pipelines. No Spark ingestion jobs. No orchestration. The data just appears, already in the open Delta format your notebooks and Spark jobs expect.

This isn’t a minor convenience—it’s an architectural shift. Every ingestion pipeline you don’t write is a pipeline you don’t debug at 2 AM.

What This Actually Looks Like in Practice

Let’s say you’re building an analytics layer on top of an operational SaaS application. Today, your architecture probably looks something like this:

  1. Application writes to Azure SQL or Cosmos DB
  2. ADF or Spark job pulls data on a schedule
  3. Data lands in a lakehouse as Delta tables
  4. Downstream Spark jobs transform and aggregate

With SQL database in Fabric, steps 2 and 3 vanish. Your application writes directly to the Fabric SQL database, and the mirrored Delta tables are immediately available for Spark processing. Here’s what your downstream notebook looks like now:

# Read operational data directly — no ingestion pipeline needed
# The SQL database auto-mirrors to OneLake as Delta tables
orders_df = spark.read.format("delta").load(
    "abfss://your-workspace@onelake.dfs.fabric.microsoft.com/your-sqldb.SQLDatabase/dbo.Orders"
)

# Your transformation logic stays the same
from pyspark.sql import functions as F

daily_revenue = (
    orders_df
    .filter(F.col("order_date") >= F.date_sub(F.current_date(), 7))
    .groupBy("product_category")
    .agg(
        F.sum("total_amount").alias("revenue"),
        F.countDistinct("customer_id").alias("unique_customers")
    )
    .orderBy(F.desc("revenue"))
)

daily_revenue.write.format("delta").mode("overwrite").saveAsTable("gold.weekly_revenue_by_category")

The Spark code doesn’t change. What changes is everything upstream of it.

The Migration Risk Nobody’s Talking About

Here’s where it gets interesting—and where Malcolm Gladwell would lean forward in his chair. The biggest risk of SQL database in Fabric isn’t technical. It’s organizational.

Teams that have invested heavily in ingestion infrastructure will face a classic innovator’s dilemma: the new path is simpler, but the old path already works. The temptation is to keep running your existing ADF pipelines alongside the new auto-mirroring capability, creating a hybrid architecture that’s worse than either approach alone.

My recommendation: don’t hybrid. Pick a workload, migrate it end-to-end, and measure. Here’s a concrete rollout checklist:

  1. Identify a candidate workload — Look for Spark jobs whose primary purpose is pulling data from a SQL source into Delta tables. These are your highest-value migration targets.
  2. Provision a Fabric SQL database — It takes seconds. You provide a name; Fabric handles the rest. Autoscaling and auto-pause are built in.
  3. Redirect your application writes — Point your operational application to the new Fabric SQL database. The engine is the same SQL Database Engine as Azure SQL, so T-SQL compatibility is high.
  4. Validate the Delta mirror — Confirm that your data is appearing in OneLake. Check schema fidelity, latency, and row counts:
# In your Spark notebook, validate the mirrored data
spark.sql("""
    SELECT COUNT(*) as row_count,
           MAX(modified_date) as latest_record,
           MIN(modified_date) as earliest_record
    FROM your_sqldb.dbo.Orders
""").show()
  5. Decommission the ingestion pipeline — Once validated, turn off the ADF or Spark ingestion job. Don’t just disable it—delete it. Zombie pipelines are how technical debt accumulates.
  6. Update your monitoring — Your existing data quality checks should still work since the Delta tables have the same schema. But update your alerting to watch for mirror latency instead of pipeline run failures.

The AI Angle Matters for Spark Teams Too

There’s a second dimension to this announcement that Spark engineers should pay attention to: the native vector data type in SQL database supports semantic search and RAG patterns directly in the transactional layer.

Why does that matter for Spark teams? Because it means your embedding pipelines can write vectors back to the same database your application reads from—closing the loop between batch ML processing in Spark and real-time serving in SQL. Instead of maintaining a separate vector store (Pinecone, Qdrant, etc.), you use the same SQL database that’s already mirrored into your lakehouse.

This is the kind of architectural simplification that compounds over time. Fewer systems means fewer failure modes, fewer credentials to manage, and fewer things to explain to your successor.

The Rollout Checklist

  • This week: Inventory your existing ingestion pipelines. How many just move data from SQL sources to Delta?
  • This sprint: Provision a Fabric SQL database and test the auto-mirror with a non-critical workload.
  • This quarter: Migrate your highest-volume ingestion pipeline and measure CU savings.
  • Track: Mirror latency, CU consumption before/after, and pipeline maintenance hours eliminated.

SQL database in Fabric went GA in November 2025 with enterprise features including row-level security, customer-managed keys, and private endpoints. For the full list of GA capabilities, see the official announcement. To understand how this fits into the broader Microsoft database + Fabric integration strategy, read Microsoft Databases and Microsoft Fabric: Your unified and AI-powered data estate. For Spark-specific Delta Lake concepts, the Delta Lake documentation remains the authoritative reference.

The best thing about this announcement isn’t any single feature. It’s that it makes your Spark architecture simpler by removing the parts that shouldn’t have been there in the first place.

This post was written with help from Claude Opus 4.6

Monitoring Spark Jobs in Real Time in Microsoft Fabric

If Spark performance work is surgery, monitoring is your live telemetry.

Microsoft Fabric gives you multiple monitoring entry points for Spark workloads: the Monitoring hub for cross-item visibility, item Recent runs for focused context, and application detail pages for deep investigation. This post is a practical playbook for using those together.

Why this matters

When a notebook or Spark job definition slows down, “run it again” is the most expensive way to debug. Real-time monitoring helps you:

  • spot bottlenecks while jobs are still running
  • isolate failures quickly
  • compare behavior across submitters and workspaces

1) Start at the Monitoring hub for cross-workspace triage

Use Monitoring in the Fabric navigation pane as your control tower.

  1. Filter by item type (Notebook, Spark job definition, Pipeline)
  2. Narrow by start time and workspace
  3. Sort by duration or status to surface outliers

For broad triage, this is faster than jumping directly into individual notebooks.

2) Pivot to Spark application details for root-cause analysis

Once you identify a problematic run, open the Spark application detail page and work through tabs in order:

  • Jobs: status, stages, tasks, duration, and processed/read/written data
  • Resources: executor allocation and utilization in near real time
  • Logs: inspect Livy, Prelaunch, and Driver logs; download when needed
  • Item snapshots: confirm exactly what code/parameters/settings were used at execution time

This sequence prevents false fixes where you tune the wrong layer.

3) Use notebook contextual monitoring while developing

For iterative tuning, notebook contextual monitoring keeps authoring, execution, and debugging in one place.

  1. Run a target cell/workload
  2. Watch job/stage/task progress and executor behavior
  3. Jump to Spark UI or detail monitoring for deeper traces
  4. Adjust code or config and rerun

4) A lightweight real-time runbook

  • Confirm scope in the Monitoring hub (single run or systemic pattern)
  • Open application details for the failing/slower run
  • Check Jobs for stage/task imbalance and long-running segments
  • Check Resources for executor pressure
  • Check Logs for explicit failure signals
  • Verify snapshots so you debug the exact submitted artifact

Common mistakes to avoid

  • Debugging from memory instead of snapshots
  • Looking only at notebook cell output and skipping Logs/Resources
  • Treating one anomalous run as a global trend without Monitoring hub filtering

This post was written with help from ChatGPT 5.3

Running OpenClaw in Production: Reliability, Alerts, and Runbooks That Actually Work

Agents are fun when they’re clever. They’re useful when they’re boring.

If you’re running OpenClaw as an always-on assistant (cron jobs, health checks, publishing pipelines, internal dashboards), the failure mode isn’t usually “it breaks once.” It’s that it flakes intermittently and you can’t tell whether the problem is upstream, your network, your config, or the agent.

This post is the operational playbook that moved my setup from “cool demo” to “production-ish”: fewer false alarms, faster debugging, clearer artifacts, and tighter cost control.

The production baseline (don’t skip this)

Before you add features, lock the boring stuff:

  • One source of truth for cron/job definitions.
  • A consistent deliverables folder (so outputs don’t vanish into chat history).
  • A minimal runbook per job (purpose, dependencies, failure modes, disable/rollback).

Observability: prove what happened

When something fails, you want receipts — not vibes.

Minimum viable run-level observability:

  • job_name, job_id, run_id
  • start/end timestamp (with timezone)
  • what the job tried to do (high level)
  • what it produced (file paths, URLs)
  • what it depended on (network/API/tool)
  • the error and the evidence (HTTP status, latency, exception type)

Split latency: upstream vs internal

If Telegram is “slow,” is that Telegram API RTT/network jitter, internal queueing, or a slow tool call? Instrument enough to separate those — otherwise you’ll waste hours fixing the wrong layer.

Alert-only health checks (silence is success)

If a health check is healthy 99.9% of the time, it should not message you 99.9% of the time.

  • prints NO_REPLY when healthy
  • emits one high-signal alert line when broken
  • includes evidence (what failed, how, and where to look)

Example alert shape:

⚠️ health-rollup: telegram_rtt_p95=3.2s (threshold=2.0s) curl=https://api.telegram.org/ ts=2026-02-10T03:12:00-08:00

Cron hygiene: stop self-inflicted outages

  • Idempotency: re-runs don’t duplicate deliverables.
  • Concurrency control: don’t let overlapping runs pile up.
  • Deterministic first phase: validate dependencies before doing expensive work.
  • Deadman checks: alert if a job hasn’t run (or hasn’t delivered) in N hours.
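
As one concrete shape for a deadman check, here is a minimal sketch in Python that alerts only when the deliverables folder has gone stale; the folder path and threshold are assumptions, and the silent-when-healthy behavior matches the alert-only convention above.

import time
from pathlib import Path

# Deadman check: one alert line if nothing new landed recently, silence otherwise.
# DELIVERABLES and MAX_AGE_HOURS are assumptions; point them at your setup.
DELIVERABLES = Path.home() / "deliverables"
MAX_AGE_HOURS = 24

files = [p for p in DELIVERABLES.rglob("*") if p.is_file()]
newest = max((p.stat().st_mtime for p in files), default=0.0)
age_hours = (time.time() - newest) / 3600

if age_hours > MAX_AGE_HOURS:
    print(f"⚠️ deadman: no new deliverable in {age_hours:.1f}h "
          f"(threshold={MAX_AGE_HOURS}h) path={DELIVERABLES}")
# healthy: print nothing (NO_REPLY)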

Evidence-based alerts: pages should come with receipts

A useful alert answers: (1) what failed, (2) where is the evidence (log path / file path / URL), and (3) what’s the next action. Anything else is notification spam.

Cost visibility: make it measurable

  • batch work; avoid polling
  • cap retries
  • route routine work to cheaper models
  • log model selection per run
  • track token usage from local transcripts (not just “current session model”)

Deliverables: put outputs somewhere that syncs

Chat is not a file system. Every meaningful workflow should write artifacts to a synced folder (e.g., OneDrive): primary output, supporting evidence, and run metadata.

Secure-by-default: treat inputs as hostile

  • Separate read (summarize) from act (send/delete/post).
  • Require explicit confirmation for destructive/external actions.
  • Prefer allowlists over arbitrary shell.

Runbooks: make 2am fixes boring

  • purpose
  • schedule
  • dependencies
  • what “healthy” looks like
  • what “broken” looks like
  • how to disable
  • how to recover

What we changed (the short version)

  • Consolidated multiple probes into one evidence-based rollup.
  • Converted recurring checks to alert-only.
  • Standardized artifacts into a synced deliverables folder.
  • Added a lightweight incident runbook.
  • Put internal dashboards behind Tailscale on separate ports.

This post was written with help from ChatGPT 5.2

Lakehouse Table Optimization: VACUUM, OPTIMIZE, and Z-ORDER

If your Lakehouse tables are getting slower (or more expensive) over time, it’s often not “Spark is slow.” It’s usually table layout drift: too many small files, suboptimal clustering, and old files piling up.

In Fabric Lakehouse, the three table-maintenance levers you’ll reach for most are:

  • OPTIMIZE: compacts many small files into fewer, larger files (and can apply clustering)
  • Z-ORDER: co-locates related values to improve data skipping for common filters
  • VACUUM: deletes old files that are no longer referenced by the Delta transaction log (after a retention window)

Practical note: in Fabric, run these as Spark SQL in a notebook or Spark job definition (or use the Lakehouse maintenance UI). Don’t try to run them in the SQL Analytics Endpoint.

1) Start with the symptom: “small files” vs “bad clustering”

Before you reach for maintenance, quickly sanity-check what you’re fighting:

  • Many small files → queries spend time opening/reading lots of tiny Parquet files.
  • Poor clustering for your most common predicates (date, tenantId, customerId, region, etc.) → queries scan more data than they need.
  • Heavy UPDATE/DELETE/MERGE patterns → lots of new files + tombstones + time travel files.

If you only have small files, OPTIMIZE is usually your first win.

2) OPTIMIZE: bin-packing for fewer, bigger files

Basic compaction

OPTIMIZE my_table;

Target a subset (example: recent partitions)

OPTIMIZE my_table WHERE date >= date_sub(current_date(), 7);

A useful mental model: OPTIMIZE is rewriting file layout (not changing table results). It’s maintenance, not transformation.

3) Z-ORDER: make your filters cheaper

Z-Ordering is for the case where you frequently query:

  • WHERE tenantId = ...
  • WHERE customerId = ...
  • WHERE deviceId = ... AND eventTime BETWEEN ...

Example:

OPTIMIZE my_table ZORDER BY (tenantId, eventDate);

Pick 1–3 columns that dominate your interactive workloads. If you try to z-order on everything, you’ll mostly burn compute for little benefit.

4) VACUUM: clean up old, unreferenced files (carefully)

VACUUM is about storage hygiene. Delta keeps old files around to support time travel and concurrent readers. VACUUM deletes files that are no longer referenced and older than the configured retention threshold.

VACUUM my_table;

Two practical rules:

  1. Don’t VACUUM aggressively unless you understand the impact on time travel / rollback.
  2. Treat the retention window as a governance decision (what rollback window do you want?) not just a cost optimization.

5) Fabric-specific gotchas (the ones that actually bite)

Where you can run these commands

These are Spark SQL maintenance commands. In Fabric, that means notebooks / Spark job definitions (or the Lakehouse maintenance UI), not the SQL Analytics Endpoint.

V-Order and OPTIMIZE

Fabric also has V-Order, which is a Parquet layout optimization aimed at faster reads across Fabric engines. If you’re primarily optimizing for downstream read performance (Power BI/SQL/Spark), it’s worth understanding whether V-Order is enabled for your workspace and table writes.

A lightweight maintenance pattern that scales

  • Nightly/weekly: OPTIMIZE high-value tables (or recent partitions)
  • Weekly/monthly: Z-ORDER tables with stable query patterns
  • Monthly: VACUUM tables where your org’s time travel policy is clear

Treat it like index maintenance: regular, boring, measurable.
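
In Fabric this pattern is easy to run as a scheduled notebook or Spark job definition. A minimal sketch, with the table lists and Z-Order columns as assumptions you would source from your own inventory:

# Scheduled maintenance sketch: table names and clustering columns are placeholders.
tables_to_optimize = ["sales.fact_orders", "sales.fact_returns"]
tables_to_zorder   = {"sales.fact_orders": "tenantId, eventDate"}
tables_to_vacuum   = ["sales.fact_orders"]

for t in tables_to_optimize:
    cols = tables_to_zorder.get(t)
    spark.sql(f"OPTIMIZE {t} ZORDER BY ({cols})" if cols else f"OPTIMIZE {t}")

for t in tables_to_vacuum:
    # Default retention; shorten only with an agreed time-travel policy.
    spark.sql(f"VACUUM {t}")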

This post was written with help from ChatGPT 5.2

OneLake catalog in Microsoft Fabric: Explore, Govern, and Secure

If your Fabric tenant has grown past “a handful of workspaces,” the problem isn’t just storage or compute—it’s finding the right items, understanding what they are, and making governance actionable.

That’s the motivation behind the OneLake catalog: a central hub to discover and manage Fabric content, with dedicated experiences for discovery (Explore), governance posture (Govern), and security administration (Secure).

This post is a practical walk-through of what’s available today, with extra focus on what Fabric admins get in the Govern experience.

What is the OneLake catalog?

Microsoft describes the OneLake catalog as a centralized place to find, explore, and use Fabric items—and to govern the data you own.

You open it from the Fabric navigation pane by selecting the OneLake icon.

Explore tab: tenant-wide discovery without losing context

The Explore tab is the “inventory + details” experience:

  • An items list of Fabric content you can access (and in some cases, content you can request access to).
  • An in-context details pane so you can inspect an item without navigating away from your filtered list.
  • Filters and selectors to narrow scope (for example: workspace, item-type categories, endorsement, and tags).

A key pattern here is fast triage: filter down to a domain/workspace, then click through items to answer:

  • Who owns this?
  • Where does it live?
  • When was it refreshed?
  • Is it endorsed/certified?
  • Does it have sensitivity labeling?

Tip for data engineers

If your tenant uses domains, scoping the catalog to a domain/subdomain is often the quickest way to keep the item list meaningful—especially when teams create similar notebooks/pipelines across many workspaces.

Govern tab: governance posture + recommended actions

The Govern tab is where the catalog becomes more than “a directory.” It combines:

  • Insights (high-level indicators you can drill into)
  • Recommended actions (with step-by-step remediation guidance)
  • Links to relevant tools and learning resources

Admin view vs. data owner view

The Govern tab behaves differently depending on who you are:

  • Fabric admins see insights based on tenant metadata (items, workspaces, capacities, domains).
  • Data owners see insights scoped to items they own (using the My items concept).

The Fabric blog also calls out a preview experience that extends the OneLake catalog governance view for Fabric admins, providing consolidated indicators and deeper drill-down reporting.

What admins see on the Govern tab

From the Fabric admin perspective, the Govern experience is designed to answer:

  • What does our data estate look like (inventory, distribution, usage)?
  • Where are we under-labeled or non-compliant (sensitivity coverage, policy posture)?
  • What content is hard to trust or reuse (freshness, endorsement/description/tag coverage, sharing patterns)?

When admins choose View more, Learn documentation describes an expanded report with three areas:

  1. Manage your data estate (inventory, capacities/domains, feature usage)
  2. Protect, secure & comply (sensitivity label coverage and data loss prevention policy posture)
  3. Discover, trust, and reuse (freshness, curation signals such as endorsement/description coverage, sharing)

A detail worth knowing: refresh cadence differs for admins

Per Microsoft Learn, admin insights and actions are based on Admin Monitoring Storage data and refresh automatically every day, so there can be a lag between changes you make and what the Govern insights reflect.

Secure tab: centralized security role management

The OneLake catalog Secure tab is a security administration surface that centralizes:

  • Workspace roles and permissions (for auditing access)
  • OneLake security roles across workspaces and item types

From the Secure tab, admins can create, edit, or delete OneLake security roles from a single location.

A practical workflow to adopt (teams + admins)

Here’s a lightweight approach that scales better than “ask around on Teams”:

  1. Explore: Use domain/workspace scoping + filters to find candidate items.
  2. Inspect: Use the in-context details pane to sanity-check ownership, endorsement, sensitivity, and freshness.
  3. Govern: Use the recommended actions cards to drive a small number of measurable improvements:
    • increase sensitivity label coverage
    • improve endorsement/certification where appropriate
    • standardize descriptions/tags for key assets
  4. Secure: Audit role sprawl and standardize how OneLake security roles are managed across items.

Considerations and limitations to keep in mind

A few constraints called out in Learn documentation (useful when you’re setting expectations):

  • The Govern tab doesn’t support cross-tenant scenarios or guest users.
  • The Govern tab isn’t available when Private Link is activated.
  • Govern insights for admins can be up to a day behind due to daily refresh of admin monitoring storage.

This post was written with help from ChatGPT 5.2

Understanding Spark Execution in Microsoft Fabric

Spark performance work is mostly execution work: understanding where the DAG splits into stages, where shuffles happen, and why a handful of tasks can dominate runtime.

This post is a quick, practical refresher on the Spark execution model — with Fabric-specific pointers on where to observe jobs, stages, and tasks.

1) The execution hierarchy: Application → Job → Stage → Task

In Spark, your code runs as a Spark application. When you run an action (for example, count(), collect(), or writing a table), Spark submits a job. Each job is broken into stages, and each stage runs a set of tasks (often one task per partition).

A useful mental model:

  • Tasks are the unit of parallel work.
  • Stages group tasks that can run together without needing data from another stage.
  • Stage boundaries often show up where a shuffle is required (wide dependencies like joins and aggregations).

2) Lazy evaluation: why “nothing happens” until an action

Most DataFrame / Spark SQL transformations are lazy. Spark builds a plan and only executes when an action forces it.

Example (PySpark):

from pyspark.sql.functions import col

df = spark.read.table("fact_sales")
# Transformations (lazy)
filtered = df.filter(col("sale_date") >= "2026-01-01")

# Action (executes)
print(filtered.count())


This matters in Fabric notebooks because a single cell can trigger multiple jobs (for example, one job to materialize a cache and another to write output).

3) Shuffles: the moment your DAG turns expensive

A shuffle is when data must be redistributed across executors (typically by key). Shuffles introduce:

  • network transfer
  • disk I/O (shuffle files)
  • spill risk (memory pressure)
  • skew/stragglers (a few hot partitions dominate)

If you’re diagnosing a slow pipeline, assume a shuffle is the culprit until proven otherwise.

4) What to check in Fabric: jobs, stages, tasks

Fabric gives you multiple ways to see execution progress:

  • Notebook contextual monitoring: a progress indicator for notebook cells, with stage/task progress.
  • Spark monitoring / detail monitoring: drill into a Spark application and see jobs, stages, tasks, and duration breakdowns.

When looking at a slow run, focus on:

  • stages with large shuffle read/write
  • long-tail tasks (stragglers)
  • spill metrics (memory → disk)
  • skew indicators (a few tasks far slower than the median)

5) A repeatable debugging workflow (that scales)

  1. Start with the plan: df.explain(True) for DataFrame/Spark SQL
    • Look for Exchange operators (shuffle) and join strategies (broadcast vs shuffle join)
  2. Run once, then open monitoring: identify the longest stage(s)
    • Confirm whether it’s CPU-bound, shuffle-bound, or spill-bound
  3. Apply the common fixes in order (a sketch follows this list): avoid the shuffle (broadcast small dims)
    • Reduce shuffle volume (filter early, select only needed columns)
    • Fix partitioning (repartition by join keys; avoid extreme partition counts)
    • Turn on AQE (spark.sql.adaptive.enabled=true) to let Spark coalesce shuffle partitions and mitigate skew
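
A compact sketch of steps 1 and 3, using illustrative table and column names: inspect the plan first, then apply the cheapest fix that removes or shrinks the Exchange.

from pyspark.sql.functions import broadcast, col

fact = spark.read.table("fact_sales")   # illustrative tables
dim  = spark.read.table("dim_product")

joined = fact.filter(col("sale_date") >= "2026-01-01").join(dim, "product_id")

# Step 1: look for Exchange nodes and the chosen join strategy
joined.explain(True)

# Step 3: broadcast the small side and let AQE tune shuffle partitions at runtime
spark.conf.set("spark.sql.adaptive.enabled", "true")
joined = fact.filter(col("sale_date") >= "2026-01-01").join(broadcast(dim), "product_id")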

Quick checklist

  • Do I know which stage is dominating runtime?
  • Is there an Exchange / shuffle boundary causing it?
  • Are a few tasks straggling (skew), or are all tasks uniformly slow?
  • Am I broadcasting what should be broadcast?
  • Is AQE enabled, and is it actually taking effect?

This post was written with help from ChatGPT 5.2

Fabric Spark Shuffle Tuning: AQE + partitions for Faster Joins

Shuffles are where Spark jobs go to get expensive: a wide join or aggregation forces data to move across the network, materialize shuffle files, and often spill when memory pressure spikes.

In Microsoft Fabric Spark workloads, the fastest optimization is usually the boring one: avoid the shuffle when you can, and when you can’t, make it smaller and better balanced.

This post lays out a practical, repeatable approach you can apply in Fabric notebooks and Spark job definitions.

1) Start with the simplest win: avoid the shuffle

If one side of your join is genuinely small (think lookup/dimension tables), use a broadcast join so Spark ships the small table to executors and avoids a full shuffle.

In Fabric’s Spark best practices, Microsoft explicitly calls out broadcast joins for small lookup tables as a way to avoid shuffles entirely.

Example (PySpark):

from pyspark.sql.functions import broadcast

fact = spark.read.table("fact_sales")
dim  = spark.read.table("dim_product")

# If dim_product is small enough, broadcast it
joined = fact.join(broadcast(dim), on="product_id", how="left")

If you can’t broadcast safely, move to the next lever.

2) Make the shuffle less painful: tune shuffle parallelism

Spark controls the number of shuffle partitions for joins and aggregations with spark.sql.shuffle.partitions (default: 200 in Spark SQL).

  • Too few partitions → huge partitions → long tasks, spills, and stragglers.
  • Too many partitions → tiny tasks → scheduling overhead and excess shuffle metadata.

Example (session-level setting):

spark.conf.set("spark.sql.shuffle.partitions", "400")

A decent heuristic is to start with something proportional to total executor cores and then iterate using the Spark UI (watch stage task durations, shuffle read/write sizes, and spill metrics).

3) Let Spark fix itself (when it can): enable AQE

Adaptive Query Execution (AQE) uses runtime statistics to optimize a query as it runs.

Fabric’s Spark best practices recommend enabling AQE to dynamically optimize shuffle partitions and handle skewed data automatically.

AQE is particularly helpful when:

  • Your input data distribution changes day-to-day
  • A static spark.sql.shuffle.partitions value is right for some workloads but wrong for others
  • You hit skew where a small number of partitions do most of the work

Example:

spark.conf.set("spark.sql.adaptive.enabled", "true")

4) Diagnose like you mean it: what to look for in Spark UI

When a job is slow, treat it like a shuffle problem until proven otherwise.

Look for:

  • Stages where a handful of tasks take dramatically longer than the median (classic skew)
  • Large shuffle read/write sizes concentrated in a small number of partitions
  • Spill (memory → disk) spikes during joins/aggregations

When you see skew, your options are usually:

  • Broadcast (if feasible)
  • Repartition on a better key
  • Salt hot keys (advanced; see the sketch after this list)
  • Enable AQE and confirm it’s actually taking effect
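
For completeness, here is a hedged sketch of key salting; table and column names are illustrative, and the salt count is something you tune against the observed skew.

from pyspark.sql import functions as F

N_SALTS = 8  # increase with the severity of the skew

fact = spark.read.table("fact_sales")
dim  = spark.read.table("dim_product")

# Spread each hot product_id across N_SALTS shuffle partitions
fact_salted = fact.withColumn("salt", (F.rand() * N_SALTS).cast("int"))

# Replicate the small side once per salt value so every (key, salt) pair can match
salts = spark.range(N_SALTS).select(F.col("id").cast("int").alias("salt"))
dim_salted = dim.crossJoin(salts)

joined = fact_salted.join(dim_salted, on=["product_id", "salt"], how="left")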

A minimal checklist for Fabric Spark teams

  1. Use DataFrame APIs (keep Catalyst in play).
  2. Broadcast small lookup tables to avoid shuffles.
  3. Set a sane baseline for spark.sql.shuffle.partitions.
  4. Enable AQE and validate in the query plan / UI.
  5. Iterate with the Spark UI: measure, change one thing, re-measure.

This post was written with help from ChatGPT 5.2