Lakehouse Table Optimization: VACUUM, OPTIMIZE, and Z-ORDER

If your Lakehouse tables are getting slower (or more expensive) over time, it’s often not “Spark is slow.” It’s usually table layout drift: too many small files, suboptimal clustering, and old files piling up.

In Fabric Lakehouse, the three table-maintenance levers you’ll reach for most are:

  • OPTIMIZE: compacts many small files into fewer, larger files (and can apply clustering)
  • Z-ORDER: co-locates related values to improve data skipping for common filters
  • VACUUM: deletes old files that are no longer referenced by the Delta transaction log (after a retention window)

Practical note: in Fabric, run these as Spark SQL in a notebook or Spark job definition (or use the Lakehouse maintenance UI). Don’t try to run them in the SQL Analytics Endpoint.

1) Start with the symptom: “small files” vs “bad clustering”

Before you reach for maintenance, quickly sanity-check what you’re fighting:

  • Many small files → queries spend time opening/reading lots of tiny Parquet files.
  • Poor clustering for your most common predicates (date, tenantId, customerId, region, etc.) → queries scan more data than they need.
  • Heavy UPDATE/DELETE/MERGE patterns → lots of small rewritten files, plus old file versions retained for time travel.

If you only have small files, OPTIMIZE is usually your first win.
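
If you want numbers instead of a hunch, Delta's DESCRIBE DETAIL is a cheap first check: it reports file count and total size, so you can estimate the average file size (a sketch; my_table is a placeholder name).

-- numFiles and sizeInBytes tell you roughly how fragmented the table is
-- (average file size ≈ sizeInBytes / numFiles)
DESCRIBE DETAIL my_table;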

2) OPTIMIZE: bin-packing for fewer, bigger files

Basic compaction

OPTIMIZE my_table;

Target a subset (the WHERE clause can only filter on partition columns, for example recent date partitions)

OPTIMIZE my_table WHERE date >= date_sub(current_date(), 7);

A useful mental model: OPTIMIZE is rewriting file layout (not changing table results). It’s maintenance, not transformation.
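
To confirm a run actually compacted something, the table history records the OPTIMIZE commit along with operation metrics such as how many files were removed and added (sketch; my_table is a placeholder).

-- The latest entries should include an OPTIMIZE operation;
-- operationMetrics shows counts of files removed/added
DESCRIBE HISTORY my_table LIMIT 5;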

3) Z-ORDER: make your filters cheaper

Z-Ordering is for the case where you frequently query:

  • WHERE tenantId = ...
  • WHERE customerId = ...
  • WHERE deviceId = ... AND eventTime BETWEEN ...

Example:

OPTIMIZE my_table ZORDER BY (tenantId, eventDate);

Pick 1–3 columns that dominate your interactive workloads. If you try to z-order on everything, you’ll mostly burn compute for little benefit.
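
You can also combine Z-Ordering with a partition filter so you only rewrite recent data. A minimal sketch, assuming date is the partition column and tenantId is a frequent filter; Z-ORDER columns should be the high-cardinality columns you filter on, not the partition columns themselves.

-- Re-cluster only the last week of partitions
OPTIMIZE my_table
WHERE date >= date_sub(current_date(), 7)
ZORDER BY (tenantId);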

4) VACUUM: clean up old, unreferenced files (carefully)

VACUUM is about storage hygiene. Delta keeps old files around to support time travel and concurrent readers. VACUUM deletes files that are no longer referenced and older than the configured retention threshold (7 days by default).

VACUUM my_table;

Two practical rules:

  1. Don’t VACUUM aggressively unless you understand the impact on time travel / rollback.
  2. Treat the retention window as a governance decision (what rollback window do you want?), not just a cost optimization.
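
Before running it for real, two variants are worth knowing (sketch; the retention value below just restates the 7-day default).

-- Preview which files would be deleted, without deleting anything
VACUUM my_table DRY RUN;

-- Delete unreferenced files older than 168 hours (7 days)
VACUUM my_table RETAIN 168 HOURS;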

5) Fabric-specific gotchas (the ones that actually bite)

Where you can run these commands

These are Spark SQL maintenance commands. In Fabric, that means notebooks / Spark job definitions (or the Lakehouse maintenance UI), not the SQL Analytics Endpoint.

V-Order and OPTIMIZE

Fabric also has V-Order, which is a Parquet layout optimization aimed at faster reads across Fabric engines. If you’re primarily optimizing for downstream read performance (Power BI/SQL/Spark), it’s worth understanding whether V-Order is enabled for your workspace and table writes.
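
If you want to check or control this explicitly, something like the following works from a Fabric notebook. The config name (spark.sql.parquet.vorder.enabled) and the VORDER keyword on OPTIMIZE are the documented Fabric ones, but names can differ by runtime version, so treat this as a sketch and verify against the current docs.

-- Is V-Order writing enabled for this Spark session?
SET spark.sql.parquet.vorder.enabled;

-- Rewrite files with V-Order applied (Fabric extension of OPTIMIZE)
OPTIMIZE my_table VORDER;

-- Or combine compaction, Z-Ordering and V-Order in one pass
OPTIMIZE my_table ZORDER BY (tenantId) VORDER;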

A lightweight maintenance pattern that scales

  • Nightly/weekly: OPTIMIZE high-value tables (or recent partitions)
  • Weekly/monthly: Z-ORDER tables with stable query patterns
  • Monthly: VACUUM tables where your org’s time travel policy is clear

Treat it like index maintenance: regular, boring, measurable.
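
As a concrete starting point, a scheduled maintenance cell for one high-value table might look like this (sketch; sales_orders, date, and customerId are placeholder names, and the cadence is whatever your schedule runs at).

-- Weekly: compact recent partitions and keep clustering fresh
OPTIMIZE sales_orders
WHERE date >= date_sub(current_date(), 7)
ZORDER BY (customerId);

-- Monthly: remove unreferenced files outside the agreed retention window
VACUUM sales_orders RETAIN 168 HOURS;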
