Microsoft Fabric is a powerful tool for data engineers, enabling them to build out a lakehouse architecture for their organizational data. In this blog post, we will walk you through the key experiences that Microsoft Fabric offers data engineers.
Creating a Lakehouse
A lakehouse is a new experience that combines the power of a data lake and a data warehouse. It serves as a central repository for all Fabric data. To create a lakehouse, you start by creating a new lakehouse artifact and giving it a name. Once created, you land in the empty Lakehouse Explorer.
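If you prefer to script this step, the Fabric REST API also exposes a create-lakehouse endpoint. Below is a minimal Python sketch; the workspace ID, token acquisition, and lakehouse name are placeholders you would fill in for your own environment.

```python
# Sketch: create a lakehouse with the Fabric REST API.
# Assumes you already have a Microsoft Entra bearer token with
# Fabric API permissions; IDs and names below are placeholders.
import requests

WORKSPACE_ID = "<your-workspace-id>"
TOKEN = "<your-bearer-token>"

response = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/lakehouses",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"displayName": "MarketingLakehouse"},
)
response.raise_for_status()
print(response.json())  # the new lakehouse's id and properties
```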
Importing Data into the Lakehouse
There are several ways to bring data into the lakehouse. You can upload files and folders from your local machine, use dataflows (a low-code tool with hundreds of connectors), or leverage the pipeline copy activity to bring in petabytes of data at scale. In our marketing scenario, most of the data lands in Delta tables, which are created automatically with no additional effort. You can easily explore the tables, see their schema, and even view the underlying files.
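For example, here is roughly what loading a file into a Delta table looks like from a Fabric notebook attached to the lakehouse (the file path and table name are illustrative):

```python
# Run inside a Fabric notebook with this lakehouse attached.
# 'spark' is the session Fabric provides; paths and names are illustrative.
df = spark.read.option("header", True).csv("Files/raw/campaigns.csv")

# Saving as a managed table creates a Delta table under Tables/
# automatically; no extra configuration is needed.
df.write.mode("overwrite").saveAsTable("campaigns")

spark.table("campaigns").printSchema()  # inspect the inferred schema
```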
Adding Unstructured Data
In addition to structured data, you might want to add unstructured customer reviews to accompany your campaign data. If this data already exists in storage, you can simply point to it, with no data movement necessary. This is done by adding a new shortcut, which creates virtual tables and virtual files inside your lakehouse. Shortcuts let you select from a variety of sources, including other lakehouses and warehouses in Fabric, as well as external storage such as ADLS Gen2 and even Amazon S3.
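For reference, shortcuts can also be created programmatically through the OneLake shortcuts REST API. The sketch below shows the general shape of a request targeting ADLS Gen2; the workspace, lakehouse, and connection IDs are placeholders.

```python
# Sketch: create an ADLS Gen2 shortcut via the OneLake shortcuts API.
# Workspace ID, lakehouse (item) ID, and connection ID are placeholders.
import requests

WORKSPACE_ID = "<your-workspace-id>"
LAKEHOUSE_ID = "<your-lakehouse-id>"
TOKEN = "<your-bearer-token>"

body = {
    "path": "Files",             # where the shortcut appears in the lakehouse
    "name": "customer-reviews",  # name of the shortcut
    "target": {
        "adlsGen2": {
            "url": "https://<account>.dfs.core.windows.net",
            "subpath": "/<container>/reviews",
            "connectionId": "<your-connection-id>",
        }
    },
}

response = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{LAKEHOUSE_ID}/shortcuts",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
response.raise_for_status()
```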
Leveraging the Data
Once all your data is ready in the lakehouse, there are many ways to use it. As a data engineer or data scientist, you can open the lakehouse in a notebook and leverage Spark to continue transforming the data or to build a machine learning model. As a SQL professional, you can navigate to the SQL endpoint of the lakehouse, where you can write SQL queries and create views and functions, all on top of the same Delta tables. As a business analyst, you can navigate to the built-in modeling view and start developing your BI data model directly in that same warehouse-like experience.
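As a quick illustration, here is a hypothetical notebook snippet that transforms a Delta table with the Spark DataFrame API and then queries the same table with SQL, mirroring what a SQL professional could do through the SQL endpoint (table and column names are invented for this example):

```python
# Illustrative only: table and column names are made up for this example.
campaigns = spark.table("campaigns")

# Transform with the Spark DataFrame API...
by_channel = campaigns.groupBy("channel").count()
by_channel.show()

# ...or query the very same Delta table with SQL, much as a SQL
# professional would through the lakehouse's SQL endpoint.
spark.sql("""
    SELECT channel, COUNT(*) AS campaign_count
    FROM campaigns
    GROUP BY channel
""").show()
```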
Configuring your Spark Environment
As an administrator, you can configure the Spark environment for your data engineers and data scientists. This is done in the capacity admin portal, where you can set a default runtime and default Spark properties, and also turn on the ability for workspace admins to configure their own custom Spark pools.
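Those defaults are set through the admin portal UI, but to make the idea concrete: a Spark property is just a key/value pair. Here is an illustrative session-level override from a notebook (the property chosen is only an example):

```python
# Illustrative only: admin defaults are configured in the Fabric admin
# portal, but any Spark property can also be overridden for the
# current notebook session, as shown here.
spark.conf.set("spark.sql.shuffle.partitions", "200")
print(spark.conf.get("spark.sql.shuffle.partitions"))  # -> 200
```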
Collaborative Data Development
Microsoft Fabric also provides a rich developer experience, enabling users to collaborate easily, work with their lakehouse data, and leverage the power of Spark. You can view your colleagues’ code updates in real time, install ML libraries for your project, and use the built-in charting capabilities to explore your data. Notebooks also include a built-in resource folder, which makes it easy to store scripts or other code files you might need for the project.
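To give a flavor of this, here is a hypothetical notebook snippet that combines these pieces: installing a library, reading a file from the built-in resource folder, and rendering the result with the charting-capable display (the library and file names are placeholders):

```python
# Hypothetical notebook snippet; library and file names are placeholders.
# Install a session-scoped library (typically run in its own cell):
%pip install scikit-learn

import pandas as pd

# Files stored in the notebook's built-in resource folder are
# available under the relative 'builtin/' path.
reviews = pd.read_csv("builtin/reviews_sample.csv")
display(reviews)  # Fabric's display() offers built-in charting options
```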
In conclusion, Microsoft Fabric provides a frictionless experience for data engineers building out their enterprise data lakehouse and makes it easy to democratize that data for every user in the organization. By combining the power of a data lake and a data warehouse, it delivers a comprehensive solution for data engineering tasks.
This blog post was created with help from ChatGPT Pro.
