
Execute DAX Queries REST API (Preview)


This new API lets you run DAX queries against Power BI semantic models and receive results in Apache Arrow IPC format. It is the same API that powers DAX Query View in Power BI Desktop and in the Power BI service, now exposed as a public REST API that you can integrate into your custom Power BI solutions.

Why a new API?

Unlike the existing Execute Queries API, which returns results in JSON format, the new Execute DAX Queries API relies on Apache Arrow IPC. The results are in columnar binary format and support native data-type fidelity. There are no fixed row limits, and you can submit multiple DAX EVALUATE statements in a single request.

The new API is only available on Power BI Premium and Microsoft Fabric capacities, and any caller must be able to process binary Arrow streams. Arrow libraries are available for Python, C#, Java, JavaScript, and many other languages, making integration straightforward across most platforms and toolchains.

Getting started

Fabric notebooks provide the most straightforward way to get started with the Execute DAX Queries API because they don’t require an explicit app registration in Microsoft Entra ID for authentication.

Step one: acquire an access token, as in the following code snippet:

 

import notebookutils  # available in every Fabric notebook runtime

# Power BI resource URI — must match this exact value
PBI_RESOURCE = "https://analysis.windows.net/powerbi/api"

# Acquire a Microsoft Entra ID token for the current user (or workspace identity)
# using the notebook's built-in credential provider.
access_token = notebookutils.credentials.getToken(PBI_RESOURCE)
if access_token is None:
    raise RuntimeError("Token acquisition failed")

 


Figure: Acquire a Microsoft Entra ID token in a Fabric notebook.
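Entra ID access tokens are JWTs, so before a long-running extract you can inspect when the token expires and re-acquire it if needed. Here is a minimal sketch using only the standard library; the `token_expiry` helper is hypothetical, not part of notebookutils:

```python
import base64
import json

def token_expiry(jwt: str) -> int:
    """Return the 'exp' claim (Unix seconds) from a JWT access token."""
    # The payload is the second dot-separated, base64url-encoded segment
    payload_b64 = jwt.split(".")[1]
    # Restore the base64 padding that JWTs strip
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["exp"]
```

Compare `token_expiry(access_token)` against `time.time()` and call `notebookutils.credentials.getToken` again when the token is close to expiring.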

Next, define a helper function that executes a DAX query and converts the Arrow IPC response into a pandas DataFrame. A helper promotes code reuse, improves readability, and simplifies maintenance when API parameters change:

import io
import requests
import pandas as pd
import pyarrow as pa

def execute_dax_to_pandas(workspace_id: str, dataset_id: str, query: str) -> pd.DataFrame:
    """Execute a DAX query and return the Arrow IPC response as a pandas DataFrame."""
    url = (
        f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
        f"/datasets/{dataset_id}/executeDaxQueries"
    )
    headers = {
        "Authorization": f"Bearer {access_token}",  # token acquired earlier
        "Content-Type": "application/json"
    }
    body = {
        "query": query,
        "resultsetRowcountLimit": 500000
    }

    response = requests.post(url, headers=headers, json=body, timeout=180)
    response.raise_for_status()

    # The response body is a binary Arrow IPC stream; read all record
    # batches into a single Arrow table, then convert it to pandas.
    reader = pa.ipc.open_stream(io.BytesIO(response.content))
    table = reader.read_all()
    return table.to_pandas()


Figure: Helper to execute DAX and return a pandas DataFrame.

Finally, define the parameters, call the helper function, persist the results in a Delta table, and check the output. Make sure the column names adhere to Delta table naming restrictions, and apply V-Order compression to the Delta table for best read performance, for example if you plan to use the table subsequently in Direct Lake models.

import re

dax_query = """
EVALUATE YOUR_TABLE_NAME
"""
workspace_id = "YOUR_WORKSPACE_ID"
semantic_model_id = "YOUR_DATASET_ID"

# Execute the query and get the pandas DataFrame
df = execute_dax_to_pandas(workspace_id, semantic_model_id, dax_query)

# Convert pandas -> Spark and replace characters that Delta
# does not allow in column names
spark_df = spark.createDataFrame(df)
spark_df = spark_df.toDF(*[re.sub(r"[ ,;{}()\n\t=]", "_", c) for c in spark_df.columns])

# Persist as a V-Ordered Delta table for Direct Lake consumption
target = "semantic_extract_delta"

spark_df.write.format("delta") \
    .mode("overwrite") \
    .option("overwriteSchema", "true") \
    .option("delta.parquet.vorder.enabled", "true") \
    .saveAsTable(target)

spark.sql(f"SELECT * FROM {target}").show(truncate=False)


Figure: Exporting DAX query results into a Delta table.
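The column-name sanitization used above is worth checking in isolation; the sample column names below are made up for illustration:

```python
import re

# Characters that Delta tables reject in column names
INVALID_DELTA_CHARS = r"[ ,;{}()\n\t=]"

def sanitize(col: str) -> str:
    """Replace characters Delta tables reject with underscores."""
    return re.sub(INVALID_DELTA_CHARS, "_", col)

print(sanitize("Sales Amount (USD)"))  # → Sales_Amount__USD_
print(sanitize("Region=East;West"))    # → Region_East_West
```

Note that each offending character becomes its own underscore, so adjacent invalid characters produce consecutive underscores; collapse them if you prefer cleaner names.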

 

For more detailed tutorials, refer to the Mastering the Execute DAX Queries REST API content in the Power BI Developer Documentation (Microsoft Learn).

 
