Execute DAX Queries REST API (Preview)
This new API lets you run DAX queries against Power BI semantic models and receive results in the Apache Arrow IPC format. It is the same API that DAX query view uses in Power BI Desktop and in the Power BI service, now exposed as a public REST API that you can integrate into your custom Power BI solutions.
Why a new API?
Unlike the existing Execute Queries API, which returns results in JSON format, the new Execute DAX Queries API relies on Apache Arrow IPC. The results are in columnar binary format and support native data-type fidelity. There are no fixed row limits, and you can submit multiple DAX EVALUATE statements in a single request.
The new API is only available on Power BI Premium and Microsoft Fabric capacities, and any caller must be able to process binary Arrow streams. Arrow libraries are available for Python, C#, Java, JavaScript, and many other languages, making integration straightforward across most platforms and toolchains.
Getting started
Fabric notebooks provide the most straightforward way to get started with the Execute DAX Queries API because Fabric notebooks don’t require an explicit app registration in Microsoft Entra ID for authentication.
Step one: acquire your access token, as in the following code snippet:
import notebookutils # available in every Fabric notebook runtime
# Power BI resource URI — must match this exact value
PBI_RESOURCE = "https://analysis.windows.net/powerbi/api"
# Acquire a Microsoft Entra ID token for the current user (or workspace identity)
# using the notebook's built-in credential provider.
access_token = notebookutils.credentials.getToken(PBI_RESOURCE)
if access_token is None:
    raise RuntimeError("Token acquisition failed")
Figure: Acquire a Microsoft Entra ID token in a Fabric notebook.
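Tokens returned by notebookutils are standard Microsoft Entra ID JWTs, so before a long-running extraction you can check how long the token remains valid by decoding its `exp` claim (payload inspection only, no signature verification). The helper below, and the locally built sample token used to demo it, are illustrative and not part of the API; in a notebook you would pass the real `access_token` instead:

```python
import base64
import json
import time

def token_expires_in(jwt_token: str) -> float:
    """Return the seconds remaining until the token's `exp` claim."""
    payload_b64 = jwt_token.split(".")[1]
    # Restore base64url padding before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["exp"] - time.time()

# Hypothetical sample token (header.payload.signature) built locally
# so the snippet is runnable without a real credential.
sample_payload = base64.urlsafe_b64encode(
    json.dumps({"exp": int(time.time()) + 3600}).encode()
).rstrip(b"=").decode()
sample_token = f"eyJhbGciOiJub25lIn0.{sample_payload}.sig"
print(round(token_expires_in(sample_token)))
```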
Next, it’s a good idea to define a helper function that executes DAX queries and returns a pandas DataFrame from Arrow IPC. Helper functions promote code reuse, improve readability, and simplify maintenance when API parameters change:
import io
import requests
import pandas as pd
import pyarrow as pa
def execute_dax_to_pandas(workspace_id: str, dataset_id: str, query: str) -> pd.DataFrame:
    url = (
        f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
        f"/datasets/{dataset_id}/executeDaxQueries"
    )
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    body = {
        "query": query,
        "resultsetRowcountLimit": 500000,
    }
    response = requests.post(url, headers=headers, json=body, timeout=180)
    response.raise_for_status()
    # Decode the Arrow IPC stream in the response body and convert to pandas
    reader = pa.ipc.open_stream(io.BytesIO(response.content))
    table = reader.read_all()
    return table.to_pandas()
Figure: Helper to execute DAX and return a pandas DataFrame.
Finally, define the parameters, call the helper function, persist the results in a Delta table, and check the results. Make sure the column names adhere to Delta table naming restrictions, and apply V-Order compression to the Delta table for best read performance, for example if you plan to use the table subsequently in Direct Lake models.
import re

dax_query = """
EVALUATE YOUR_TABLE_NAME
"""
workspace_id = "YOUR_WORKSPACE_ID"
semantic_model_id = "YOUR_DATASET_ID"

# Execute the query and get the pandas DataFrame
df = execute_dax_to_pandas(workspace_id, semantic_model_id, dax_query)

# Convert pandas -> Spark and replace characters that Delta
# does not allow in column names with underscores
spark_df = spark.createDataFrame(df)
spark_df = spark_df.toDF(*[re.sub(r"[ ,;{}()\n\t=]", "_", c) for c in spark_df.columns])

# Persist as a V-Ordered Delta table for Direct Lake consumption
target = "semantic_extract_delta"
spark_df.write.format("delta") \
    .mode("overwrite") \
    .option("overwriteSchema", "true") \
    .option("delta.parquet.vorder.enabled", "true") \
    .saveAsTable(target)

spark.sql(f"SELECT * FROM {target}").show(truncate=False)
Figure: Exporting DAX query results into a Delta table.
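The column-name cleanup in the snippet above is worth testing in isolation, since DAX result columns often contain spaces or other characters that Delta rejects. This standalone sketch wraps the same regex in a small helper and runs it over a few hypothetical column names (the names are examples, not actual API output):

```python
import re

# Characters that Delta does not allow in column names
_INVALID = re.compile(r"[ ,;{}()\n\t=]")

def sanitize_column(name: str) -> str:
    """Replace characters Delta rejects in column names with underscores."""
    return _INVALID.sub("_", name)

# Hypothetical column names as they might come back from a DAX query
for original in ["Sales[Amount USD]", "Customer{Region}", "Net=Gross-Tax"]:
    print(f"{original!r} -> {sanitize_column(original)!r}")
```

Keeping the rule in one function means that if you later tighten the restrictions, for example to also strip square brackets from DAX-style `Table[Column]` names, there is a single place to change.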
For more detailed tutorials, refer to the Mastering the Execute DAX Queries REST API content in the Power BI Developer Documentation (Microsoft Learn).