What I learned
When you add a column to a lakehouse source table, your DirectLake semantic model can break with a cryptic error. The fix is to update the model metadata to reflect the new schema — not to debug DAX, relationships, or permissions.
Context
DirectLake is fast because it reads straight from OneLake. That also makes it less forgiving when your semantic model metadata drifts away from the lakehouse schema.
What happened
I added a new column to a source table in a lakehouse. After that, the semantic model stopped working in DirectLake mode and threw this error:
A child can’t be both removed and included
What the error is actually telling you
This is usually a schema sync problem.
The lakehouse table changed, but the semantic model still has an older view of the table definition. When DirectLake tries to frame the query, it hits a contradiction in the model metadata. A column is effectively being treated as both present and excluded.
How to fix it
If you manage the model in TMDL / VS Code, update the model definition so the new source column is explicitly declared, then redeploy.
column your_new_column
    dataType: string
    sourceColumn: your_new_column
    summarizeBy: none
Before TMDL, the usual workaround was to recreate the semantic model entirely, because of how finicky DirectLake is about schema metadata.
Why this matters in DirectLake
Import models can often absorb schema changes after a refresh.
DirectLake is stricter. I don't assume new columns will just appear cleanly in the model. If the semantic model is code-managed, the schema change has to be reflected in the model definition too.
Practical takeaway
If you add a column upstream and DirectLake starts failing, check schema alignment before you debug DAX, relationships, or permissions.
In this case, the fix was not in the measure layer. It was in the model metadata.
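As a quick sanity check before touching measures, you can diff the column lists from both sides. This is a minimal sketch, not an official API: the column lists are hypothetical placeholders, and in practice you would pull them from the lakehouse (e.g. via Spark) and from your TMDL model definition.

```python
# Minimal sketch: detect schema drift between a lakehouse table and the
# columns a semantic model declares. All column names here are
# hypothetical; substitute the real lists from your environment.

def find_schema_drift(lakehouse_columns, model_columns):
    """Return (columns missing from the model, columns the model declares
    that no longer exist in the lakehouse)."""
    lake = set(lakehouse_columns)
    model = set(model_columns)
    return sorted(lake - model), sorted(model - lake)

# The lakehouse gained a column upstream; the model metadata is stale.
lakehouse_columns = ["order_id", "amount", "your_new_column"]
model_columns = ["order_id", "amount"]

missing_in_model, stale_in_model = find_schema_drift(lakehouse_columns, model_columns)
print("Declare these in the model:", missing_in_model)   # -> ['your_new_column']
print("Remove these from the model:", stale_in_model)    # -> []
```

If the first list is non-empty, declare those columns in the model definition and redeploy before looking anywhere else.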