## 1. Start with requirements and solution purpose

Every Power BI project should start by clarifying the business purpose of the solution.

The first questions should be:

```
Why is this solution needed?
Who will use it?
Which decisions will it support?
Which business questions must it answer?
Which KPIs define success?
Which existing report, Excel file, or process does it replace or improve?
```

Document four types of requirements.

### Business requirements

```
Business goals
Business questions
KPIs
Decision-making process
Current pain points
Current manual reports or Excel files
Expected actions from the report
```

### Technical requirements

```
Data sources
Refresh frequency
Data volume
Data latency expectations
Gateway requirements
Fabric / Power BI capacity constraints
Integration requirements
Source system limitations
```

### User and access requirements

```
User groups
Viewer roles
Developer roles
Workspace roles
App audiences
RLS requirements
OLS requirements
Sensitive data areas
External sharing requirements
```

### Design requirements

```
Corporate theme
Fonts
Colors
Logos
Page layout
Required visual types
Navigation requirements
Tooltip requirements
Drillthrough requirements
Mobile requirements
Accessibility requirements
```

Getting access to the current report or Excel file is highly recommended. It usually contains hidden calculations, business exceptions, manual filters, and terminology that are not captured in formal requirements.

---

## 2. Plan the Power BI / Fabric architecture before development

Before building the report, define the structure of the solution.

A modern Power BI solution can include:

```
Workspaces
Lakehouses
Warehouses
Dataflows Gen2
Pipelines
Semantic models
Reports
Apps
Deployment pipelines
Git repositories
```

For larger organizations, organize content by business domain when it matches the client’s operating model.

Examples:

```
Finance
Sales
Marketing
HR
Operations
Customer Service
Risk
```

For simple projects or centralized BI teams, do not force a domain structure. Keep the architecture simple.

At minimum, separate development and production:

```
Development / Test
Production
```

For controlled enterprise delivery, use:

```
Development
Test / UAT
Production
```

Recommended workspace naming:

```
FIN - Sales Performance [Dev]
FIN - Sales Performance [Test]
FIN - Sales Performance [Prod]

MKT - Campaign Analytics [Dev]
MKT - Campaign Analytics [Prod]

HR - Workforce Reporting [Prod]
```

Use short names that communicate ownership, content, and lifecycle stage.

---

## 3. Design security early

Security requirements should be collected during requirements gathering because they can affect the data model.

Use workspace, app, or report-level access when the requirement is simple:

```
Developers can edit.
Business users can view.
Management has access to executive reports.
Operational teams have access to operational reports.
```

Use Row-Level Security or Object-Level Security when users need different views of the same semantic model:

```
Country managers see only their country.
Sales managers see only their region.
HR users cannot see salary fields unless authorized.
External users see only their client portfolio.
```

Use the simplest security model that satisfies the requirement. RLS and OLS are powerful, but they increase testing, troubleshooting, and governance complexity.

Security rules should be validated with real user examples, not only with technical role names.
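
For example, the country-manager rule above can be expressed as an RLS role filter in DAX. The `Region` table and `Manager Email` column below are illustrative names, assuming login emails map to managers:

```
-- Role: Country Manager (table filter on the Region dimension)
-- 'Manager Email' is an illustrative column; it must match the user's login (UPN)
'Region'[Manager Email] = USERPRINCIPALNAME()
```

Because the filter flows from the dimension to the facts through single-direction relationships, one rule on the dimension secures every related fact table.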

---

## 4. Define an MVP before the full build

Build and validate an MVP before completing the full report.

There are three common MVP approaches.

### Vertical MVP

Build one slice of the report with near-final quality.

```
Real data
Real calculations
Final design style
Navigation
Interactions
Tooltips
Security behavior if relevant
```

Use this when the report involves complex calculations, demanding UX, or high stakeholder expectations.

### Horizontal MVP

Build a light version of the full report.

```
Main pages
Main KPIs
Simplified calculations
Basic design
Limited interactivity
```

Use this when stakeholders need to validate the overall structure.

### Mockup MVP

Build a wireframe or static version without complete data integration.

Use this when:

```
The data is not ready.
The design needs early approval.
Stakeholders are still aligning on scope.
The report logic is not yet stable.
```

Validate the MVP on:

```
Data and calculations
Design and usability
Content and business relevance
Performance expectations
Security expectations
```

If the feedback reveals significant gaps, create a second MVP iteration before scaling development.

---

## 5. Create reusable design standards

Before building all report pages, create reusable design assets.

Recommended assets:

```
Power BI theme JSON
Example report template
Page layout examples
KPI card examples
Chart examples
Navigation examples
Tooltip examples
Logo library
Icon library
Color palette
Typography rules
```

Use a consistent visual language across the report. The April 2026 Power BI update includes continued improvements to reporting, visuals, modeling, Copilot, and theme customization, but a governed theme and reusable design template are still necessary for consistency.
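
A theme file does not need to be exhaustive to be useful. A minimal sketch of the theme JSON structure, with illustrative name, colors, and font:

```
{
    "name": "Corporate Theme",
    "dataColors": ["#1F4E79", "#2E75B6", "#70AD47", "#FFC000"],
    "textClasses": {
        "title": { "fontFace": "Segoe UI Semibold", "fontSize": 14 }
    }
}
```

Save it as a `.json` file and apply it through the Themes menu in Power BI Desktop; visual-level defaults can be layered on later as the standard matures.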

---

## 6. Use modern source control and development formats

For very small projects, manual versioning can be acceptable:

```
SalesReport_v1.0.pbix
SalesReport_v1.1.pbix
SalesReport_v2.0.pbix
```

For professional development, prefer Power BI Project files:

```
.pbip
```

PBIP saves the report and semantic model as folders containing text files, which makes the project more suitable for Git and collaboration. Microsoft currently documents PBIP save as a preview feature in Power BI Desktop, so confirm client or tenant readiness before making it the only accepted delivery format.

Modern project formats include:

```
PBIP  = Power BI Project
PBIR  = report definition format
TMDL  = semantic model definition format
PBIX  = packaged binary report file
```

PBIR and TMDL are important because they make Power BI assets more source-control friendly: PBIR stores report objects such as pages, visuals, and bookmarks in structured JSON files, while TMDL is used for the semantic model definition.
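
With the PBIR and TMDL formats enabled, a saved project looks roughly like this on disk (illustrative project name; the exact files vary by Power BI Desktop version):

```
SalesPerformance.pbip
SalesPerformance.Report/
    definition.pbir
    definition/
        pages/
            ...
SalesPerformance.SemanticModel/
    definition.pbism
    definition/
        model.tmdl
        tables/
            Sales.tmdl
            Date.tmdl
```

Because each page, table, and measure lives in its own text file, Git diffs and pull-request reviews become meaningful instead of opaque binary changes.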

Recommended maturity model:

```
Level 1: Manual PBIX versioning
Level 2: SharePoint / OneDrive version history
Level 3: PBIP with Git
Level 4: Fabric Git integration
Level 5: Fabric Git + deployment pipelines / CI-CD
Level 6: Automated deployment with fabric-cicd or equivalent DevOps process
```

![Semi-manual ALM: author .pbix locally, sync through SharePoint or OneDrive for file-level versioning, then refresh into a Fabric workspace for semantic models and reports](/images/resources/04-pbi-semi-manual-verions-control-sharepoint.png)

Fabric Git integration supports multiple Fabric item types, including Dataflow Gen2, pipelines, lakehouses, notebooks, semantic models, reports, and other Fabric artifacts depending on item type and preview status.

For advanced teams, Microsoft also documents `fabric-cicd`, a Microsoft-backed open-source Python library for deploying Fabric items from source control, including semantic models and reports using PBIP format.

![Git-based ALM: .pbip and model metadata in Azure DevOps with branch/merge and PRs; sync or deploy to Fabric, with optional round-trip commits from the workspace](/images/resources/05-pbi_verion-control-git-integration.png)

---

## 7. Use Power Query and Dataflows intentionally

Push reusable transformation logic as far upstream as possible.

Preferred order:

```
Source system / warehouse / lakehouse
Dataflow Gen2 / Fabric pipeline / dbt / ETL
Power Query
DAX calculated column
```

Use Power Query for:

```
Light shaping
Report-specific transformations
Small transformations that do not justify upstream engineering
Ad hoc data preparation during prototyping
```

Use upstream transformations for:

```
Reusable business logic
Large joins
Complex deduplication
Slowly changing dimensions
Data quality rules
Heavy aggregations
Type standardization across multiple reports
```

Power Query best practices:

```
Filter rows as early as possible.
Remove unnecessary columns.
Use native SQL carefully when it improves performance and governance.
Perform expensive operations, such as sorting, late in the query.
Rename query steps meaningfully.
Add descriptions where logic is not obvious.
Group queries into folders when the query list grows.
Create custom functions for repeated transformation logic.
Avoid complex chains of referenced queries when they duplicate execution.
```
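
For instance, repeated cleanup logic can be factored into a small custom M function (the function and column names below are illustrative):

```
// fnCleanText: trim, collapse line breaks, and proper-case a text value
let
    fnCleanText = (input as nullable text) as nullable text =>
        if input = null
        then null
        else Text.Proper(Text.Trim(Text.Replace(input, "#(lf)", " ")))
in
    fnCleanText
```

Other queries can then call it, for example `Table.TransformColumns(Source, {{"Customer Name", fnCleanText}})`, so the logic is defined once instead of copied into every query.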

Use Dataflow Gen2 when the prepared data should be reused across multiple reports, semantic models, or Fabric items. Dataflow Gen2 now supports CI/CD and Git integration, and according to Microsoft’s documentation new Dataflow Gen2 items are created with that support enabled by default.

For enterprise ALM, design Dataflow Gen2 with environment-specific configuration in mind, using parameters, variable libraries, or relative references so the same logic can move across Dev, Test, and Prod.

---

## 8. Choose the right semantic model storage mode

Do not treat the choice as only **Import vs DirectQuery**. Modern Power BI / Fabric solutions can use several modes.

| Mode | Use when |
| --- | --- |
| Import | Best default for performance and modeling flexibility |
| DirectQuery | Data must stay in the source or near-real-time access is required |
| Composite model | You need to combine different storage modes or semantic models |
| Direct Lake | You use Fabric / OneLake and need large-scale, high-performance access without traditional import refresh |
| Hybrid tables | You need imported historical data plus recent DirectQuery partitions |
| Live connection | You connect reports to an existing governed semantic model |

DirectQuery keeps data in the source and queries it at report time. Microsoft’s current DirectQuery guidance explicitly positions it against Import, hybrid tables, Direct Lake, and live connections, so storage mode should be chosen based on business and technical requirements, not habit.

Use Import by default when possible because it usually provides the best user experience and modeling flexibility.

Use DirectQuery only when the requirement justifies it:

```
Data cannot be imported.
Near-real-time access is required.
The source system must enforce security.
The data volume is too large for import and Direct Lake is not available.
```

DirectQuery best practices:

```
Keep DAX simple.
Reduce joins.
Avoid complex Power Query transformations.
Materialize transformations in the source.
Optimize source tables.
Use proper indexing, clustering, partitioning, or distribution in the source.
Limit visual interactions.
Use slicer Apply buttons.
Avoid expensive measure filters and Top N filters where possible.
Use Assume Referential Integrity only when the data truly supports it.
```

Use Direct Lake when working with Fabric and OneLake, especially for large Delta tables in lakehouses or warehouses. Direct Lake is a Power BI semantic model table storage mode optimized for loading large volumes of data from Delta tables in OneLake into memory, and Microsoft notes that Direct Lake and Import usually outperform DirectQuery for report interaction.

Direct Lake is especially relevant for IT-driven Fabric architectures, gold-layer lakehouse models, and large analytical datasets where full import refresh is impractical.

---

## 9. Design the semantic model as the governed business layer

The semantic model should be treated as a governed business product, not merely as the hidden dataset behind a report.

A good semantic model defines:

```
Business entities
Facts
Dimensions
Measures
Relationships
Hierarchies
Calculation groups
Security rules
Descriptions
Synonyms
AI instructions
Certified metrics
```

Use a star schema by default.

Example:

```
Fact tables:
Sales
Orders
Inventory
Campaign Sends
Support Tickets

Dimension tables:
Date
Customer
Product
Store
Channel
Campaign
Employee
Region
```

Power BI semantic models are optimized when tables support filtering, grouping, and summarization clearly. Microsoft’s star schema guidance states that dimension tables enable filtering and grouping, while fact tables enable summarization.

Avoid exposing source-shaped tables directly to business users unless the model is only for technical exploration.

Bad semantic model pattern:

```
salesforce_opportunity
stripe_charge
ga4_events
hubspot_contacts
```

Better semantic model pattern:

```
Opportunity
Customer
Subscription
Payment
Campaign
Date
Product
```

Hide technical columns:

```
Foreign keys
Surrogate keys
Row hashes
Source file names
Pipeline metadata
Unused source fields
```

Expose business-friendly fields:

```
Customer Name
Customer Segment
Product Category
Sales Amount
Orders Count
Active Customers Count
Campaign Conversion %
```

---

## 10. Use explicit measures and hide raw numeric columns

Do not rely on implicit aggregation for important metrics.

If a numeric column should only be used through a controlled aggregation, hide the column and expose a measure.

Example:

```
Hide:
Sales[Sales Amount]

Expose:
[Sales Amount]
[Sales Amount YTD]
[Sales Amount YoY %]
[Sales Amount Forecast]
[Sales Amount Actual]
```

This reduces user error and makes the model easier to understand.
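
A sketch of the pattern in DAX, assuming a marked Date table for the time-intelligence variants:

```
Sales Amount = SUM ( Sales[Sales Amount] )

Sales Amount YTD =
TOTALYTD ( [Sales Amount], 'Date'[Date] )

Sales Amount YoY % =
VAR PriorYear =
    CALCULATE ( [Sales Amount], DATEADD ( 'Date'[Date], -1, YEAR ) )
RETURN
    DIVIDE ( [Sales Amount] - PriorYear, PriorYear )
```

The `Sales[Sales Amount]` column stays hidden, so every report, and every Copilot answer, goes through the same governed definitions.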

Use consistent measure naming.

Recommended structure:

```
Concept [Qualification] [Subset] [Aggregation Type]
```

Examples:

```
Sales Amount
Sales Amount Actual
Sales Amount Forecast
Orders Count
Customers Active Count
Margin Gross %
Cost Actual CH Avg
```

Add descriptions to important measures, especially when the metric is ambiguous, business-critical, or likely to be used by Copilot / AI.

---

## 11. Use a proper Date table

Disable Auto date/time and use a governed Date table.

The Date table should include:

```
Date
Year
Quarter
Month
Month Name
Month Number
Week
Day of Week
Fiscal Year
Fiscal Period
Is Weekend
Is Holiday
Business Day Number
```
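
If no upstream Date table exists, a governed one can be generated in DAX. A minimal sketch covering a few of the columns above (extend it for fiscal and holiday logic):

```
Date =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2020, 1, 1 ), DATE ( 2026, 12, 31 ) ),
    "Year", YEAR ( [Date] ),
    "Quarter", "Q" & QUARTER ( [Date] ),
    "Month Name", FORMAT ( [Date], "MMMM" ),
    "Month Number", MONTH ( [Date] ),
    "Day of Week", FORMAT ( [Date], "dddd" ),
    "Is Weekend", WEEKDAY ( [Date], 2 ) > 5    -- 2 = Monday-based numbering
)
```

Mark it as a date table on the Date column so time-intelligence functions behave correctly.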

When the model has multiple date roles, make the logic clear.

Examples:

```
Order Date
Invoice Date
Payment Date
Ship Date
Delivery Date
Cancellation Date
```

Use duplicated role-playing Date tables when users need to filter multiple date types simultaneously.

Use one Date table with inactive relationships when one date is primary and the others are used only in specific measures.

Example:

```
Sales by Ship Date =
CALCULATE(
    [Sales Amount],
    USERELATIONSHIP('Date'[Date], Sales[Ship Date])
)
```

Avoid ambiguous report filters where users do not know which date they are applying.

---

## 12. Model relationships carefully

Use single-direction relationships from dimensions to facts by default.

Default pattern:

```
Date      → Sales
Customer  → Sales
Product   → Sales
Store     → Sales
Channel   → Sales
```

Avoid bi-directional relationships unless necessary. They can create ambiguous filter paths, unexpected results, and performance issues.

Before enabling bi-directional filtering, consider:

```
A bridge table
A specific DAX measure using CROSSFILTER
A revised model structure
A many-to-many pattern with explicit logic
A separate aggregate table
```
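
For example, when only one measure needs the reverse filter direction, scoping CROSSFILTER inside CALCULATE avoids a model-wide bi-directional relationship (illustrative table and column names):

```
Customers With Sales =
CALCULATE (
    DISTINCTCOUNT ( Customer[Customer Key] ),
    CROSSFILTER ( Sales[Customer Key], Customer[Customer Key], BOTH )
)
```

The relationship stays single-direction for everything else in the model; only this measure pays the ambiguity cost.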

For many-to-many dimension relationships, use a bridge table.

Example:

```
Customer
Bridge Customer Account
Account
```

![Many-to-many between dimensions using a bridge table; bi-directional filtering on the bridge path only when needed so filters reach facts correctly](/images/resources/06-pbi_many-to-many-relationship-dimesion-tables.png)

For fact-to-fact analysis, do not join fact tables directly. Use common dimensions.

Example:

```
Sales fact      → Date, Product, Customer
Targets fact    → Date, Product, Region
Forecast fact   → Date, Product, Region
```

![Relate multiple facts through shared dimensions (single-direction filters) instead of wiring facts directly to each other](/images/resources/07-pbi_many-to-many-relationship-fact-dimension-tables.png)

When facts are stored at different grains, control summarization with measures.

Example:

```
Sales: product + day
Target: product category + month
```

Good practices:

```
Store the first day of the month in monthly fact tables.
Relate monthly facts to the Date table using a valid date.
Return BLANK when users drill below the valid grain.
Hide raw fact columns and expose only governed measures.
```
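
The "return BLANK below the valid grain" rule can be sketched like this for a monthly target fact (illustrative names):

```
Target Amount =
IF (
    ISINSCOPE ( 'Date'[Date] ),        -- user drilled to day level
    BLANK (),                          -- below the monthly target grain
    SUM ( Targets[Target Amount] )
)
```

Showing BLANK is safer than silently repeating the monthly value on every day of the month.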

---

## 13. Reduce model size before optimizing visuals

Performance starts with the model.

Recommended actions:

```
Remove unused columns.
Remove unused rows.
Avoid loading detailed data that users will never analyze.
Use numeric data types where possible.
Avoid high-cardinality text columns in large fact tables.
Encode high-cardinality strings upstream when useful.
Create calculated columns upstream or in Power Query rather than DAX when possible.
Use incremental refresh for large imported fact tables.
Use aggregations when appropriate.
```
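
Incremental refresh depends on two reserved datetime parameters, RangeStart and RangeEnd, that filter the fact table in Power Query. A sketch with an illustrative SQL source and column names:

```
let
    // Illustrative source; any foldable source works the same way
    Source = Sql.Database("sql-server-name", "SalesDb"),
    SalesTable = Source{[Schema = "dbo", Item = "Sales"]}[Data],
    // Inclusive on one boundary, exclusive on the other,
    // so partitions never overlap or drop rows
    Filtered = Table.SelectRows(
        SalesTable,
        each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd
    )
in
    Filtered
```

Keep the filter foldable so the source system, not the gateway, performs the partition pruning.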

For facts stored at month, quarter, or year grain, use a proper Date column representing the first day of the period, or use a whole-number date key if that is your standard.

Example:

```
2026-01-01 for January 2026
2026-04-01 for Q2 2026
```

Do not store period labels only as text if they need to sort, filter, or join.

Use Performance Analyzer, DAX Studio, VertiPaq Analyzer, and model size inspection throughout development, not only when users complain about speed.

---

## 14. Use the right calculation layer

Power BI now has several calculation options. Choose the right layer based on reuse and business importance. Microsoft’s current guidance frames this as a choice among multiple options with different trade-offs.

| Calculation | Best place |
| --- | --- |
| Reusable business metric | Measure |
| Reusable row-level attribute | Source / Power Query / calculated column |
| Reusable time-intelligence logic | Calculation group |
| Visual-only calculation | Visual calculation |
| Scenario selector | Field parameter / what-if parameter |
| Data preparation logic | Source / Dataflow / Power Query |
| Enterprise metric | Semantic model measure with documentation |

Use visual calculations only when the calculation is specific to a single visual or depends on the visual structure. Do not use visual calculations for core metrics that should be reused elsewhere.

Use calculation groups when they reduce duplicated measures, especially for:

```
YTD
MTD
QTD
YoY
YoY %
Rolling 12 months
Actual vs Forecast
Currency formatting variants
```
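
Each item in such a calculation group rewrites whatever measure is in context through SELECTEDMEASURE(). Two illustrative item expressions (authored in Tabular Editor, TMDL, or Desktop’s model view):

```
-- Calculation item: YTD
TOTALYTD ( SELECTEDMEASURE (), 'Date'[Date] )

-- Calculation item: YoY %
VAR Prior =
    CALCULATE ( SELECTEDMEASURE (), DATEADD ( 'Date'[Date], -1, YEAR ) )
RETURN
    DIVIDE ( SELECTEDMEASURE () - Prior, Prior )
```

One group of items then applies to every base measure, instead of duplicating YTD and YoY variants per metric.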

Use field parameters when users need controlled flexibility, such as selecting which dimension or measure appears in a visual.
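
Behind a field parameter, Power BI generates a small DAX table. A sketch reusing measure names from earlier sections:

```
Metric Selector =
{
    ( "Sales Amount", NAMEOF ( [Sales Amount] ), 0 ),
    ( "Orders Count", NAMEOF ( [Orders Count] ), 1 ),
    ( "Margin Gross %", NAMEOF ( [Margin Gross %] ), 2 )
}
```

The three columns are display name, field reference, and sort order; a slicer on the first column then lets users swap which measure a visual shows, without any bookmark machinery.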

---

## 15. Write clean and maintainable DAX

DAX should be treated as production code.

Good practices:

```
Use VAR for repeated logic.
Break complex formulas into readable steps.
Comment complex measures.
Format code consistently.
Use table names before columns.
Do not use table names before measures.
Avoid overly clever DAX that future developers cannot maintain.
```

![DAX formatted like production code: indentation, line breaks, and readable nesting](/images/resources/08-pbi_dax-formatted-code.png)

Recommended patterns:

```
Use DIVIDE instead of /
Use SELECTEDVALUE instead of HASONEVALUE + VALUES
Use COUNTROWS when counting rows
Use Boolean filter arguments in CALCULATE where possible
Use KEEPFILTERS when existing filters must be preserved
Use time-intelligence functions when they match the business calendar
```

Example:

```
Conversion Rate =
VAR OrdersCount =
    [Orders Count]
VAR SessionsCount =
    [Sessions Count]
RETURN
    DIVIDE(OrdersCount, SessionsCount)
```

Organize measures with display folders.

Example:

```
Sales
Sales - Time Intelligence
Customers
Conversion
Forecast
Quality Checks
```

Add measure descriptions for important metrics. This is increasingly important because semantic model metadata is now used not only by users, but also by Copilot and Fabric data agents.

---

## 16. Separate semantic models from reports

Separate the data model from reports unless the semantic model is truly report-specific.

Recommended pattern:

```
Shared semantic model
→ Executive Sales Report
→ Regional Sales Report
→ Sales Manager Report
→ Mobile Sales View
```

This creates one governed model and multiple consumption experiences.

Use thin reports when possible.

A thin report should contain:

```
Pages
Visuals
Formatting
Navigation
Bookmarks
Report-level filters
Report-specific tooltips
```

The semantic model should contain:

```
Tables
Relationships
Measures
Calculation groups
Security
Descriptions
Business logic
```

This prevents metric duplication across reports.

---

## 17. Design report pages for clarity

A Power BI report should communicate clearly, not only display data.

Layout rules:

```
Align objects precisely.
Use consistent spacing.
Use consistent page structure.
Use consistent fonts.
Use consistent title styles.
Use a limited color palette.
Use whitespace deliberately.
Avoid unnecessary borders.
Use subtle shadows only when they improve separation.
```

![Report layout: aligned grid of visuals, consistent spacing, card shadows, clear page title, and neutral background](/images/resources/09-pbi_viz-alighement-shadows-white-spaces-page-title-background.png)

Remove unnecessary elements:

```
Heavy gridlines
Unnecessary borders
Redundant axis titles
Unnecessary legends
3D effects
Excessive labels
Excessive colors
Repeated information
```

![Line charts: simplify axes and gridlines, drop noisy markers, abbreviate categories, and label series directly on the lines](/images/resources/11-pbi_viz-line-chart-recommendations.png)

Use chart titles to communicate the question or insight.

Weak title:

```
Sales by Month
```

Better title:

```
Sales declined sharply after March
```

![Use color deliberately: highlight categories you can act on, de-emphasize the rest, and tie annotations to the same hues](/images/resources/10-pbi_viz-consistent-colors.png)

Use tooltips, drillthrough, bookmarks, and navigation pages to provide additional detail without overcrowding the main page.

---

## 18. Design for accessibility and mobile use

Accessibility should be part of report design, not an afterthought.

Good practices:

```
Use sufficient contrast.
Avoid using color as the only way to communicate meaning.
Add alt text where appropriate.
Use readable font sizes.
Check tab order.
Support keyboard navigation.
Avoid cluttered pages.
Use clear visual titles.
```

Microsoft’s accessibility guidance highlights report-author responsibilities such as keyboard navigation, screen reader support, high-contrast compatibility, and alternative text.

Create mobile layouts when users will consume the report on phones or tablets. Do not assume the desktop layout will work on mobile.

---

## 19. Prepare semantic models for Copilot and AI

Power BI models are now consumed by humans and AI systems. Copilot, Fabric data agents, and natural language query experiences depend heavily on semantic model quality.

Prepare the model for AI by improving:

```
Table names
Column names
Measure names
Descriptions
Synonyms
Relationships
Data categories
Hidden fields
Certified metrics
AI instructions
Verified answers
```

Microsoft’s Fabric data agent guidance states that AI response quality depends heavily on how well data sources are prepared, and that semantic-model querying relies on metadata, schema information, synonyms, min/max values, visual metadata, and Prep for AI configuration.

AI-ready semantic model checklist:

```
Use business-friendly names.
Hide technical fields.
Remove unused columns.
Avoid ambiguous relationships.
Use clear measure descriptions.
Use star schema where possible.
Define synonyms for business terminology.
Add AI instructions where needed.
Test common business questions.
Validate AI-generated answers.
```

Do not expect Copilot to fix a weak semantic model. If users cannot understand the model, AI will likely struggle too.

---

## 20. Use deployment pipelines and Git for controlled delivery

Choose the deployment approach based on project complexity.

### One-workspace approach

Acceptable only for simple projects.

```
Workspace = development/test
App = production distribution
```

### Two-workspace approach

Good for small and medium projects.

```
Development workspace
Production workspace + App
```

### Three-workspace approach

Recommended for controlled delivery.

```
Development workspace
Test / UAT workspace + Test App
Production workspace + Production App
```

### Deployment pipelines

Recommended for larger projects, multiple developers, enterprise governance, or formal release cycles.

Microsoft Fabric deployment pipelines support controlled movement of content across stages and support important semantic model features such as incremental refresh and composite models, with specific rules and limitations.

Deployment pipeline best practices:

```
Use Dev, Test, and Prod stages.
Validate in Test before Production.
Use deployment rules for environment-specific settings.
Keep workspace names and item names consistent.
Document ownership and promotion process.
Test RLS and security in each stage.
Use sample data in development for large models where possible.
Avoid manual changes directly in Production.
```

For Power BI Apps, use Apps as the consumption layer for business users and workspaces as the development/collaboration layer.

---

## 21. Treat Dataflow Gen2 as part of the CI/CD lifecycle

Older Power BI development standards often treated dataflows separately from deployment pipelines. That guidance is now outdated.

Dataflow Gen2 now supports Git integration and deployment pipelines, and Microsoft documents CI/CD and ALM patterns for Dataflow Gen2.

However, Dataflow Gen2 CI/CD still requires discipline.

Recommended practices:

```
Use environment-aware configuration.
Avoid hard-coded workspace IDs.
Use parameters, variable libraries, or relative references.
Validate refresh behavior after deployment.
Document source and destination mapping.
Monitor refresh history.
Understand staging lakehouse behavior.
```

Known operational caveat: Microsoft documents limitations and known issues, including cases where synced or deployed Dataflow Gen2 items may need to be opened and saved manually or published via API so changes are used during refresh.

---

## 22. Document every production solution

A Power BI solution is incomplete without documentation.

Minimum documentation:

```
Purpose of the report
Business owner
Technical owner
Target audience
Data sources
Refresh frequency
Semantic model description
Key tables
Key relationships
Key measures
Security rules
Page descriptions
Visual-level explanations for complex visuals
Known limitations
Version history
Deployment process
Support process
```

For important measures, document:

```
Business definition
DAX expression
Filters applied
Aggregation behavior
Known exclusions
Validation source
Owner
Last validation date
```

For semantic models, document:

```
Grain of fact tables
Role of dimensions
Relationship logic
Many-to-many handling
RLS / OLS behavior
Calculation groups
AI / Copilot instructions
Certified metrics
```

Documentation should help future developers, business users, support teams, and AI systems understand the solution without reverse-engineering the PBIX.

---

# Practical development checklist

## Before development

```
Business questions are documented.
KPIs are defined.
Users and access groups are identified.
Data sources are confirmed.
Refresh expectations are clear.
Design requirements are known.
MVP approach is agreed.
Workspace structure is defined.
```

## Before modeling

```
Fact and dimension tables are identified.
Grain is clear.
Star schema is possible.
Many-to-many cases are understood.
Date logic is defined.
Security impact is understood.
Storage mode is selected.
Large-table strategy is defined.
```

## Before report build

```
Theme is ready.
Page templates are ready.
Core measures are created.
Fields are renamed for business users.
Technical columns are hidden.
Descriptions are added.
Relationships are tested.
RLS is tested if applicable.
```

## Before deployment

```
Report performance is tested.
Refresh is tested.
Security is validated.
Measures are reconciled.
Pages are reviewed with users.
Accessibility is checked.
Mobile layout is created if needed.
Documentation is updated.
Deployment path is defined.
```

## Before enabling Copilot / AI use

```
Model uses clear business names.
Important fields have descriptions.
Ambiguous fields are hidden or renamed.
Synonyms are added where useful.
Core metrics are documented.
AI instructions are configured where needed.
Typical questions are tested.
Answers are validated against known reports.
```

---


# Core rules

```
Start with business requirements, not visuals.
Plan workspaces, environments, security, and ownership before building.
Use an MVP to validate direction early.
Prefer star schema semantic models.
Separate semantic models from reports where possible.
Use explicit measures and hide raw numeric columns.
Use a governed Date table.
Use single-direction relationships by default.
Avoid direct fact-to-fact logic.
Use bridge tables for many-to-many relationships.
Choose Import, DirectQuery, Direct Lake, Composite, Hybrid, or Live connection intentionally.
Push reusable transformations upstream.
Use PBIP, PBIR, TMDL, Git, and deployment pipelines for professional development.
Prepare semantic models for Copilot and AI.
Design reports for clarity, accessibility, and mobile consumption.
Document the solution as a product.
```
