Pillar 4 — Core Expertise

Business Intelligence
The dashboard your CEO opens every morning

Not just another dashboard. A decision-making tool built from the decisions you make — not the data you have. That's the difference between informing and driving action.

6 weeks to first dashboard in production
90+ years of combined experience in data & BI
€0 for the 5-day diagnostic
Your dashboards arrive too late — or don't exist?

Our free 5-day diagnostic maps your data sources, measures your decision-making maturity (Data Readiness Index™), identifies the 3 dashboards that will have the most impact and delivers a costed action plan.

Request free diagnostic

Our approach: Decision-First Design

The industry starts from data and works up to decisions. We do the opposite. We start with what the executive needs to decide, identify the minimum data required, and build only what will be used.

01

We start with the decision

"What decision do you make most often with the least visibility?" That's the first question. Not "what KPIs do you want?" — that generates endless lists.

02

We contractualize the data

Each decision generates a "data contract": the minimal list of measures, not an exhaustive one. 45 targeted measures are worth more than 340 delivered out of habit. The model is lighter, faster, and more widely used.

03

We design for action

The CEO sees green/amber/red in 3 seconds. Click, understand, decide. In 30 seconds. The dashboard isn't a consultation tool — it's a daily management reflex.
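As a minimal illustration (in Python, with hypothetical thresholds), the green/amber/red logic boils down to comparing each KPI to its target:

```python
def rag_status(actual: float, target: float, amber_band: float = 0.10) -> str:
    """Map a KPI to green/amber/red: green on or above target, amber
    within 10% below target, red beyond that. Thresholds are illustrative."""
    if actual >= target:
        return "green"
    if actual >= target * (1 - amber_band):
        return "amber"
    return "red"

print(rag_status(100, 100))  # green
print(rag_status(95, 100))   # amber
print(rag_status(80, 100))   # red
```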

Read our full article on this approach →
Exclusive NJIADATA Methodology

Three measurement instruments.
One goal: keeping the dashboard alive.

Any firm can build a dashboard. The real question is: will it still be used in 12 months? We've formalized three proprietary tools to measure, guarantee, and maintain the value of every BI project — before, during, and after.

Before · Diagnostic (Week 1)
Data Readiness Index
NJIADATA™

Is your organization ready for a BI project? We assess 5 dimensions during the free diagnostic. The result determines the recommended engagement — and tells us honestly whether the project has a chance of succeeding.

Sources /5
Accessibility /5
Skills /5
Sponsorship /5
Decision clarity /5
During · Delivery (Week 6)
Dashboard Quality Scorecard
NJIADATA™

No dashboard goes to production without validating these 5 criteria. It's our quality commitment — if any criterion fails, we stay an extra week. At our cost.

Performance
Model
Security
Reconciliation
Autonomy
After · Quarterly check-ups
Vitality Score
NJIADATA™

Health score out of 100, measured quarterly. Above 80, the dashboard is alive. Below 50, it dies — and we step in before it's too late.

Adoption /20
Freshness /20
Autonomy /20
Decision /20
Evolution /20
80+ Alive
50–79 Alert
<50 Danger
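As an illustration, the Vitality Score arithmetic can be sketched in Python (component names and bands as described above; the real scoring rubric is richer than this):

```python
def vitality_score(adoption, freshness, autonomy, decision, evolution):
    """Five components, each scored out of 20; total out of 100."""
    components = (adoption, freshness, autonomy, decision, evolution)
    if not all(0 <= c <= 20 for c in components):
        raise ValueError("each component is scored out of 20")
    return sum(components)

def vitality_band(score: int) -> str:
    """80+ the dashboard is alive; 50-79 alert; below 50 danger."""
    if score >= 80:
        return "alive"
    if score >= 50:
        return "alert"
    return "danger"

score = vitality_score(18, 16, 15, 17, 14)
print(score, vitality_band(score))  # 80 alive
```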
DIAGNOSTIC
DELIVERY
12+ MONTH FOLLOW-UP

What you experience, week by week

Not the method from the consultant's perspective — the journey from the client's. Here's exactly what happens when you work with us, what it requires of you, and when you see the first result.

Before
1 hour

The executive interview

We sit down with the final decision-maker — CEO, Minister, Secretary General. No PowerPoint. No questionnaire. A one-hour conversation about the decisions they make each week under uncertainty. This conversation determines the entire project.

Client side: Block 1 hour in the decision-maker's diary. No preparation needed — we ask the questions.
Week 1
5 days · Free

The diagnostic: your data, your reality

We open your Excel files, your ERP, your Access databases. We count empty cells, duplicates, inconsistent formats. We don't produce a 40-page audit — we produce a clickable mockup of your future dashboard, built on your real data. By the end of this week, you can see what your management tool will look like.

Client side: Give us access to the relevant files and systems. Appoint a technical contact (CIO or ERP administrator) available 30 min/day for access queries. The decision-maker is not involved this week.
Weeks 2–3
10 days

The pipeline: from raw source to reliable figure

We build the data flow: source → cleansing → model → dashboard. The key moment arrives when the dashboard figure matches what your CFO calculates manually in Excel. This reconciliation test is decisive — it builds trust in the system. Until the CFO says 'yes, that's the right number,' we don't move on.

Client side: Progress update on Friday (30 min). The CFO or management controller verifies that the dashboard figures match their reality. If there's a discrepancy, we correct it the following week.
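A sketch of the reconciliation idea in Python (the tolerance value is a hypothetical example, not our contractual threshold):

```python
def reconciles(dashboard_total: float, cfo_total: float,
               tolerance: float = 0.005) -> bool:
    """True when the relative gap between the dashboard figure and the
    CFO's manual figure is within tolerance (0.5% in this sketch)."""
    if cfo_total == 0:
        return dashboard_total == 0
    return abs(dashboard_total - cfo_total) / abs(cfo_total) <= tolerance

print(reconciles(512_340_000, 512_000_000))  # True: within 0.5%
print(reconciles(530_000_000, 512_000_000))  # False: investigate
```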
Weeks 4–5
10 days

Co-building: with the business, not for the business

This is the most intense phase on the client side — and the most decisive. Every evening, we publish a dashboard version. Every morning, the business referent says what resonates and what doesn't. 3D charts disappear. 47 filters become 5. BI jargon headings become business language. In 10 days, the dashboard goes from 'technically correct' to 'I understand everything in 3 seconds.'

Client side: A business referent (not the CIO — someone from the user department) available 1–2 hours per day to iterate with us. This person ensures the dashboard speaks the language of the business.
Week 6
5 days

The autonomy test: the real measure of success

The business referent must, alone, without help, add a new visual to the dashboard, modify a filter, and explain a figure to their director. If they succeed, the dashboard is officially alive. If not, we stay an extra week — at our cost. Our contract doesn't specify a number of days. It specifies this outcome.

Client side: 2 referents (not 1) pass the autonomy test. Why 2? Because in any organization, the first person trained will be transferred, promoted, or leave. Two referents means organizational resilience.
After
12 months

The anchoring protocol: what keeps the dashboard alive

The biggest risk isn't technical failure — it's slow death. The dashboard works, but gradually nobody updates it. Our protocol includes a logbook (who uses what, when, for which decision), quarterly check-ups (half a day to recalibrate the dashboard as needs evolve), and a 12-month vitality test on three indicators: login frequency, number of refreshes, modifications by the referent.

Client side: Half a day per quarter for the check-up (4 times/year in the first year, included in the engagement). That's it.
Read our detailed article on the sprint method →

Three engagement models, same level of excellence

Every organization has a different starting point. Our models adapt — but the Decision-First method and knowledge transfer are non-negotiable.

Diagnostic
5 days
Free
No commitment
  • Executive interview (1h)
  • Source mapping
  • Data quality assessment
  • Clickable dashboard mockup
  • Costed recommendations
Request →
Programme
3–6 months
Quoted after diagnostic
Based on scope and number of dashboards
  • 3 to 5 dashboards in production
  • Microsoft Fabric deployment
  • Unified semantic model
  • Multi-profile Row-Level Security
  • Internal BI team training
  • Data governance
Let's talk →
Maintenance
Annual
Annual package
Adapted to your dashboard portfolio size
  • Quarterly check-up (half day)
  • Dashboard recalibration
  • Priority support
  • Annual vitality test
  • DAX measure evolution
  • Ongoing referent training
Learn more →

What it changes in practice

Eight real situations. On the left, daily work without a decision dashboard. On the right, daily work with one.

CFO
Consolidates accounts from 8 subsidiaries over 3 weeks
Automatic consolidation in 3 minutes
CEO
Calls the sales director every Monday for the figures
The figures are on their phone at 7am
Supply Chain Manager
Discovers stockouts when the customer complains
Automatic alert 48h before stockout
HR Director
Produces the age pyramid once a year for the annual report
Real-time visualization with 3-year projection
Sales Director
Compares sales team performance in a 14-tab Excel file
Dynamic ranking, targets vs actual, in 1 click
Management Controller
Spends 2 days preparing the management committee
The dashboard is the committee support material
Quality Manager
Analyzes scrap rates at month-end with a 30-day delay
Daily monitoring with automatic alert thresholds
Secretary General (ministry)
Requests budget execution rates by internal memo
Consolidated dashboard for 60 posts in real time
Technical depth

Data architecture: from Lakehouse to Semantic Model

A dashboard's power depends on what lies beneath. Here's the architecture we deploy — and why each layer exists.

Medallion architecture on Microsoft Fabric
  • BRONZE: raw data (Excel, ERP, CSV, APIs), ingested via Data Factory, stored as Delta/Parquet
  • SILVER: cleaned data (deduplication, typing), Spark notebooks, built-in quality tests
  • GOLD: semantic model (star schema: facts + dimensions), documented DAX measures; the data contract is the Gold layer
  • DASHBOARD: Power BI with Direct Lake, automatic refresh
  • OneLake: unified storage, a single copy of the data
Option 1

Lakehouse

Semi-structured and unstructured data storage in Delta/Parquet format. Ideal when sources are heterogeneous (Excel + APIs + flat files). Spark notebooks enable complex transformations — multi-source joins, advanced cleansing, aggregations.

Our default recommendation for projects with 3+ heterogeneous data sources. Compatible with Direct Lake for import-free querying.
Option 2

Warehouse

Traditional SQL data warehouse on Fabric. Standard T-SQL queries, materialized views, stored procedures. Ideal when the IT team has strong SQL skills and sources are already structured.

Recommended when the organization has existing SQL expertise or when migrating from an existing data warehouse is the primary goal.

The Semantic Model — the Gold layer — is the contract between data and the dashboard. It's a star schema (fact tables + dimension tables) with named, documented, and versioned DAX measures. This layer ensures that 'revenue' means the same thing for the CFO, the sales director, and the CEO. Without a shared Semantic Model, each department calculates differently — and management committees become debates about numbers instead of debates about decisions.
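To make the idea concrete, here is a toy star-schema sketch in Python (table contents are invented): one fact table, one dimension table, and a single shared revenue definition that every view reuses:

```python
# Invented toy data: one fact table and one dimension table.
fact_sales = [
    {"product_id": 1, "region_id": 10, "amount": 500.0},
    {"product_id": 2, "region_id": 10, "amount": 300.0},
    {"product_id": 1, "region_id": 20, "amount": 200.0},
]
dim_region = {10: "Abidjan", 20: "Bouaké"}

def revenue(rows, region=None):
    """The single shared definition of revenue, optionally filtered
    by region. Every department calls this; nobody recomputes it."""
    return sum(r["amount"] for r in rows
               if region is None or dim_region[r["region_id"]] == region)

print(revenue(fact_sales))                   # 1000.0 (group view)
print(revenue(fact_sales, region="Bouaké"))  # 200.0 (subsidiary view)
```

Because every caller goes through the same measure, "revenue" cannot drift between departments — which is exactly what a versioned DAX measure guarantees in the Gold layer.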

Migration to Fabric: no big bang, a controlled trajectory

You have data in Sage, Excel files, a SQL server, maybe an aging data warehouse. Here's how we take you to Microsoft Fabric without disrupting your operations.

Phase 1
Wk 1–2

Source inventory & mapping

We map all your data sources: ERP (Sage, SAARI, Business Central), SQL databases, Excel files, third-party APIs. Each source is evaluated: quality, volume, frequency, criticality. The Data Readiness Index™ measures your maturity across 5 dimensions. Deliverable: data diagnostic report + prioritized migration roadmap.

Phase 2
Wk 3–6

Ingestion & Lakehouse construction

We create your Fabric workspace and configure ingestion pipelines via Dataflow Gen2 and Data Factory. Each source is connected, raw data lands in the Lakehouse Bronze layer. Cleansing and transformation produce Silver (clean data) then Gold (semantic model) layers. Unified storage in OneLake — one single copy of each data point, accessible everywhere.
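A simplified sketch of a Bronze-to-Silver step in plain Python (field names and rules are illustrative; in practice this transformation runs in Spark notebooks on Fabric):

```python
import datetime

# Invented Bronze rows: everything lands as strings, one duplicate.
bronze = [
    {"order_id": "1001", "amount": "250.50", "date": "2024-01-15"},
    {"order_id": "1001", "amount": "250.50", "date": "2024-01-15"},
    {"order_id": "1002", "amount": "99.90",  "date": "2024-01-16"},
]

def to_silver(rows):
    """Deduplicate on the business key and enforce types."""
    seen, silver = set(), []
    for r in rows:
        if r["order_id"] in seen:   # drop duplicates
            continue
        seen.add(r["order_id"])
        silver.append({
            "order_id": int(r["order_id"]),
            "amount": float(r["amount"]),
            "date": datetime.date.fromisoformat(r["date"]),
        })
    return silver

silver = to_silver(bronze)
print(len(silver))  # 2 rows: the duplicate is gone
```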

Phase 3
Wk 7–8

Orchestration & automation

Pipelines are orchestrated: scheduled refresh (daily, hourly or real-time depending on source), data-driven alerts (thresholds exceeded, anomalies detected), automatic report distribution via email or Teams. Power Automate triggers business notifications. Fabric monitoring tracks CU consumption and optimizes costs. Deliverable: operational, autonomous, monitored data platform.

After migration: all your data in one Lakehouse. No more servers to maintain. Predictable costs. And most importantly: your data is ready for dashboards, automation and governance. Migration isn't the end — it's the foundation everything else builds on.

Power BI Governance: from prototype to industrialized system

A dashboard without governance dies in 6 months. Here's how we structure the complete lifecycle, from development to production.

Development

DEV Workspace
Free iterations
Test data

Test

TEST Workspace
Business validation
Filtered real data

Production

PROD Workspace
Configured audiences
RLS active · SLA defined

Development cycle

Desktop → Service: the right path

DAX development and visual design happen in Power BI Desktop. Publishing targets the DEV workspace. Deployment Pipelines automate DEV → TEST → PROD promotion with automatic data source switching. No report reaches production without business validation.

Distribution

Applications & audiences

Production reports are distributed via Power BI Apps — not through direct workspace sharing. Each App targets a defined audience (executive leadership, finance, operations) with its own access rights and navigation.

Quality

Certification & endorsement

Validated datasets are certified (visible badge in the catalog). Only trained referents can certify. Uncertified datasets remain accessible but without a quality guarantee — users know exactly what they're using.

Refresh SLA: we define an explicit SLA with the client for each dashboard. Example: 'financial data is updated every business day at 7:00am.' This SLA is monitored via the Vitality Score and included in the quarterly check-up.
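The SLA check itself is simple; a Python sketch (7:00am deadline as in the example above):

```python
import datetime

def sla_met(last_refresh: datetime.datetime, deadline_hour: int = 7) -> bool:
    """True if the last successful refresh landed at or before the
    same-day deadline (7:00am in the example SLA above)."""
    deadline = last_refresh.replace(hour=deadline_hour, minute=0,
                                    second=0, microsecond=0)
    return last_refresh <= deadline

print(sla_met(datetime.datetime(2024, 3, 4, 6, 45)))  # True
print(sla_met(datetime.datetime(2024, 3, 4, 9, 30)))  # False
```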

Import, DirectQuery, Direct Lake: the right mode for the right need

Storage mode choice determines performance, data freshness, and cost. Here's the comparison we use to decide with our clients.

Our default choice

Import

Data is copied into the in-memory VertiPaq engine. Every query is instantaneous.

Performance
Excellent
Freshness
Scheduled — up to 48×/day
Full DAX
All features
Key limitations

Max size: 1 GB (Pro), 100 GB (PPU), 400 GB (Fabric). Full refresh can be slow on large volumes — incremental refresh solves this.

When to use

Stable sources, moderate volumes, maximum performance and full DAX features required. Default mode for the majority of our sprints.

The future on Fabric

Direct Lake

Direct reading of Delta files in OneLake. Performance close to Import, near real-time freshness.

Performance
Fast (VertiPaq engine)
Freshness
Near real-time — auto
Full DAX
Partial (see limitations)
Key limitations

Fabric exclusive. No calculated columns or tables. Automatic fallback to DirectQuery if the model uses SQL views or RLS on the SQL endpoint. All tables must be in the same Lakehouse.

When to use

Large volumes already in Fabric/OneLake, freshness needed without managing refreshes. Best compromise when Fabric infrastructure is in place.

Specific cases

DirectQuery

Queries are sent in real time to the source. No data copy in Power BI.

Performance
Variable — depends on source
Freshness
Absolute real-time
Full DAX
Limited — no auto hierarchies
Key limitations

Performance entirely dependent on source database optimization. Load on source system with every user interaction. Reduced DAX features. No automatic date hierarchies.

When to use

Very large volumes (hundreds of GB), continuously changing data (IoT, transactions), or when regulations prohibit data copying. Reserved for cases where no alternative is viable.

Our recommendation: for a first sprint, we use Import — it's the simplest, fastest, and most compatible with all DAX features. Migration to Direct Lake or DirectQuery happens when volumes or freshness needs justify it.
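The decision rubric can be summarized as a small Python function (the cut-offs are simplified illustrations of the criteria above, not hard rules):

```python
def storage_mode(dataset_gb: float, realtime: bool, on_fabric: bool) -> str:
    """Simplified rubric; the real decision weighs more factors
    (DAX feature needs, regulation, source performance)."""
    if realtime and not on_fabric:
        return "DirectQuery"   # absolute real-time outside Fabric
    if on_fabric and (dataset_gb > 10 or realtime):
        return "Direct Lake"   # volumes already in OneLake
    return "Import"            # default: fastest, full DAX

print(storage_mode(0.5, realtime=False, on_fabric=False))  # Import
print(storage_mode(50, realtime=False, on_fabric=True))    # Direct Lake
print(storage_mode(200, realtime=True, on_fabric=False))   # DirectQuery
```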

Security & compliance: each user sees exactly what they should see

A dashboard shared with 60 users doesn't mean everyone sees the same thing. Here are the security mechanisms we deploy on every project.

One report · Three different views · Server-side security
[Diagram: one Power BI report, one semantic model, security applied server-side (RLS + OLS). The Bouaké director sees only Bouaké data (RLS zone filter, margin hidden). The sales director sees all geographic zones, but the net margin column is hidden (OLS). The CEO sees everything: all zones, all columns, no restrictions.]
Row filtering

Row-Level Security (RLS)

The director of the Bouaké subsidiary sees Bouaké's data. The general manager sees every subsidiary. Same report, same DAX measures, but the data is filtered automatically according to the user's profile. RLS is defined in the semantic model and applied server-side: the user cannot bypass the filter.
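A toy Python simulation of the principle (data and profiles invented; in Power BI the filter is a DAX rule in the semantic model, enforced by the service):

```python
# Invented data and profiles.
sales = [
    {"zone": "Bouaké",  "revenue": 245},
    {"zone": "Abidjan", "revenue": 512},
]
rls_rules = {"dir_bouake": {"Bouaké"}, "ceo": None}  # None = no restriction

def visible_rows(user: str):
    """Filter applied before anything is rendered; the user never
    receives the other rows, so the filter cannot be bypassed client-side."""
    allowed = rls_rules[user]
    return [r for r in sales if allowed is None or r["zone"] in allowed]

print([r["zone"] for r in visible_rows("dir_bouake")])  # ['Bouaké']
print(len(visible_rows("ceo")))                         # 2
```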

Column masking

Object-Level Security (OLS)

The sales manager sees revenue but not net margin. The CFO sees both. OLS hides entire columns or measures — they're invisible, not just hidden by a filter.

Classification

Sensitivity Labels (Microsoft Purview)

Every dataset and report receives a sensitivity label: Public, Internal, Confidential, Highly Confidential. The label determines export rights (no PDF for 'Highly Confidential'). Native integration with Microsoft Purview and Azure Information Protection.

Compliance

GDPR, data sovereignty & audit

For European clients and international organizations: Fabric capacities are deployed in the region of your choice (Europe, South Africa, or other). Data never leaves that region. Every access is logged (who, when, what, from which IP). Logs are exploitable via the Power BI API and integrable into your SIEM.

Performance & optimization: what makes the difference between 2 seconds and 20

A slow dashboard is an abandoned dashboard. Here are the optimization practices we systematically apply — and the tools we use to measure.

[Diagram: star schema (recommended): each dimension (Products, Customers, Time, Regions) joins the Sales fact directly; under 1 second, 4 joins. Snowflake schema (avoid): dimensions chain off each other (Products → Category, Customers → Segments, Time → Months, Regions → Country); 5–15 seconds, 8 chained joins.]

Strict star schema

Fact tables at the center, dimension tables around them, unidirectional relationships. No snowflake. No bidirectional relationships unless documented. Every deviation is a measurable performance degradation — we enforce this discipline from design.

Optimized DAX: CALCULATE vs iterators

Iterative measures (SUMX, FILTER) scan every row — acceptable on 10,000 rows, catastrophic on 10 million. We favor aggregates (CALCULATE, SUM) whenever possible.
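The same trade-off can be shown in plain Python: a per-query row scan versus a one-pass aggregate answered by lookup (data invented):

```python
from collections import defaultdict

# Invented dataset: 100,000 rows across 4 regions.
rows = [{"region": f"R{i % 4}", "amount": float(i)} for i in range(100_000)]

# Iterator style (what SUMX/FILTER amounts to at scale):
# every query rescans all 100,000 rows.
def revenue_iter(region: str) -> float:
    return sum(r["amount"] for r in rows if r["region"] == region)

# Aggregate style: one pass builds a small summary;
# each subsequent query is a dictionary lookup.
summary = defaultdict(float)
for r in rows:
    summary[r["region"]] += r["amount"]

# Both give the same number; only the cost per query differs.
assert revenue_iter("R1") == summary["R1"]
print(summary["R1"])
```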

Incremental refresh

On a 3-year dataset, only the last 30 days are refreshed each cycle. Historical data is partitioned and doesn't move. Result: refresh drops from 45 minutes to 3 minutes.
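A Python sketch of the partitioning logic (monthly partitions, 30-day refresh window; dates are illustrative):

```python
import datetime

# Monthly partitions over 3 years; "today" fixed for the illustration.
today = datetime.date(2024, 6, 15)
partitions = [datetime.date(year, month, 1)
              for year in (2022, 2023, 2024) for month in range(1, 13)]

# Only partitions overlapping the last 30 days are reloaded each cycle;
# older partitions stay untouched on disk.
cutoff = (today - datetime.timedelta(days=30)).replace(day=1)
to_refresh = [p for p in partitions if cutoff <= p <= today]

print(f"{len(partitions)} partitions, {len(to_refresh)} refreshed per cycle")
```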

Diagnostic tools

Performance Analyzer in Power BI Desktop measures each visual's render time — if a chart takes more than 2 seconds, we diagnose it. DAX Studio identifies the most expensive measures in memory and CPU. Both tools are used in every sprint.

Our standard: no visual should exceed 3 seconds of load time on a standard connection. That's the threshold beyond which the user goes back to Excel.

Licensing, capacity & monitoring: the complete guide for the CIO

Choosing the right Power BI license is an architectural decision, not an administrative one. Wrong choice = thousands of dollars in excess costs or critical features locked out. This section gives you everything needed to decide — without looking elsewhere.

Four licenses, four profiles

In April 2025, Microsoft raised Power BI prices by 40% (Pro) and 20% (PPU). Premium P-SKUs are retired for new customers, replaced by Fabric F-SKUs. Here's the current landscape.

Individual exploration

Free

Report creation in Desktop. Limited publishing. No sharing or collaboration.

$0 /month
Refresh
8×/day
Dataset
1 GB max
Sharing
None
What's locked

No sharing, no collaboration, no shared workspaces. Can view reports if hosted on F64+ or Premium capacity.

Who is it for

Individual user exploring data locally, or viewer in an organization that already has Fabric F64+ capacity.

Standard collaboration

Power BI Pro

Standard license to create, publish, and share reports. All creators AND viewers must have a Pro license.

$14 /user/month
Included in Microsoft 365 E5
Refresh
8×/day
Dataset
1 GB max
Sharing
Between Pro users
What's locked

No Deployment Pipelines (DEV→TEST→PROD), no paginated reports, no XMLA read/write, no AI/Copilot, no Direct Lake, datasets limited to 1 GB. Every viewer must also pay for a Pro license.

Who is it for

Team of 5–50 people creating and viewing reports. Simple sources (Excel, single ERP), moderate volumes. This is the starting point for our sprints.

Power users & builders

Premium Per User

All Premium features, billed per user. Ideal for advanced data teams without investing in dedicated capacity.

$24 /user/month
or $14/month add-on if already Pro
Refresh
48×/day
Dataset
100 GB max
Sharing
Between PPU users only
Key limit

PPU content can only be shared with other PPU users. To share with Pro or Free users, content must be on Fabric F64+ capacity. No free viewers in PPU-only mode.

What it unlocks vs Pro

Deployment Pipelines (DEV→TEST→PROD), paginated reports, XMLA read/write, Dataflows Gen2, AI/Copilot, 100 GB datasets, 48 refreshes/day. Cost-effective for teams of 5–250 power users needing these advanced features.

Fabric Capacity (F-SKU): the enterprise model

Since July 2024, Premium P-SKUs have been retired for new customers. Fabric (F-SKU) is the only capacity model available. The principle: you pay for shared compute power (Capacity Units), not per-user licenses. Power BI, Data Factory, Spark, Real-Time Analytics — everything consumes CUs from the same pool.

Content creators still need a Pro license ($14/month) to publish. But viewers can access reports for free if capacity is F64 or higher.

SKU     CU    PAYG price        1-year reserved    Use case
F2      2     ~$263/month       ~$156/month        Development, testing, POC
F4      4     ~$526/month       ~$313/month        Small team, 1–2 dashboards
F8      8     ~$1,051/month     ~$626/month        SME, 3–5 dashboards
F16     16    ~$2,102/month     ~$1,251/month      Department, medium datasets
F32     32    ~$4,205/month     ~$2,502/month      Multi-department, Spark
F64 ★   64    ~$8,409/month     ~$5,004/month      Free viewers · Enterprise
F128    128   ~$16,819/month    ~$10,008/month     Large enterprise, heavy workloads
Economic calculation

Is F64 cost-effective for your organization?

Scenario A · Pro for everyone
60 users × $14/month × 12
$10,080/year
Everyone creates and views. 8 refreshes/day, 1 GB max per dataset. No Deployment Pipelines, no paginated reports, no Direct Lake.
Scenario B · Fabric F64
F64 reserved 1 year + 5 Pro creators
$60,888/year
5 Pro creators ($840/year) + F64 reserved 1 year ($60,048/year). 55 free viewers. 48 refreshes/day, 400 GB datasets, Deployment Pipelines, Direct Lake, AI/Copilot, paginated reports, 64 TB Mirroring storage included.
Scenario C · PPU builders + Pro viewers
5 PPU + 55 Pro
$10,680/year
5 PPU builders ($24 × 12 = $1,440/year) + 55 Pro viewers ($14 × 12 = $9,240/year). Builders get advanced features (48 refresh, 100 GB, Deployment Pipelines), viewers stay on Pro. No free viewers — every viewer must pay. No full-scale Direct Lake (PPU only).

Verdict: for 60 users, the Pro (A) or mixed PPU (C) scenario is most cost-effective at around $10,000/year. F64 at ~$60,000/year (reserved) only becomes worthwhile when viewer count exceeds ~350 or when Fabric features (Direct Lake, data pipelines, Spark, 400 GB+ volumes) are essential. The threshold isn't user count alone — it's the combination of volume + features + viewer count.
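The three scenarios above, reduced to arithmetic in Python (prices as quoted in this section; check current Microsoft pricing before deciding):

```python
# Prices in USD as quoted in this section (April 2025 list prices).
PRO_MONTHLY, PPU_MONTHLY = 14, 24
F64_RESERVED_PER_YEAR = 60_048

scenario_a = 60 * PRO_MONTHLY * 12                          # Pro for everyone
scenario_b = 5 * PRO_MONTHLY * 12 + F64_RESERVED_PER_YEAR   # 5 creators + F64
scenario_c = 5 * PPU_MONTHLY * 12 + 55 * PRO_MONTHLY * 12   # PPU + Pro viewers

print(scenario_a)  # 10080
print(scenario_b)  # 60888
print(scenario_c)  # 10680
```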

What Fabric/Premium unlocks in practice

Beyond price, the real question is: which features will your organization actually use? Here's what's locked in Pro and unlocked in PPU or Fabric.

Refresh & performance

48 refreshes/day vs 8

In Pro, you're limited to 8 refreshes per day. For a financial dashboard continuously updated by the ERP, that's insufficient. In PPU/Fabric, 48 refreshes/day — one every 30 minutes. For near real-time, Direct Lake (Fabric only) eliminates the need for refresh by reading Delta files directly from OneLake.

Industrialization

Deployment Pipelines

Automated DEV → TEST → PROD promotion with automatic data source switching per environment. Impossible in Pro — deployments are manual, meaning human errors and no formalized business validation. This is the foundation of our governance (section above).

Volumes

100 GB (PPU) / 400 GB (Fabric)

In Pro, each dataset is limited to 1 GB. To consolidate finance + supply chain + HR in a single semantic model, 1 GB is quickly reached. PPU pushes to 100 GB, Fabric to 400 GB per dataset. Multi-domain consolidation requires this headroom.

Paginated reports

Pixel-perfect reports

Paginated reports (SSRS format) are essential for standardized financial statements, purchase orders, and regulatory reports. They generate identically formatted PDFs regardless of data size. Locked in Pro, available from PPU.

XMLA read/write

Advanced model administration

The XMLA endpoint allows administering the semantic model from third-party tools (Tabular Editor, ALM Toolkit, DAX Studio in write mode). Essential for automated deployment, migration scripts, and detailed model auditing. Pro provides no XMLA access.

AI & Copilot

Integrated artificial intelligence

Copilot in Power BI enables creating visuals, writing DAX, and interpreting trends in natural language. AutoML and Cognitive Services are also available. These AI features are only accessible in PPU or Fabric — locked in Pro. Microsoft justified the 2025 price increase by this AI investment.

Our recommendation

The trajectory we propose

Phase 1 · Sprint
Power BI Pro for everyone. Simple, fast, cost-effective. Proof of value in 6 weeks. No need for Fabric for a first dashboard.
Phase 2 · Industrialization
PPU for 3–5 creators. Deployment Pipelines, paginated reports, 48 refreshes/day. Viewers stay on Pro. Moderate investment, critical features unlocked.
Phase 3 · Scale
Fabric F-SKU when volumes exceed 1 GB, viewers exceed 250, or the organization needs Direct Lake, Spark, and Data Factory. Capacity sizing done during the diagnostic.

Monitoring & cost optimization

Monitoring

Fabric Capacity Metrics App

Official Power BI app displaying real-time CU consumption by workload (Power BI, Data Factory, Spark). Detects consumption peaks, over-consuming refreshes, and queries saturating capacity. Configured from deployment.

Optimization

Pause, scaling & reservation

In PAYG, capacity can be paused (nights, weekends) and scaling up/down adapts to load. 1-year reservation saves ~40% vs PAYG — F64 drops from ~$8,409 to ~$5,004/month. We size capacity during the diagnostic.

Protection

Autoscaling & surge protection

When capacity is saturated, background tasks are slowed to preserve report interactivity. The 'capacity overage' option pays for excess rather than suffering slowdowns. Alerts configured so the CIO is notified before users are impacted.

Official Microsoft tools to estimate your costs

Azure Pricing Calculator

Estimate your monthly costs for any combination of Azure services, including Fabric.

azure.microsoft.com/pricing/calculator →
Fabric Capacity Estimator

Enter your workloads (Power BI, Spark, Data Factory) and get the recommended F-SKU with CU breakdown.

microsoft.com/fabric/capacity-estimator →
Free 60-day trial

Microsoft offers a Fabric F64 trial for 60 days — enough to test real workloads and measure consumption before purchasing.

Start free trial →

Note: prices above are based on the US West 2 region (~$0.18/CU/hour). Prices vary ±10–15% by Azure region (West Europe: ~$0.22/CU/hour). Use the Azure calculator with your region for precise pricing.

NJIADATA does not sell Microsoft licenses. We recommend sizing adapted to your reality — not the one that maximizes the invoice. License choice is validated during the free diagnostic, with precise costing adapted to user count, data volumes, and features required by your CIO.

Frequently Asked Questions

The questions our prospects ask most often — and our honest answers.

Our data is in disorganized Excel files. Is that a problem?

No. It's actually the most common case. The sprint is designed to work with 'good enough' data — disorganized Excel files, Access databases, partially populated ERP. What we can't do is create data that doesn't exist. If you have no budget tracking and no client database, the first task will be structuring, not BI — and we say so before signing.

We already had a BI project that failed. What will be different?

Most BI projects fail for the same reason: the consultant built a system that meets specifications, not the decision-maker's needs. Our approach starts with the CEO interview — not the data inventory. The dashboard is designed to change a specific decision, not to 'provide visibility'. And the autonomy test in week 6 ensures your teams know how to use and evolve it. A dashboard nobody uses is not a risk we accept.

Who maintains the dashboard after you leave?

Your teams. That's the whole point of knowledge transfer: the empowerment kit (guide, visual catalog, DAX documentation, reusable theme) and 2 trained referents enable your organization to create subsequent dashboards without us. Quarterly check-ups in the first year ensure autonomy is real, not theoretical.

How many Power BI licenses do 60 users need?

It depends on usage patterns. If all 60 users create and edit reports, you need 60 Pro licenses ($14/month each, ~$10,080/year total). If most only view, a mixed approach (PPU for builders + Pro for viewers) at ~$10,680/year gives advanced features to creators. F64 capacity (~$60,000/year reserved) only makes sense above ~350 viewers. We size licenses during the diagnostic — it's one of week 1's deliverables. See our complete licensing guide in the section above.

Does it work with our Sage or SAP ERP?

Yes. Power BI has native connectors for SAP HANA, SAP BW, Sage 100/X3, and most market ERPs. We have experience connecting to varied environments — including legacy systems with CSV export as the only option. The week 1 diagnostic includes connection testing to your actual sources.

We don't have a CIO. Is it still possible?

Yes. Many of our clients in Africa don't have a structured IT department. The sprint is designed to work with minimal technical contact — someone who knows system and file access. The business referent who iterates with us during weeks 4-5 is a business person, not IT. And the empowerment kit is designed for business users, not IT specialists.

What's the difference vs a Power BI freelancer?

A freelancer builds what you ask for. We build what you need — and the difference isn't always the same thing. The executive interview, data contract, anchoring protocol, and knowledge transfer aren't services a freelancer typically offers. Above all, our commitment is to an outcome (the autonomy test), not a number of days.

Do you work remotely or on-site?

Both. The executive interview and iteration sessions in weeks 4-5 are more effective in person — especially for the first project. Technical development (pipeline, model, DAX) is done remotely. For clients in West Africa, our teams are based in Abidjan. For international organizations, we work from Paris with on-site visits at key moments.

What decision should your dashboard change?

The 5-day diagnostic is free with no commitment. In one hour with your CEO, we identify the decisions that matter. In 5 days, you have your DRI score, your roadmap, and a clickable prototype of your future dashboard.

Request a free diagnostic →

Not sure whether you need ERP, BI or DMS? The free 5-day diagnostic maps your needs before any commitment.