Not just another dashboard. A decision-making tool built from the decisions you make — not the data you have. That's the difference between informing and driving action.
Our free 5-day diagnostic maps your data sources, measures your decision-making maturity (Data Readiness Index™), identifies the 3 dashboards that will have the most impact and delivers a costed action plan.
The industry starts from data and works up to decisions. We do the opposite. We start with what the executive needs to decide, identify the minimum data required, and build only what will be used.
"What decision do you make most often with the least visibility?" That's the first question. Not "what KPIs do you want?" — that generates endless lists.
Each decision generates a "data contract": the minimal list of measures it requires, not an exhaustive one. 45 targeted measures are worth more than 340 delivered out of habit. The model is lighter, faster, and used more.
The CEO sees green/amber/red in 3 seconds. Click, understand, decide. In 30 seconds. The dashboard isn't a consultation tool — it's a daily management reflex.
Any firm can build a dashboard. The real question is: will it still be used in 12 months? We've formalized three proprietary tools to measure, guarantee, and maintain the value of every BI project — before, during, and after.
Is your organization ready for a BI project? We assess 5 dimensions during the free diagnostic. The result determines the recommended engagement — and tells us honestly whether the project has a chance of succeeding.
No dashboard goes to production without validating these 5 criteria. It's our quality commitment — if any criterion fails, we stay an extra week. At our cost.
Health score out of 100, measured quarterly. Above 80, the dashboard is alive. Below 50, it dies — and we step in before it's too late.
Not the method from the consultant's perspective — the journey from the client's. Here's exactly what happens when you work with us, what it requires of you, and when you see the first result.
We sit down with the final decision-maker — CEO, Minister, Secretary General. No PowerPoint. No questionnaire. A one-hour conversation about the decisions they make each week under uncertainty. This conversation determines the entire project.
We open your Excel files, your ERP, your Access databases. We count empty cells, duplicates, inconsistent formats. We don't produce a 40-page audit — we produce a clickable mockup of your future dashboard, built on your real data. By the end of this week, you can see what your management tool will look like.
We build the data flow: source → cleansing → model → dashboard. The key moment arrives when the dashboard figure matches what your CFO calculates manually in Excel. This reconciliation test is decisive — it builds trust in the system. Until the CFO says 'yes, that's the right number,' we don't move on.
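For illustration only, here is the shape that reconciliation check can take as a script; the file names, columns, and 0.5% tolerance are hypothetical, and the real test happens live with the CFO in the room.

```python
import pandas as pd

# Hypothetical sketch of the reconciliation test: compare the dashboard's
# monthly revenue total with the figure the CFO computes manually in Excel.
TOLERANCE = 0.005  # accept a 0.5% rounding gap; anything larger is investigated

# Figures exported from the semantic model (hypothetical path and columns)
dashboard = pd.read_csv("gold_revenue_by_month.csv")               # columns: month, revenue
# The CFO's manual calculation (hypothetical workbook and sheet)
cfo = pd.read_excel("cfo_manual_calc.xlsx", sheet_name="Revenue")  # columns: month, revenue

merged = dashboard.merge(cfo, on="month", suffixes=("_dashboard", "_cfo"))
merged["gap_pct"] = (merged["revenue_dashboard"] - merged["revenue_cfo"]).abs() / merged["revenue_cfo"]

mismatches = merged[merged["gap_pct"] > TOLERANCE]
if mismatches.empty:
    print("Reconciliation passed: the dashboard matches the CFO's figures.")
else:
    print("Reconciliation failed on:")
    print(mismatches[["month", "revenue_dashboard", "revenue_cfo", "gap_pct"]])
```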
This is the most intense phase on the client side — and the most decisive. Every evening, we publish a dashboard version. Every morning, the business referent says what resonates and what doesn't. 3D charts disappear. 47 filters become 5. BI jargon headings become business language. In 10 days, the dashboard goes from 'technically correct' to 'I understand everything in 3 seconds.'
The business referent must, alone, without help, add a new visual to the dashboard, modify a filter, and explain a figure to their director. If they succeed, the dashboard is officially alive. If not, we stay an extra week — at our cost. Our contract doesn't specify a number of days. It specifies this outcome.
The biggest risk isn't technical failure — it's slow death. The dashboard works, but gradually nobody updates it. Our protocol includes a logbook (who uses what, when, for which decision), quarterly check-ups (half a day to recalibrate the dashboard as needs evolve), and a 12-month vitality test on three indicators: login frequency, number of refreshes, modifications by the referent.
Every organization has a different starting point. Our models adapt — but the Decision-First method and knowledge transfer are non-negotiable.
Eight real situations. On the left, daily work without a decision dashboard. On the right, daily work with one.
A dashboard's power depends on what lies beneath. Here's the architecture we deploy — and why each layer exists.
Semi-structured and unstructured data storage in Delta/Parquet format. Ideal when sources are heterogeneous (Excel + APIs + flat files). Spark notebooks enable complex transformations — multi-source joins, advanced cleansing, aggregations.
Traditional SQL data warehouse on Fabric. Standard T-SQL queries, materialized views, stored procedures. Ideal when the IT team has strong SQL skills and sources are already structured.
The Semantic Model — the Gold layer — is the contract between data and the dashboard. It's a star schema (fact tables + dimension tables) with named, documented, and versioned DAX measures. This layer ensures that 'revenue' means the same thing for the CFO, the sales director, and the CEO. Without a shared Semantic Model, each department calculates differently — and management committees become debates about numbers instead of debates about decisions.
You have data in Sage, Excel files, a SQL server, maybe an aging data warehouse. Here's how we take you to Microsoft Fabric without disrupting your operations.
We map all your data sources: ERP (Sage, SAARI, Business Central), SQL databases, Excel files, third-party APIs. Each source is evaluated: quality, volume, frequency, criticality. The Data Readiness Index™ measures your maturity across 5 dimensions. Deliverable: data diagnostic report + prioritized migration roadmap.
We create your Fabric workspace and configure ingestion pipelines via Dataflow Gen2 and Data Factory. Each source is connected, raw data lands in the Lakehouse Bronze layer. Cleansing and transformation produce Silver (clean data) then Gold (semantic model) layers. Unified storage in OneLake — one single copy of each data point, accessible everywhere.
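A minimal sketch of the Bronze → Silver → Gold steps as they might look in a Fabric Spark notebook; the table names and cleansing rules below are illustrative, not your actual model.

```python
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook the Spark session already exists; getOrCreate() is a no-op there.
spark = SparkSession.builder.getOrCreate()

# Bronze: raw extracts landed as-is from the source system (illustrative names throughout)
bronze = spark.read.table("bronze_sales_raw")

# Silver: typed, deduplicated, cleansed data
silver = (
    bronze
    .dropDuplicates(["invoice_id"])                                   # remove duplicate loads
    .filter(F.col("amount").isNotNull())                              # drop rows with no amount
    .withColumn("invoice_date", F.to_date("invoice_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)
silver.write.mode("overwrite").format("delta").saveAsTable("silver_sales")

# Gold: an aggregated fact table feeding the semantic model
gold = silver.groupBy("invoice_date", "customer_id").agg(F.sum("amount").alias("revenue"))
gold.write.mode("overwrite").format("delta").saveAsTable("gold_fact_revenue")
```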
Pipelines are orchestrated: scheduled refresh (daily, hourly or real-time depending on source), data-driven alerts (thresholds exceeded, anomalies detected), automatic report distribution via email or Teams. Power Automate triggers business notifications. Fabric monitoring tracks CU consumption and optimizes costs. Deliverable: operational, autonomous, monitored data platform.
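To make the alerting idea concrete, here is a sketch of a threshold check posting to a Teams incoming webhook; in practice we configure this in Power Automate rather than hand-code it, and the webhook URL, metric, and threshold below are placeholders.

```python
import requests

# Minimal sketch of a data-driven alert: if a monitored figure crosses a
# threshold, post a message to a Teams channel via an incoming webhook.
TEAMS_WEBHOOK_URL = "https://example.webhook.office.com/..."   # placeholder
THRESHOLD = 0.15                                               # e.g. a 15% stock-out rate

def check_and_alert(metric_name: str, value: float) -> None:
    if value > THRESHOLD:
        requests.post(
            TEAMS_WEBHOOK_URL,
            json={"text": f"Alert: {metric_name} is at {value:.0%}, above the {THRESHOLD:.0%} threshold."},
        )

check_and_alert("stock-out rate", 0.18)
```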
After migration: all your data in one Lakehouse. No more servers to maintain. Predictable costs. And most importantly: your data is ready for dashboards, automation and governance. Migration isn't the end — it's the foundation everything else builds on.
A dashboard without governance dies in 6 months. Here's how we structure the complete lifecycle, from development to production.
DEV Workspace: free iterations, test data
TEST Workspace: business validation, filtered real data
PROD Workspace: configured audiences, RLS active, SLA defined
DAX development and visual design happen in Power BI Desktop. Publishing targets the DEV workspace. Deployment Pipelines automate DEV → TEST → PROD promotion with automatic data source switching. No report reaches production without business validation.
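For teams that script their promotions, the Power BI REST API exposes the same operation. A hedged sketch, assuming an existing deployment pipeline and a valid Azure AD token:

```python
import requests

# Promote all content from the DEV stage to TEST using the Power BI REST API
# "Deploy All" operation. The pipeline ID and token are placeholders.
PIPELINE_ID = "<your-deployment-pipeline-id>"
TOKEN = "<azure-ad-access-token>"

response = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/pipelines/{PIPELINE_ID}/deployAll",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "sourceStageOrder": 0,  # 0 = Development stage, 1 = Test stage
        "options": {
            "allowCreateArtifact": True,
            "allowOverwriteArtifact": True,
        },
    },
)
response.raise_for_status()
print("Deployment started:", response.json())
```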
Production reports are distributed via Power BI Apps — not through direct workspace sharing. Each App targets a defined audience (executive leadership, finance, operations) with its own access rights and navigation.
Validated datasets are certified (visible badge in the catalog). Only trained referents can certify. Uncertified datasets remain accessible but without a quality guarantee — users know exactly what they're using.
Refresh SLA: we define an explicit SLA with the client for each dashboard. Example: 'financial data is updated every business day at 7:00am.' This SLA is monitored via the Vitality Score and included in the quarterly check-up.
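A minimal sketch of how such an SLA can be checked programmatically against the Power BI refresh history API; workspace, dataset, and token values are placeholders, and time-zone and scheduling details are simplified.

```python
import requests
from datetime import datetime, timezone

# Has the dataset behind a dashboard completed a refresh since 7:00am (UTC here,
# for simplicity)? Values below are placeholders.
GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"
TOKEN = "<azure-ad-access-token>"

resp = requests.get(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets/{DATASET_ID}/refreshes?$top=1",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
last = resp.json()["value"][0]          # most recent refresh attempt

deadline = datetime.now(timezone.utc).replace(hour=7, minute=0, second=0, microsecond=0)
ended_at = datetime.fromisoformat(last["endTime"].replace("Z", "+00:00"))

if last["status"] == "Completed" and ended_at >= deadline:
    print(f"SLA met: data refreshed at {ended_at:%H:%M} UTC")
else:
    print(f"SLA breach: last refresh {last['status']} at {ended_at:%H:%M} UTC")
```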
Storage mode choice determines performance, data freshness, and cost. Here's the comparison we use to decide with our clients.
Data is copied into the in-memory VertiPaq engine. Every query is instantaneous.
Max size: 1 GB (Pro), up to 100 GB (PPU) and 400 GB on Fabric capacity. Full refresh can be slow on large volumes — incremental refresh solves this.
Stable sources, moderate volumes, maximum performance and full DAX features required. Default mode for the majority of our sprints.
Direct reading of Delta files in OneLake. Performance close to Import, near real-time freshness.
Fabric exclusive. No calculated columns or tables. Automatic fallback to DirectQuery if the model uses SQL views or RLS on the SQL endpoint. All tables must be in the same Lakehouse.
Large volumes already in Fabric/OneLake, freshness needed without managing refreshes. Best compromise when Fabric infrastructure is in place.
Queries are sent in real time to the source. No data copy in Power BI.
Performance entirely dependent on source database optimization. Load on source system with every user interaction. Reduced DAX features. No automatic date hierarchies.
Very large volumes (hundreds of GB), continuously changing data (IoT, transactions), or when regulations prohibit data copying. Reserved for cases where no alternative is viable.
Our recommendation: for a first sprint, we use Import — it's the simplest, fastest, and most compatible with all DAX features. Migration to Direct Lake or DirectQuery happens when volumes or freshness needs justify it.
A dashboard shared with 60 users doesn't mean everyone sees the same thing. Here are the security mechanisms we deploy on every project.
The Bouaké branch director sees Bouaké's data. The CEO sees every branch. Same report, same DAX measures, but the data is filtered automatically according to the user's profile. RLS is defined in the semantic model and enforced server-side — the user cannot bypass the filter.
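To make the server-side idea concrete, here is a plain-Python analogy (not the Power BI mechanism itself, which is a DAX filter in the semantic model): the user-to-branch mapping is applied before any row leaves the server.

```python
# Conceptual sketch: the filter depends on who is asking, and the client never
# receives the rows it is not entitled to see. Users and figures are invented.
USER_BRANCHES = {
    "director.bouake@example.org": ["Bouake"],                          # one branch
    "ceo@example.org": ["Bouake", "Abidjan", "Yamoussoukro"],           # all branches
}

SALES = [
    {"branch": "Bouake", "revenue": 120},
    {"branch": "Abidjan", "revenue": 340},
    {"branch": "Yamoussoukro", "revenue": 95},
]

def query_sales(user: str) -> list[dict]:
    """Return only the rows the user's profile allows, enforced server-side."""
    allowed = USER_BRANCHES.get(user, [])
    return [row for row in SALES if row["branch"] in allowed]

print(query_sales("director.bouake@example.org"))  # only Bouake
print(query_sales("ceo@example.org"))              # all branches
```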
The sales manager sees revenue but not net margin. The CFO sees both. OLS hides entire columns or measures — they're invisible, not just hidden by a filter.
Every dataset and report receives a sensitivity label: Public, Internal, Confidential, Highly Confidential. The label determines export rights (no PDF for 'Highly Confidential'). Native integration with Microsoft Purview and Azure Information Protection.
For European clients and international organizations: Fabric capacities are deployed in the region of your choice (Europe, South Africa, or other). Data never leaves that region. Every access is logged (who, when, what, from which IP). Logs are exploitable via the Power BI API and integrable into your SIEM.
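A hedged sketch of pulling those logs through the admin Activity Events API, ready to forward to a SIEM; the token is a placeholder and the call requires Power BI admin permissions.

```python
import requests

# Pull one day of Power BI audit events (who, when, what, from which IP).
TOKEN = "<azure-ad-access-token>"
url = (
    "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
    "?startDateTime='2025-01-15T00:00:00'&endDateTime='2025-01-15T23:59:59'"
)

events = []
while url:
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    payload = resp.json()
    events.extend(payload.get("activityEventEntities", []))
    url = payload.get("continuationUri")  # the API pages results within the day

print(f"{len(events)} audit events collected, ready to forward to your SIEM.")
```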
A slow dashboard is an abandoned dashboard. Here are the optimization practices we systematically apply — and the tools we use to measure.
Fact tables at the center, dimension tables around them, unidirectional relationships. No snowflake. No bidirectional relationships unless documented. Every deviation is a measurable performance degradation — we enforce this discipline from design.
Iterative measures (SUMX, FILTER) scan every row — acceptable on 10,000 rows, catastrophic on 10 million. We favor aggregates (CALCULATE, SUM) whenever possible.
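The same principle shown as a pandas analogy rather than DAX: row-by-row iteration versus one bulk aggregation over the same 10 million rows.

```python
import time
import numpy as np
import pandas as pd

# Conceptual analogy (not DAX): iterator-style logic touches every row,
# aggregate-style logic lets the engine work in bulk.
n = 10_000_000
df = pd.DataFrame({"qty": np.random.randint(1, 10, n), "price": np.random.rand(n) * 100})

t0 = time.time()
slow = sum(row.qty * row.price for row in df.itertuples())   # "SUMX-like": one row at a time
t1 = time.time()
fast = (df["qty"] * df["price"]).sum()                        # "SUM-like": bulk aggregation
t2 = time.time()

print(f"row-by-row: {t1 - t0:.1f}s   vectorized: {t2 - t1:.2f}s   same result: {np.isclose(slow, fast)}")
```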
On a 3-year dataset, only the last 30 days are refreshed each cycle. Historical data is partitioned and doesn't move. Result: refresh drops from 45 minutes to 3 minutes.
Performance Analyzer in Power BI Desktop measures each visual's render time — if a chart takes more than 2 seconds, we diagnose it. DAX Studio identifies the most expensive measures in memory and CPU. Both tools are used in every sprint.
Our standard: no visual should exceed 3 seconds of load time on a standard connection. That's the threshold beyond which the user goes back to Excel.
Choosing the right Power BI license is an architectural decision, not an administrative one. Wrong choice = thousands of dollars in excess costs or critical features locked out. This section gives you everything needed to decide — without looking elsewhere.
In April 2025, Microsoft increased Power BI prices by 40% (Pro) and 20% (PPU). Premium P-SKUs are retired for new customers — replaced by Fabric F-SKUs. Here's the current landscape.
Report creation in Desktop. Limited publishing. No sharing or collaboration.
No sharing, no collaboration, no shared workspaces. Can view reports if hosted on F64+ or Premium capacity.
Individual user exploring data locally, or viewer in an organization that already has Fabric F64+ capacity.
Standard license to create, publish, and share reports. All creators AND viewers must have a Pro license.
No Deployment Pipelines (DEV→TEST→PROD), no paginated reports, no XMLA read/write, no AI/Copilot, no Direct Lake, datasets limited to 1 GB. Every viewer must also pay for a Pro license.
Team of 5–50 people creating and viewing reports. Simple sources (Excel, single ERP), moderate volumes. This is the starting point for our sprints.
All Premium features, billed per user. Ideal for advanced data teams without investing in dedicated capacity.
PPU content can only be shared with other PPU users. To share with Pro or Free users, content must be on Fabric F64+ capacity. No free viewers in PPU-only mode.
Deployment Pipelines (DEV→TEST→PROD), paginated reports, XMLA read/write, Dataflows Gen2, AI/Copilot, 100 GB datasets, 48 refreshes/day. Cost-effective for teams of 5–250 power users needing these advanced features.
Since July 2024, Premium P-SKUs are retired for new customers. Fabric (F-SKU) is the only capacity model available. The principle: you pay for shared compute power (Capacity Units), not per-user licenses. Power BI, Data Factory, Spark, Real-Time Analytics — everything consumes CUs from the same pool.
Content creators still need a Pro license ($14/month) to publish. But viewers can access reports for free if capacity is F64 or higher.
| SKU | CU | PAYG Price | 1-Year Reserved | Use case |
|---|---|---|---|---|
| F2 | 2 | ~$263/month | ~$156/month | Development, testing, POC |
| F4 | 4 | ~$526/month | ~$313/month | Small team, 1-2 dashboards |
| F8 | 8 | ~$1,051/month | ~$626/month | SME, 3-5 dashboards |
| F16 | 16 | ~$2,102/month | ~$1,251/month | Department, medium datasets |
| F32 | 32 | ~$4,205/month | ~$2,502/month | Multi-department, Spark |
| F64 ★ | 64 | ~$8,409/month | ~$5,004/month | Free viewers · Enterprise |
| F128 | 128 | ~$16,819/month | ~$10,008/month | Large enterprise, heavy workloads |
Verdict: for 60 users, the Pro (A) or mixed PPU (C) scenario is most cost-effective at around $10,000/year. F64 at ~$60,000/year (reserved) only becomes worthwhile when viewer count exceeds ~350 or when Fabric features (Direct Lake, data pipelines, Spark, 400 GB+ volumes) are essential. The threshold isn't user count alone — it's the combination of volume + features + viewer count.
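A back-of-the-envelope check of that threshold, using the list prices above (Pro at $14/user/month, F64 reserved at ~$5,004/month); it counts viewers only (creators need Pro in both scenarios) and sets the feature question aside.

```python
# Rough crossover between "a Pro license for every viewer" and
# "F64 capacity with free viewers", using the list prices quoted above.
PRO_PER_USER_YEAR = 14 * 12          # $168 per user per year
F64_RESERVED_YEAR = 5_004 * 12       # ~$60,048 per year (1-year reserved)

def yearly_cost_all_pro(viewers: int) -> int:
    return viewers * PRO_PER_USER_YEAR

breakeven = F64_RESERVED_YEAR // PRO_PER_USER_YEAR   # roughly 357 viewers
print(f"60 viewers on Pro: ${yearly_cost_all_pro(60):,}/year")   # ~$10,080
print(f"F64 reserved:      ${F64_RESERVED_YEAR:,}/year")
print(f"F64 pays off above ~{breakeven} Pro-licensed viewers (features aside)")
```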
Beyond price, the real question is: which features will your organization actually use? Here's what's locked in Pro and unlocked in PPU or Fabric.
In Pro, you're limited to 8 refreshes per day. For a financial dashboard continuously updated by the ERP, that's insufficient. In PPU/Fabric, 48 refreshes/day — one every 30 minutes. For near real-time, Direct Lake (Fabric only) eliminates the need for refresh by reading Delta files directly from OneLake.
Automated DEV → TEST → PROD promotion with automatic data source switching per environment. Impossible in Pro — deployments are manual, meaning human errors and no formalized business validation. This is the foundation of our governance (section above).
In Pro, each dataset is limited to 1 GB. To consolidate finance + supply chain + HR in a single semantic model, 1 GB is quickly reached. PPU pushes to 100 GB, Fabric to 400 GB per dataset. Multi-domain consolidation requires this headroom.
Paginated reports (SSRS format) are essential for standardized financial statements, purchase orders, and regulatory reports. They generate identically formatted PDFs regardless of data size. Locked in Pro, available from PPU.
The XMLA endpoint allows administering the semantic model from third-party tools (Tabular Editor, ALM Toolkit, DAX Studio in write mode). Essential for automated deployment, migration scripts, and detailed model auditing. Pro provides no XMLA access.
Copilot in Power BI enables creating visuals, writing DAX, and interpreting trends in natural language. AutoML and Cognitive Services are also available. These AI features are only accessible in PPU or Fabric — locked in Pro. Microsoft justified the 2025 price increase by this AI investment.
Official Power BI app displaying real-time CU consumption by workload (Power BI, Data Factory, Spark). Detects consumption peaks, over-consuming refreshes, and queries saturating capacity. Configured from deployment.
In PAYG, capacity can be paused (nights, weekends) and scaling up/down adapts to load. 1-year reservation saves ~40% vs PAYG — F64 drops from ~$8,409 to ~$5,004/month. We size capacity during the diagnostic.
When capacity is saturated, background tasks are slowed to preserve report interactivity. The 'capacity overage' option lets you pay for the excess rather than suffer slowdowns. Alerts are configured so the CIO is notified before users are impacted.
Azure Pricing Calculator → azure.microsoft.com/pricing/calculator
Estimate your monthly costs for any combination of Azure services, including Fabric.
Fabric Capacity Estimator → microsoft.com/fabric/capacity-estimator
Enter your workloads (Power BI, Spark, Data Factory) and get the recommended F-SKU with CU breakdown.
Fabric F64 trial → Start free trial
Microsoft offers a Fabric F64 trial for 60 days, enough to test real workloads and measure consumption before purchasing.
Note: prices above are based on the US West 2 region (~$0.18/CU/hour). Prices vary ±10-15% by Azure region (West Europe: ~$0.22/CU/hour). Use the Azure calculator with your region for precise pricing.
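As a sanity check on where those monthly figures come from, here is the derivation from the hourly CU rate, assuming Azure's usual ~730 billable hours per month:

```python
# How an F-SKU monthly PAYG price derives from the hourly CU rate.
HOURS_PER_MONTH = 730  # Azure's usual monthly billing convention

def monthly_payg(cu: int, rate_per_cu_hour: float) -> float:
    return cu * rate_per_cu_hour * HOURS_PER_MONTH

print(f"F64, US West 2   (~$0.18/CU/h): ~${monthly_payg(64, 0.18):,.0f}/month")  # ~$8,410
print(f"F64, West Europe (~$0.22/CU/h): ~${monthly_payg(64, 0.22):,.0f}/month")  # ~$10,278
```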
NJIADATA does not sell Microsoft licenses. We recommend sizing adapted to your reality — not the one that maximizes the invoice. License choice is validated during the free diagnostic, with precise costing adapted to user count, data volumes, and features required by your CIO.
The questions our prospects ask most often — and our honest answers.
No. It's actually the most common case. The sprint is designed to work with 'good enough' data — disorganized Excel files, Access databases, partially populated ERP. What we can't do is create data that doesn't exist. If you have no budget tracking and no client database, the first task will be structuring, not BI — and we say so before signing.
Most BI projects fail for the same reason: the consultant built a system that meets specifications, not the decision-maker's needs. Our approach starts with the CEO interview — not the data inventory. The dashboard is designed to change a specific decision, not to 'provide visibility'. And the autonomy test in week 6 ensures your teams know how to use and evolve it. A dashboard nobody uses is not a risk we accept.
Your teams. That's the whole point of knowledge transfer: the empowerment kit (guide, visual catalog, DAX documentation, reusable theme) and 2 trained referents enable your organization to create subsequent dashboards without us. Quarterly check-ups in the first year ensure autonomy is real, not theoretical.
It depends on usage patterns. If all 60 users create and edit reports, you need 60 Pro licenses ($14/month each, ~$10,080/year total). If most only view, a mixed approach (PPU for builders + Pro for viewers) at ~$10,680/year gives advanced features to creators. F64 capacity (~$60,000/year reserved) only makes sense above ~350 viewers. We size licenses during the diagnostic — it's one of week 1's deliverables. See our complete licensing guide in the section above.
Yes. Power BI has native connectors for SAP HANA, SAP BW, Sage 100/X3, and most market ERPs. We have experience connecting to varied environments — including legacy systems with CSV export as the only option. The week 1 diagnostic includes connection testing to your actual sources.
Yes. Many of our clients in Africa don't have a structured IT department. The sprint is designed to work with a minimal technical contact: someone who knows how to access your systems and files. The business referent who iterates with us during weeks 4-5 is a business person, not IT. And the empowerment kit is designed for business users, not IT specialists.
A freelancer builds what you ask for. We build what you need — and the two aren't always the same thing. The executive interview, data contract, anchoring protocol, and knowledge transfer aren't services a freelancer typically offers. Above all, our commitment is to an outcome (the autonomy test), not a number of days.
Both. The executive interview and iteration sessions in weeks 4-5 are more effective in person — especially for the first project. Technical development (pipeline, model, DAX) is done remotely. For clients in West Africa, our teams are based in Abidjan. For international organizations, we work from Paris with on-site visits at key moments.
The 5-day diagnostic is free with no commitment. In one hour with your CEO, we identify the decisions that matter. In 5 days, you have your DRI score, your roadmap, and a clickable prototype of your future dashboard.
Request a free diagnostic →
Not sure whether you need ERP, BI or DMS? The free 5-day diagnostic maps your needs before any commitment.