The dominant model: data-first
If you've ever been involved in a BI project, you'll recognise the sequence. The consultant arrives, maps data sources, builds a star schema (fact tables, dimension tables), creates sophisticated DAX measures, develops a dozen dashboards, trains users, and leaves. All in 9 to 14 months.
This is the data-first approach. It starts from the bottom — raw data — and works upward to the decision. It's the method taught in every Microsoft certification, promoted by every firm, applied in every RFP. And it fails in the majority of cases.
(Callout, in a typical project: deliverables delivered vs. still viewed after 6 months.)
These figures are orders of magnitude drawn from our cumulative experience across dozens of BI projects at European telecom operators, industrial groups, and financial institutions. Industry studies (Gartner, Forrester) converge on the same finding: a majority of BI projects fail to deliver expected business value. The pattern we observe is consistent: the data model is technically flawless and functionally oversized. The consultant built an analytical cube allowing 400 filter combinations, but the CEO needs 3 views and wants them in 2 seconds on their phone.
A perfect data model that changes no decision is an intellectual exercise. Not Business Intelligence.
The inversion: Decision-First Design
The approach we've developed over 20 years of practice inverts the chain. We don't start from data. We start from decisions.
How it works: a food industry group case study
Let's take a concrete case. A food industry group — 8 subsidiaries, 4 processing plants, a distribution network across 3 countries, 200 product references, 800 employees. The CEO manages from headquarters and needs visibility across the entire chain, from raw material purchasing to net margin by product.
A traditional firm would start by mapping the group's 14 data sources: the accounting ERP, Excel production tracking files, the sales database, logistics data from each warehouse, bank statements, HR files. They'd build a unified star schema with dozens of dimension tables (products, customers, suppliers, sites, periods, currencies). And 9 months later, deliver 10 dashboards covering finance, production, sales, logistics, and HR.
We start differently. We start with one hour with the CEO.
Step 1: identify the 5 vital decisions
After one hour of conversation, here is what emerges. The CEO of this food group regularly makes five critical decisions — and for each one, they lack visibility.
The resulting matrix, one row per decision and one column each for the data required, its source, and the refresh frequency, fits on an A3 sheet. It is the real project specification. Not 78 pages of functional specs: a decision × data × source × frequency matrix, with each cell validated by the CEO personally, not by an intermediate project manager.
Step 2: the data contract
Each decision generates what we call a "data contract" — the minimum list of data needed for that decision to be informed. Not "useful" data. Not "interesting" data. The data without which the decision is made blind.
For the decision "should we restock Bouaké or San Pedro?", the contract is precise: real-time stock by reference and site (source: ERP inventory module), production rate over the last 7 days (source: Excel file from plant managers), confirmed customer orders for the next 14 days (source: sales database). Three sources. Not fourteen.
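To make the contract tangible, here is a minimal sketch of how its three items could translate into DAX measures. The table, column, and measure names (Inventory, Production, Sales, 'Date') are illustrative assumptions, not the client's actual model, and the sketch assumes an ordinary related date table.

```dax
-- Sketch only: table, column, and measure names are assumptions for illustration.

Stock On Hand =
SUM ( Inventory[QuantityOnHand] )            -- real-time stock by reference and site (ERP inventory module)

Production Rate (7 Days) =
CALCULATE (
    AVERAGE ( Production[DailyOutput] ),     -- plant managers' daily tracking
    DATESINPERIOD ( 'Date'[Date], MAX ( 'Date'[Date] ), -7, DAY )
)

Confirmed Orders (14 Days) =
CALCULATE (
    SUM ( Sales[OrderQuantity] ),            -- sales database
    Sales[Status] = "Confirmed",
    DATESBETWEEN ( 'Date'[Date], TODAY (), TODAY () + 14 )
)
```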
The data model is then built to serve these 5 contracts, and only these 5 contracts. Result: instead of 340 DAX measures and 12 dashboards, we produce 45 measures and 4 dashboards. But these 4 dashboards answer exactly the questions the CEO asks every week.
Step 3: the dashboard as a decision tree
This is where the visual difference becomes striking. A data-first dashboard for this food group would display a revenue trend chart, a detail table by subsidiary, filters by product, period, and geography. It's exhaustive. It's informative. And the CEO doesn't know what to do with it at 8am before their board meeting.
A decision-first dashboard shows something else: a green/amber/red signal per plant. The CEO sees in 3 seconds that Bouaké is amber. They click. Oil range stock covers 4 days, 3 orders pending. The recommendation: transfer 12 tonnes from San Pedro or place a supplier order. They decide. In 30 seconds, not 30 minutes.
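For readers who want to see the mechanics, here is a hedged sketch of what the plant-level signal could look like, building on the illustrative measures above. The coverage definition (stock divided by average daily confirmed demand) and the 5-day and 10-day thresholds are assumptions standing in for the CEO's own operating rules.

```dax
-- Illustrative logic only: thresholds and the coverage definition would come from the client.

Stock Coverage (Days) =
DIVIDE ( [Stock On Hand], DIVIDE ( [Confirmed Orders (14 Days)], 14 ) )

Plant Status =
SWITCH (
    TRUE (),
    [Stock Coverage (Days)] < 5,  "Red",     -- act now: transfer or order
    [Stock Coverage (Days)] < 10, "Amber",   -- watch: coverage is tightening
    "Green"
)
```

Evaluated per plant in a visual, this is the green/amber/red strip the CEO scans in 3 seconds.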
The first dashboard informs. The second drives action. The difference isn't aesthetic — it's structural. The data-first dashboard was designed by a consultant who thinks in measures and dimensions. The decision-first dashboard was designed by someone who understood that the CEO wants to know one thing at 8am: is there a problem, and if so, what to do?
A good BI consultant knows how to build a data model. An excellent BI consultant knows the data model is the means, not the end. The end is the decision the CEO will make on Monday morning.
Why the industry doesn't work this way
Let's be honest: there are legitimate reasons why data-first dominates. First, it's safer. A consultant who builds an exhaustive data model doesn't risk "missing" a future request. By covering everything, they protect against the unpredictable — and in large organisations with multiple stakeholders, that caution makes sense. Second, data-first has real merits in certain contexts: when the organisation doesn't yet know what it's looking for, when the goal is data exploration, or when a unified data warehouse is a technical prerequisite for other projects (data science, AI, regulatory reporting).
But we must also acknowledge an economic reality. The data-first model is more profitable for the consultant. A complete model with 340 DAX measures requires months of specialist work. Each measure is a billable day. Each dashboard is a deliverable that justifies the project price. The fact that 10 of those 12 dashboards will never be opened is no one's problem — the contract was fulfilled, the specifications were delivered.
The decision-first approach is less profitable in the short term for the consultant. 45 measures instead of 340 means 3 times fewer billable days. 4 dashboards instead of 12 makes the project look "small" on a firm's CV. And the CEO interview — that crucial hour that determines everything else — isn't a billable deliverable in standard fee schedules. It's an investment in understanding that most firms don't value.
We made a different choice. Not because data-first is "bad" — but because in the majority of projects we've observed, the decision-maker's real need is targeted, not exhaustive. Meeting a targeted need with a targeted tool produces more value than an exhaustive tool nobody uses. A client who uses their 4 dashboards daily comes back for phase 2. One whose project died in a drawer is a lost client — and a reference we can't cite.
Where this approach comes from
Decision-First Design wasn't born from theory. It was born from a pattern repeated over 20 years of consulting in Europe. At a French telecom operator, we delivered a 200-measure data model for the marketing department. Six months later, the marketing director used 4 measures. The other 196 had been built to meet specifications signed off by a project manager, not by the marketing director himself.
At a Swiss industrial group, the CFO had a 15-page dashboard. He only opened one — the one showing consolidated cash by subsidiary and distance to the bank covenant. Everything else was noise. When we asked "what decision do you make with this dashboard?", he answered: "I check whether I need to call the bank this week or not." One decision. One number. Everything else existed only to justify the project budget.
It's this accumulation of experiences — ambitious projects whose value always concentrated in 10 to 20% of deliverables — that led us to invert the logic. Instead of building everything and hoping the decision-maker finds what they need, we first identify what the decision-maker needs, and build only that.
We spent the first part of our career building perfect data models. We're spending the second building only what will be used. It's harder, paradoxically — because the most demanding constraint isn't technical, it's intellectual: understanding the decision before touching the data.
The three tests of Decision-First Design
Every element of the system — every table, every DAX measure, every visual — must pass three tests before being built.
The decision test. What decision does this measure inform? If the answer is "it gives visibility" or "it's a best practice," it fails the test. A measure that doesn't serve an identified decision is a measure we don't build.
The Monday morning test. If the CEO opens this dashboard on Monday at 8am, will they know in under 10 seconds whether there's a problem? If the answer is no — if they need to click, filter, interpret, compare before understanding — the design needs rework. Critical information must jump off the screen.
The action test. Once the problem is identified, does the dashboard suggest an action? Not necessarily an automatic recommendation — but at minimum a navigation path to the detail needed to act. If the CEO sees red on Bouaké but has to open Excel to understand why, the dashboard has failed at the last step.
Measurable impact
Across projects where we've applied Decision-First Design in Europe, we observe significant gaps compared to data-first projects. These figures are averages from our portfolio — not from an academic study. We share them as benchmarks, not promises.
(Key indicators vs. the classic approach: adoption at 6 months, data-model volume, decisions informed per dashboard.)
The last figure deserves an explanation. In a data-first project, the link between a dashboard and a specific decision is typically not documented — because it wasn't designed. In our approach, each dashboard is linked to identified decisions from the design stage. This makes ROI measurable: we can count how many decisions are actually made using the dashboard, and which are still made the old way.
The most striking result isn't technical. It's behavioural. When a dashboard is designed for a specific decision, the decision-maker opens it. Every day. Without being prompted. Because they know exactly why they're opening it and what they'll find. The dashboard is no longer a reporting tool — it's a management reflex.
What if your CEO can't articulate their decisions?
This is the most common question. And that's precisely where the expertise lies. Most leaders don't spontaneously articulate their decisions in terms usable for a BI project. They say "I want visibility on sales" or "I need a financial dashboard." It's our job to translate these requests into concrete decisions.
How? By asking the right questions. Not "what KPIs do you want?" — that question generates endless lists. But "what was the last decision you made this week with a sense of uncertainty?" Or "if you could have an instant answer to one question every morning, what would it be?" Or "what number do you systematically check before your board meeting?"
These questions extract real decisions, not theoretical ones. And it's on this reality that we build.
That's also why our 5-day diagnostic always starts with this interview. Not with a technical mapping. Not with a data inventory. With one hour with the person who makes the decisions. Because if we don't know which decision we want to inform, the best data model in the world remains an intellectual exercise. And if we do know, 45 well-chosen measures are worth more than 340 measures delivered out of habit.
About NJIADATA
NJIADATA is a consulting firm specialising in Microsoft solutions for African markets, based in Paris and Abidjan. 4 senior founders, 90+ years of combined experience at European telecom operators, financial institutions, and industrial groups. Our mission: bring international-grade consulting standards to the continent while transferring skills to local teams.
From source to insight.