The Sentinel Framework: Measuring Governance in the Great Lakes Region

Version 1.0 | April 2026
Published by Mulembe Politics, Mulembe Nation Network
Licence: Creative Commons BY-NC-SA 4.0
Citation: Mulembe Politics. (2026). The Sentinel Framework: Measuring Governance in the Great Lakes Region (Version 1.0). Mulembe Nation Network. https://mulembenation.co.ke/politics/sentinel-methodology/

Version | Date       | Changes
v1.0    | April 2026 | Initial publication. Covers GQI, IRR, and CIS frameworks. Kenya primary case study.

All future methodology updates will be documented here. Version history is permanent — no silent revisions.

Abstract

The Sentinel Framework is a multi-scale governance measurement methodology developed for Africa’s Great Lakes region. It operationalises governance quality as three measurable constructs: institutional rules-compliance (whether governments follow their own laws), resource delivery efficiency (whether budgets reach frontline services), and institutional resilience (whether organisations survive leadership changes and resist capture). The framework produces composite scores at ward, county, national, and regional levels, enabling comparison across scales that global indices cannot reach. This document describes the conceptual foundations, component definitions, measurement protocols, data sources, and validation strategy for Sentinel Framework Version 1.0.

1. Motivation and Design Principles

1.1 The problem with existing governance indices

Established governance indices — the World Bank’s Worldwide Governance Indicators, the Mo Ibrahim Index of African Governance, V-Dem, and the Fragile States Index — share three structural limitations that reduce their utility for the Great Lakes context.

Capital city bias. Composite national scores aggregate governance quality across an entire country, obscuring dramatic sub-national variation. A Kenyan national governance score tells an investor nothing about whether a specific county’s procurement processes are reliable, or whether a ward’s development fund is being executed. The Sentinel Framework is designed from the sub-national level up.

Annual measurement frequency. Most indices publish annually with 12–18 month publication lags. Governance quality in the Great Lakes region can change substantially in weeks — a court packing episode, a sudden budget reallocation, an electoral commission chair’s removal. The Sentinel Framework targets quarterly updates with near-real-time flagging of material institutional events.

Institutional homogeneity assumption. Global indices treat all institutions of the same type as comparable regardless of constitutional form, legal personality, or operational mandate. The Sentinel Framework distinguishes between institutions by their legal classification and applies scale-specific variable definitions accordingly.

1.2 Design principles

Multi-scale coherence. The same measurement logic applies at ward, constituency, county, and national level. Scores are comparable within a scale, not across scales without normalisation.

Source transparency. Every data point links to a primary source: court records, audit reports, gazette notices, electoral commission filings, or legislative records. Scores derived from community-submitted data are clearly distinguished from those derived from official sources.

Honest uncertainty. Confidence scores and confidence intervals are published alongside point estimates. Where data is insufficient to score a component, this is declared rather than imputed.

Proprietary aggregation, transparent components. The component definitions and measurement protocols in this document are published in full. The aggregation methodology — how component scores combine into composite index scores — is proprietary to Mulembe Nation and is not published here. This is standard practice for commercial intelligence products and is consistent with the approach of the Economist Intelligence Unit, Oxford Analytica, and similar providers. Data licensing clients receive full methodology disclosure under NDA.

2. The Governance Quality Index (GQI)

2.1 Definition

The Governance Quality Index measures overall governance performance at a given administrative unit — ward, constituency, county, or national level — across three dimensions: electoral competition and legitimacy, administrative process compliance, and last-mile service delivery efficiency. Each dimension is scored on a 0–100 scale. Higher scores indicate stronger governance performance.

2.2 Components

Electoral Competition (EC)

Measures the quality and legitimacy of electoral processes at the relevant scale. Covers voter registration completeness, candidate field diversity, electoral dispute rates and resolution, turnout relative to registration, and compliance with statutory electoral timelines.

Primary data sources: Independent Electoral and Boundaries Commission (Kenya), Electoral Commission of Uganda, National Electoral Commission (Rwanda), court records of electoral petitions, gazette notices of election results.

Measurement unit: Score from 0–100. 80+ indicates competitive elections with low dispute rates and high compliance. Below 40 indicates significant process failures.

Administrative Process Compliance (APC)

Measures whether the administrative unit follows its own legally mandated processes — procurement procedures, public participation requirements, statutory reporting deadlines, budget approval timelines, and audit response obligations.

Primary data sources: Office of the Controller of Budget (Kenya), Auditor-General reports, county assembly Hansard records, procurement portal filings, court orders on administrative compliance.

Measurement unit: Score from 0–100. Derived from the ratio of compliant processes to total required processes, adjusted for materiality of non-compliance and recurrence.
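The ratio-with-adjustments definition above can be sketched in Python. This is a minimal illustration, not the published formula: the `materiality` default and the recurrence uplift factor are assumptions, since the actual adjustment factors belong to the proprietary aggregation.

```python
def apc_score(processes):
    """Illustrative APC sketch: compliant/total ratio, discounted for
    materiality and recurrence of each non-compliant process.
    `materiality` (0-1) and `repeat` are hypothetical fields."""
    if not processes:
        return None  # insufficient data is declared, not imputed
    penalty = 0.0
    for p in processes:
        if not p["compliant"]:
            weight = p.get("materiality", 0.5)    # assumed default weight
            if p.get("repeat", False):
                weight = min(1.0, weight * 1.5)   # recurrence uplift (assumption)
            penalty += weight
    return round(100 * max(0.0, 1 - penalty / len(processes)), 1)

# Hypothetical audit of four required processes, two non-compliant.
sample = [
    {"compliant": True},
    {"compliant": True},
    {"compliant": False, "materiality": 0.8, "repeat": True},
    {"compliant": False, "materiality": 0.2},
]
score = apc_score(sample)  # 70.0
```

A repeated, material failure drags the score far more than an isolated minor one, which is the behaviour the definition calls for.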

Last-Mile Execution (LME)

Measures the efficiency of resource delivery from budget allocation to frontline service availability. Covers budget absorption rates adjusted for audit quality, functional facility rates (health, education, water), and service continuity indicators.

Primary data sources: National Treasury budget execution reports, Controller of Budget county reports, Kenya Health Information System, NEMIS, community monitoring submissions verified through the Mulembe Nation data pipeline.

Measurement unit: Score from 0–100. Accounts for the quality of expenditure, not only absorption volume — a county that spends 95% of its budget but with 20% flagged for procurement irregularities scores lower than a county that spends 80% cleanly.
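The worked example in the definition (95% absorption with 20% flagged scoring below 80% clean absorption) can be reproduced with a simple quality-adjusted sketch. The multiplicative adjustment is an assumption for illustration only; the published aggregation is proprietary.

```python
def lme_absorption_component(absorption_rate, flagged_share):
    """Illustrative quality-adjusted absorption: only cleanly spent
    funds count toward the component. Multiplicative form is an
    assumption, not the proprietary formula."""
    clean_spend = absorption_rate * (1 - flagged_share)
    return round(100 * clean_spend, 1)

# The example from the text: 95% absorbed with 20% flagged for
# procurement irregularities scores below 80% absorbed cleanly.
a = lme_absorption_component(0.95, 0.20)  # 76.0
b = lme_absorption_component(0.80, 0.00)  # 80.0
assert a < b
```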

2.3 Scale-specific variable definitions

Scale    | EC Measurement                                        | APC Measurement                                        | LME Measurement
Ward     | Ward representative election, petition rate           | Ward Development Fund compliance                       | Borehole functionality, school feeding programme
County   | Governor/assembly elections, petition outcomes        | County laws, CFSP timelines, audit response            | County hospital drug stock, road maintenance
National | Presidential/parliamentary elections, IEBC compliance | Constitutional body independence, statutory deadlines  | National road maintenance, vaccination coverage

2.4 Coverage

Version 1.0 covers Kenya’s 47 counties and 290 constituencies. Ward-level coverage is currently limited to counties with sufficient public audit data. Uganda and Rwanda coverage will be added in Version 1.1, with Tanzania, Burundi, DRC, and further countries following on the schedule set out in Section 8, as data infrastructure is established in each country.

3. The Institutional Resilience Rating (IRR)

3.1 Definition

The Institutional Resilience Rating measures an organisation’s capacity to maintain its core functions independent of individual leadership, political pressure, or resource shocks. It applies to specific institutions — electoral commissions, courts, anti-corruption bodies, regulatory agencies, public enterprises — rather than to geographic administrative units.

IRR scores classify institutions into four categories: Anchor (80–100, stable and autonomous), Hub (60–79, moderately resilient), Transitional (40–59, vulnerable to capture or collapse), and Shell (0–39, exists primarily as a vehicle for specific individuals or factions).
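The published category bands map directly to code; a minimal sketch:

```python
def irr_category(score):
    """Map an IRR score (0-100) to its published resilience category."""
    if not 0 <= score <= 100:
        raise ValueError("IRR scores are bounded to 0-100")
    if score >= 80:
        return "Anchor"        # stable and autonomous
    if score >= 60:
        return "Hub"           # moderately resilient
    if score >= 40:
        return "Transitional"  # vulnerable to capture or collapse
    return "Shell"             # vehicle for individuals or factions
```

For example, `irr_category(62)` returns `"Hub"` and `irr_category(39)` returns `"Shell"`.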

3.2 Components

Operational Fidelity (OF)

Measures adherence to the institution’s own internal rules — its constitution, standard operating procedures, service charter, and governing legislation. Sub-indicators: Rule Publication and Enforcement (whether required rules are publicly accessible and consistently applied) and Decisional Consistency (whether internal decisions are overturned for failure to follow own procedures).

Primary data sources: Institution websites, Political Parties Dispute Tribunal records (Kenya), court judgments on institutional compliance, annual reports, gazette notices.

Institutional Autonomy (IA)

Measures the degree to which the institution operates independently of external actors. Covers Decision-Making Freedom (proportion of decisions reversed by external pressure) and Leadership Autonomy (proportion of leadership positions filled through statutory processes versus executive discretion).

Primary data sources: Gazette notices, court judgments on institutional independence, parliamentary committee records, budget allocation patterns, public statements tracked for consistency with prior institutional positions.

Succession Velocity (SV)

Measures how quickly an institution restores normal operations following leadership transitions. Covers statutory compliance with transition timelines, continuity of core functions during transitions, and knowledge transfer completeness.

Primary data sources: Gazette notices (appointment and departure dates), institution annual reports, parliamentary committee testimony, media monitoring of operational disruptions during transitions.

4. The Civic Intelligence Score (CIS)

4.1 Definition

The Civic Intelligence Score measures the quality of public information access and civic participation around governance processes — whether citizens can access governance information within statutory timelines, whether public participation mechanisms function as legally required, and whether complaint and redress systems are accessible and responsive.

4.2 Components

Public Accessibility (PA)

Measures whether governance-relevant information is publicly accessible within statutory timelines — budget documents, audit reports, meeting minutes, procurement notices, and environmental impact assessments. Scored as the proportion of required disclosures published on time and in accessible formats.
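The proportion-based scoring can be sketched as follows. The field names (`published`, `deadline`, `accessible_format`) are hypothetical, and the accessible-format requirement is simplified to a boolean for illustration.

```python
from datetime import date

def pa_score(disclosures):
    """Illustrative PA score: share of required disclosures published
    on or before the statutory deadline, in an accessible format."""
    if not disclosures:
        return None  # insufficient data is declared, not imputed
    on_time = sum(
        1 for d in disclosures
        if d["published"] is not None
        and d["published"] <= d["deadline"]
        and d.get("accessible_format", False)
    )
    return round(100 * on_time / len(disclosures), 1)

# Hypothetical quarter: one timely disclosure, one late, one missing.
docs = [
    {"published": date(2026, 1, 10), "deadline": date(2026, 1, 31), "accessible_format": True},
    {"published": date(2026, 3, 5),  "deadline": date(2026, 2, 28), "accessible_format": True},
    {"published": None,              "deadline": date(2026, 2, 28)},
]
```

Here `pa_score(docs)` yields 33.3, since only one of three required disclosures was timely and accessible.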

Participation Compliance (PC)

Measures whether legally required public participation processes are conducted and whether submissions are demonstrably considered. Covers budget public participation forums, environmental impact assessment consultations, and legislative public hearings.

Redress Transparency (RT)

Measures the accessibility and responsiveness of complaint and redress mechanisms — whether citizens can file complaints, whether complaints receive responses within statutory timelines, and whether outcomes are communicated publicly.

5. Composite Index Architecture

The three indices — GQI, IRR, and CIS — combine to produce the Mulembe Sentinel Index score for a given entity at a given point in time. The aggregation methodology is proprietary. The following aspects are disclosed:

  • All component scores are normalised to a 0–100 scale before aggregation
  • Aggregation weights differ by entity type — a court is weighted differently from a county government because the relevant governance functions differ
  • Confidence intervals are calculated for each component and propagated through to the composite score
  • Scores with insufficient data coverage are flagged with a data quality indicator rather than imputed
  • All composite scores are stored with measurement date, period type, data source, and confidence score in the Mulembe Nation structured database
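The disclosed aspects of this architecture (0–100 normalisation, flagging thin coverage rather than imputing) can be illustrated in a sketch. The equal weights below are purely a placeholder standing in for the proprietary weighting, and the 80% coverage threshold is an invented example value.

```python
def normalise(value, lo, hi):
    """Rescale a raw component value to the 0-100 band used before aggregation."""
    if hi == lo:
        raise ValueError("degenerate range")
    return 100 * (value - lo) / (hi - lo)

def composite(components, min_coverage=0.8):
    """Illustrative composite assembly. Equal weights are a placeholder
    for the proprietary weighting; thin coverage is flagged, never imputed."""
    scored = [c for c in components if c["score"] is not None]
    coverage = len(scored) / len(components)
    if coverage < min_coverage:
        return {"score": None, "flag": "insufficient_data", "coverage": coverage}
    mean = sum(c["score"] for c in scored) / len(scored)
    return {"score": round(mean, 1), "flag": "ok", "coverage": coverage}

full = [{"score": 70.0}, {"score": 80.0}, {"score": 90.0}]
thin = [{"score": 70.0}, {"score": None}, {"score": None}]
```

With all three components present, `composite(full)` returns a score of 80.0; with only one of three, `composite(thin)` returns no score and the `insufficient_data` flag.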

6. Data Infrastructure

Sentinel Index scores are produced and stored in the Mulembe Nation context graph — a structured relational database connecting entities (people, organisations, places, events) to metrics, relationships, and published content. Each score is a record in the metrics table with fields for entity, metric type, value, unit, measurement date, period type, and data source. This architecture enables time-series tracking of governance quality over multiple periods, cross-entity comparison within a given scale and period, API query by entity, metric type, geography, or date range, and linkage between index scores and the published articles, court filings, and audit reports that underlie them.

The data infrastructure is built on MySQL with a Python data pipeline for ingestion and a WordPress publishing layer for editorial presentation. API access to the structured data is available to licensed data clients. See the partnerships page for licensing details.
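A minimal Python sketch of the metrics-table record and a time-series query described above. The field names are inferred from the prose, not the actual MySQL schema, and the sample records are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MetricRecord:
    """One row of the metrics table as described in the text.
    Field names are inferred, not the production schema."""
    entity: str
    metric_type: str
    value: float
    unit: str
    measurement_date: date
    period_type: str
    data_source: str
    confidence: float

def time_series(records, entity, metric_type):
    """Time-series view for one entity and metric, oldest first."""
    return sorted(
        (r for r in records if r.entity == entity and r.metric_type == metric_type),
        key=lambda r: r.measurement_date,
    )

# Hypothetical quarterly GQI scores for one county.
records = [
    MetricRecord("Nakuru County", "GQI", 64.2, "index_0_100",
                 date(2026, 3, 31), "quarterly", "official", 0.82),
    MetricRecord("Nakuru County", "GQI", 61.5, "index_0_100",
                 date(2025, 12, 31), "quarterly", "official", 0.80),
]
series = time_series(records, "Nakuru County", "GQI")
```

Storing each score as a dated, sourced record is what makes the cross-period tracking and API queries described above straightforward.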

7. Validation Strategy

7.1 Internal validation (current stage)

Logical consistency checks. Do scores move in expected directions when governance events occur? When a court is publicly found to have violated its own procedures, does the Operational Fidelity sub-component of the IRR score decline?

Historical backtesting. GQI calculations for past periods with known outcomes — the 2017 and 2022 Kenyan electoral cycles, the 2021 Ugandan presidential election — allow validation against documented governance events.

7.2 External validation (in development)

Correlation with established indices. Where World Bank sub-national governance data exists, county-level GQI scores will be compared for directional consistency. We will also compare against the Mo Ibrahim Index of African Governance at national level and V-Dem electoral integrity scores where available.

Predictive validity testing. Beginning Q3 2026, we will publish quarterly accuracy reports tracking whether governance events predicted by low Sentinel scores occurred within the stated confidence interval timeframe.

Expert review. We are seeking governance scholars at East African universities to review the methodology. Substantive critiques will be published alongside our responses and incorporated into Version 1.1 where warranted. Submit review interest to lab@mulembenation.co.ke.

7.3 Known limitations of Version 1.0

  • Single-analyst scoring. No inter-coder reliability testing has been conducted yet. Component scoring currently reflects editorial judgment with source citation rather than multi-analyst consensus. This will be addressed in Version 1.1 through a structured calibration exercise.
  • Data availability variance. Official data quality varies significantly across the 12 countries on the coverage roadmap. Kenya has substantially more granular public audit and electoral data than DRC or Burundi. Scores for lower-data environments carry wider confidence intervals and are clearly flagged.
  • Community data integration. The mechanism for incorporating community-submitted governance data into scores is operational but not yet at scale. Community data is currently treated as a secondary source flagged separately from official data.
  • No peer review. Version 1.0 has not been submitted for academic peer review. Expert consultation is ongoing and will inform Version 1.1.
  • Weighting empirical basis. Component aggregation weights in Version 1.0 are based on expert judgment. Version 1.2 will incorporate empirical optimisation against historical outcomes to validate or revise the current weighting approach.

8. Geographic Coverage and Roadmap

Country     | GQI                         | IRR                       | CIS          | Target version
Kenya       | County + constituency level | Key national institutions | County level | v1.0 — current
Uganda      | District level              | Key national institutions | National     | v1.1 — Q3 2026
Rwanda      | District level              | Key national institutions | National     | v1.1 — Q3 2026
Tanzania    | Region level                | Selected institutions     | National     | v1.2 — Q4 2026
Burundi     | Province level              | Selected institutions     | National     | v1.2 — Q4 2026
DRC         | Province level              | Selected institutions     | National     | v2.0 — 2027
Zambia      | Province level              | Selected institutions     | National     | v2.0 — 2027
Malawi      | District level              | Selected institutions     | National     | v2.0 — 2027
Ethiopia    | Region level                | Selected institutions     | National     | v2.0 — 2027
South Sudan | State level                 | Key national institutions | National     | v2.0 — 2027
Somalia     | Federal member state level  | Selected institutions     | National     | v2.1 — 2028
Mozambique  | Province level              | Selected institutions     | National     | v2.1 — 2028

9. Data Licensing and Access

  • Editorial use (free). Journalists and researchers may cite Sentinel scores in published work with attribution to Mulembe Nation and a link to this methodology document.
  • Research licensing. Structured data access for academic research. Attribution required. From $500 per year. Contact partnerships@mulembenation.co.ke.
  • Commercial licensing. Full data feeds, API access, and custom queries for investment, risk, and development finance use. Includes full methodology disclosure under NDA. Contact partnerships@mulembenation.co.ke.
  • Custom index reports. Sponsored reports covering a specific geography or institution type. Methodology published in full. Funder does not influence findings. From $2,500 per report.

10. Methodology Feedback and Contribution

This methodology is published for critique. If you identify a conceptual error, a problematic operationalisation, a missing data source, or a significant limitation not acknowledged here, we want to hear from you.

Submit methodology feedback to lab@mulembenation.co.ke with the subject line “Sentinel Methodology v1.0”. All substantive feedback receives a written response. Accepted corrections are published in the version changelog. The version history in this document is permanent — no silent revisions.

If you are a governance scholar, data scientist, or practitioner with relevant expertise and want to contribute to the methodology review, contact partnerships@mulembenation.co.ke.


Mulembe Politics is a flagship vertical of the Mulembe Nation Network, published by Lwandaz Tales Media, Nairobi, Kenya. The Sentinel Framework is proprietary to Lwandaz Tales Media. Component definitions in this document are published under Creative Commons BY-NC-SA 4.0. Aggregation methodology, component weights, and sub-index formulas are proprietary and not covered by this licence. Last updated: April 2026. Methodology version: 1.0.