Seventh case study · Methodology coda · Tri-Covenant Watch

How Does a Platform That Audits Government Audit Itself?

Tri-Covenant Watch quantifies government silence. It also quantifies its own. The link_audit tool drove dead_anchor from 88 down to 50 — a concrete demonstration that structural critique requires structural self-discipline.

2026-05-06 · Tri-Covenant Watch Editorial · CC BY 4.0

The first six case studies surfaced six cross-covenant structural findings: 0 segments on violence against disabled women, 7 on reasonable accommodation, 0 on disabled indigenous people. But if this platform only critiqued the government without auditing itself, it would be just another discourse that no one claims responsibility for. So we turned the same quantitative tooling on ourselves and made link quality the 7th indicator. Wave 162 baseline: 88 dead_anchor. After Wave 165: 50.

88 → 50
Three-platform dead_anchor count, from the Wave 162 baseline to the Wave 165 final (a 43% repair rate)

Why platform self-audit?

Tri-Covenant Watch's core thesis is: government silence is measurable. But this thesis only holds if our own tools are not silent. If our three platforms have 88 broken internal links on the home pages, we lose the standing to criticize logical gaps in government documents.

Structural critique requires structural self-discipline.
We demand discourse precision from the government: disability-disaggregated statistics, cross-covenant Lists of Issues with mutual citation, implementation of CRC GC-22 §32-34. If our own discourse precision (broken links, dead anchors, stale subdomains) does not meet the same standard, the critique loses legitimacy.

How we did it

During Waves 160-166, we built a 150-line Python script (link_audit.py) that performs a full-tree scan of the deployed _public/ directory.
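The core of such a scan can be sketched in a few lines. This is an illustrative reconstruction, not the real link_audit.py: the actual script's CLI, regexes, and JSON schema are not shown in this article, and only the dead_anchor category (a same-page `href="#…"` whose target id is missing) is modeled here.

```python
# Hypothetical sketch of link_audit.py's dead_anchor pass.
# Scans every .html file under a root and flags same-page anchor
# links whose target id does not exist in that file.
import re
from pathlib import Path

ID_RE = re.compile(r'id="([^"]+)"')        # ids declared anywhere in the page
ANCHOR_RE = re.compile(r'href="#([^"]+)"')  # same-page anchor links

def audit(root: str) -> dict:
    """Return a snapshot dict: total dead_anchor count plus per-finding details."""
    findings = []
    for page in Path(root).rglob("*.html"):
        html = page.read_text(encoding="utf-8", errors="ignore")
        ids = set(ID_RE.findall(html))
        for target in ANCHOR_RE.findall(html):
            if target not in ids:
                findings.append({"file": str(page), "anchor": target})
    return {"dead_anchor": len(findings), "details": findings}

# Usage: audit("_public") -> {"dead_anchor": 88, "details": [...]}
```

A regex scan like this over-simplifies (it ignores ids declared via JS, which is exactly why the CRC pages below needed a hash router), but it is enough to make the count monotonically checkable month over month.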

The script outputs a JSON history (one snapshot per month) and auto-renders to QA · Link Audit Dashboard. Step [5/5] of run_monthly_pipeline.sh runs it on the 1st of each month.

Three key fixes from baseline → 50

CRPD: 102 pages add id="main"

-102

102 CRPD sub-pages had an href="#main" accessibility skip link, but the <main> element lacked id="main". Keyboard users who activated the skip link landed nowhere.
Fix: sed-replaced <main> with <main id="main">
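The article only says the fix was a sed replacement; a Python equivalent of that bulk pass (hypothetical paths, same one-line substitution) would look like this:

```python
# Bulk-add id="main" to every <main> element that lacks one,
# mirroring the sed pass described above.
from pathlib import Path

def add_main_id(root: str) -> int:
    """Rewrite <main> to <main id="main"> across all .html files; return files changed."""
    changed = 0
    for page in Path(root).rglob("*.html"):
        html = page.read_text(encoding="utf-8")
        # Skip pages that already declare id="main" anywhere.
        if "<main>" in html and 'id="main"' not in html:
            page.write_text(html.replace("<main>", '<main id="main">', 1),
                            encoding="utf-8")
            changed += 1
    return changed
```

The idempotence check matters: re-running the monthly pipeline must not double-patch already-fixed pages.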

Remove CEDAW templates/_layout_pi.html

-2

Build templates shouldn't be deployed (GitHub Pages skips _-prefixed files anyway, so they 404). The template contained dead anchors but was never actually served.
Fix: git rm

CEDAW index.html add 4 invisible anchors

-38

The footers of 19 PI detail pages linked to ../index.html#axis3 and #axis4, but index.html had no matching ids.
Fix: Added <a id="axis3" style="position:absolute;left:-9999px;"> and 3 others as invisible markers

Three fixes combined: 88 → 50. The remaining 50 mostly involve the CRC platform's crc-articles.html#PI-XX links (crc-articles.html loads PI data dynamically with JS, so no static ids exist): a structural issue that requires a JS hash router to fix.

From tool to KPI: 7th indicator joins monthly tracker

Wave 166 promoted dead_anchor from an internal audit to the 7th indicator on monthly-tracker:

📅 Monthly Action Tracker originally had 6 indicators (6 structural findings). Wave 166 added the 7th:
🩺 Platform Health · dead_anchor: baseline 50 → target 0 (2027-Q1)

This indicator is inverted (lower is better) and is read automatically from link_audit.json. Any commit that introduces a new broken anchor immediately surfaces as "platform health declining" on the dashboard, creating a self-regulation loop.
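Reading an inverted indicator off the snapshot history can be sketched as follows. The JSON schema here (a list of monthly snapshots, each with a `dead_anchor` field) is an assumption; the article does not show link_audit.json's actual layout.

```python
# Sketch: derive the dashboard's health direction from the last two
# monthly snapshots of a lower-is-better indicator.
import json

def health_status(history_json: str) -> str:
    """Return 'improving', 'declining', or 'flat' for the dead_anchor KPI.
    Assumed schema: '[{"dead_anchor": 88}, {"dead_anchor": 50}, ...]'."""
    history = json.loads(history_json)
    latest, previous = history[-1]["dead_anchor"], history[-2]["dead_anchor"]
    if latest < previous:
        return "improving"
    if latest > previous:
        return "declining"
    return "flat"

# health_status('[{"dead_anchor": 88}, {"dead_anchor": 50}]') -> "improving"
```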

Three meanings

1. Credibility signal

NGOs scrutinize NGOs, and UN reviewers assess the credibility of NGO evidence. A platform that quantifies silence in government discourse while still carrying 50 broken links of its own forfeits half its credibility. Publishing the self-audit as a visible 7th indicator tells readers: "Our numbers accept the same standards of scrutiny."

2. Engineering discipline

Monthly audits ensure future new pages / new waves don't introduce regressions. link_audit.py is integrated into step [5/5] of run_monthly_pipeline.sh, refreshing alongside the 6 structural findings.

3. Example for other NGOs

DEVELOPER_GUIDE.md directs other NGOs replicating this architecture to adopt link_audit as well: make self-audit a first-class citizen from day one, not a later retrofit.

Conclusion: silence is bidirectional

Government silence is the 0 segments in government documents. Our silence was the 88 broken links. Both are measurable, trackable, and accountable. The only difference is who admits it first.

We admitted 88, then drove it to 50, target 0. And the government?

📌 Wave 169 postscript (2026-05-06): After this piece went live we re-ran link_audit and noticed we'd been watching only dead_anchor while ignoring broken_target (links to files that don't exist). CRC's 14 timeline pages cited ../../data/sources/*.md and ../../data/advocacy_actions/*.md — but those files lived in the parent repo, not the deployed _public/ sub-repo. broken_target fell from 3,624 to 67 (98% repair). Three fixes: (1) mirror data/sources/*.md + data/advocacy_actions/*.md into _public/ (380 KB, 42 files); (2) fix the relative-path logic in render_timeline.py; (3) for non-existent sources show grey "📄 source pending" text instead of an <a>. Three-platform totals: 3,713 → 156 (96% repair). Same standard applied to ourselves: admit, then repair.
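The broken_target category above (an href pointing at a file that doesn't exist under the deployed tree) can be checked with a path-resolution pass. This is an illustrative sketch, not the real tool: it only handles plain relative file hrefs and deliberately skips external and anchored links.

```python
# Sketch of a broken_target pass: resolve each relative href against
# the page's own directory and flag targets missing from disk.
import re
from pathlib import Path

HREF_RE = re.compile(r'href="([^"#]+)"')  # plain hrefs only; anchored links skipped

def broken_targets(root: str) -> list:
    """Return (page, href) pairs whose resolved target file does not exist."""
    missing = []
    for page in Path(root).rglob("*.html"):
        for href in HREF_RE.findall(page.read_text(encoding="utf-8", errors="ignore")):
            if "://" in href or href.startswith("mailto:"):
                continue  # external links are out of scope for this pass
            target = (page.parent / href).resolve()
            if not target.exists():
                missing.append((str(page), href))
    return missing
```

A pass like this is exactly what catches the ../../data/sources/*.md case: the relative path resolves fine in the parent repo but escapes the deployed _public/ sub-repo.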
📌 Wave 170-171 postscript (2026-05-06, 9 months ahead of schedule): Same discipline continued. Wave 170 zeroed out stale_subdomain (21) and platform_root_absolute (18). Wave 171 added 14 invisible id="PI-XX" markers to crc-articles.html and chain.html each, plus a small hash-router script — dead_anchor fell to 0. The 7th indicator's 2027-Q1 target was reached on 2026-05-06 — 9 months early. Three-platform totals: 156 → 10 (94% repair, only 10 small-scope broken_target placeholders remain). Lesson: when a quantitative standard is published and watched by readers, self-discipline accelerates. "We admitted 88, then drove it to 50" was the article's commitment; Wave 171 honored it down to 0. And the government?
📌 Wave 173-176 postscript: full zero achieved. After publishing the press kit (W173) and the benchmark table (W174), we again caught ourselves introducing broken links (PI-12 and sogiesc_frequency.html linking to platforms that don't host them). Wave 175 brought release notes up to date; Wave 176 cleaned them all in one pass: cross-platform links converted to absolute URLs, browsable index pages added for CRC's sources/ and advocacy_actions/ directories, and 4 placeholder hrefs converted to grey "(pending)" text labels. All four categories across three platforms: 0 issues. broken_target / dead_anchor / stale_subdomain / platform_root_absolute all at zero. From Wave 162 baseline of 3,751 down to Wave 176's 0 — across 14 waves. The double commitment: not only is what should be quantified made public, but when readers find blind spots we hadn't yet seen, those get admitted and repaired too.
📌 Wave 177-179 postscript: second self-audit KPI converges (WCAG 2.1 AA · 94% of files clean). With link health at zero, the same quantitative logic extends to web accessibility: a disability-rights platform's own accessibility shouldn't merely scrape past the bar.

W177 baseline: 921 HTML files, only 20 fully clean (2.2%), 27,084 findings. W178 fixed 218 C7 findings (missing lang) and 31 C6 findings (missing skip link), clearing both structural categories. W179 ran in two stages. Stage 1 rewrote the C1 audit: the prior matching was element-agnostic (every color:#fff got paired with the body background, so white-on-dark buttons were falsely flagged); refactoring it to be scope-aware (a color/background pair counts only when both sit in the same CSS rule or the same inline-style attribute) took the count from 26,866 to 157, meaning 99.4% of the findings were a tooling artifact. Stage 2 ran a real palette pass: 8 low-contrast badge backgrounds (#C68822 / #E65100 / #2FA598 etc.) were replaced with WCAG-AA-safe darker variants (#8E5A0E / #B5371F / #1F6F66), patching 217 files across inline styles and <style> blocks and bringing the count from 157 to 55.
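What "WCAG-AA-safe" means here is checkable with the contrast-ratio formula from the WCAG 2.1 specification itself (the formula below is the spec's; the assumption that the badge text is white is ours, since the article doesn't say):

```python
# WCAG 2.1 relative luminance and contrast ratio.
# AA requires a ratio of at least 4.5:1 for normal-size text.
def _lum(hex_color: str) -> float:
    """Relative luminance of an sRGB hex color, per WCAG 2.1."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    # Linearize each channel (the spec's piecewise sRGB transfer function).
    lin = [c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
           for c in (r, g, b)]
    return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]

def contrast(fg: str, bg: str) -> float:
    """Contrast ratio (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = sorted((_lum(fg), _lum(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Assuming white badge text: the old #C68822 background fails AA
# (ratio ≈ 3.0), while the replacement #8E5A0E passes (ratio ≈ 5.8).
```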

W180 batch-bumped every body font-size below 16px up to 16px (18px under kids/*) across 295 files, bringing C2 to zero as well. All 3 platforms: 924/924 files clean (100%). The WCAG 2.1 AA self-audit KPI converged from 27,084 to 0 across 4 waves.

Result · both self-audit KPIs at zero: beyond self-quantification, the auditing tool itself faced scrutiny. Wave 179's biggest finding was admitting that W177-178's "26,866 violations" figure was 99.4% measurement artifact, not real accessibility failure. Honest numbers matter more than impressive numbers. The same question, now asked with our own ledgers at zero: a government-monitoring platform quantifies government-document silence (0 segments), its own link health (0 issues), and its own accessibility (0 findings). And the government?

Released under CC BY 4.0. Free to reproduce / adapt with attribution. Suggested citation:
Tri-Covenant Watch. (2026-05-06). "How Does a Platform That Audits Government Audit Itself?" cedaw.taiwanmommies.org/blog/2026-05-06-platform-self-audit-en.html