The first six case studies revealed six cross-covenant structural findings: 0 segments on violence against disabled women, 7 on reasonable accommodation, 0 on disabled indigenous persons. But if this platform only critiqued government without auditing itself, it would be just another "no one claims" discourse. So we ran the same quantitative tooling on ourselves and made link quality the 7th indicator. Wave 162 baseline: 88 dead_anchor findings. After Wave 165: 50.
Why platform self-audit?
Tri-Covenant Watch's core thesis is: government silence is measurable. But this thesis only holds if our own tools are not silent. If our three platforms have 88 broken internal links on the home pages, we lose the standing to criticize logical gaps in government documents.
We demand disability-disaggregated statistics from the government, cross-covenant List of Issues with mutual citation, CRC GC-22 §32-34 implementation — all of which require discourse precision from the government. If our own discourse precision (broken links, dead anchors, stale subdomains) doesn't meet the same standard, the critique loses legitimacy.
How we did it
During Wave 160-166, we built a 150-line Python script (link_audit.py) for full-tree scanning of the entire _public/ directory:
- Parse all href attributes to find relative and internal links
- Parse all id attributes to build a file → ids map
- For each link: does the target file exist? Does the #anchor actually exist in that file?
- Categorize into 4 types: broken_target (link to a non-existent file) / dead_anchor (#xxx pointing to a non-existent id) / stale_subdomain (outdated domains) / platform_root_absolute (/foo absolute paths that only work on the canonical deployment)
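The scan itself fits in a short sketch. This is a simplification of link_audit.py, not the real script: it uses regexes instead of a proper HTML parser and omits the stale_subdomain check.

```python
import re
from pathlib import Path

HREF_RE = re.compile(r'href="([^"]+)"')
ID_RE = re.compile(r'id="([^"]+)"')

def audit(root: Path) -> list[dict]:
    """One finding per broken internal link under root.

    Sketch of link_audit.py: regex-based, stale_subdomain omitted.
    """
    pages = {p.resolve(): p.read_text(encoding="utf-8")
             for p in root.rglob("*.html")}
    # file -> set of ids defined on that page
    ids = {p: set(ID_RE.findall(html)) for p, html in pages.items()}
    findings = []
    for page, html in pages.items():
        for href in HREF_RE.findall(html):
            if href.startswith(("http://", "https://", "mailto:")):
                continue  # external links are out of scope
            if href.startswith("/"):
                # /foo only resolves on the canonical deployment
                findings.append({"page": str(page), "href": href,
                                 "type": "platform_root_absolute"})
                continue
            path_part, _, anchor = href.partition("#")
            target = (page.parent / path_part).resolve() if path_part else page
            if not target.exists():
                findings.append({"page": str(page), "href": href,
                                 "type": "broken_target"})
            elif anchor and anchor not in ids.get(target, set()):
                findings.append({"page": str(page), "href": href,
                                 "type": "dead_anchor"})
    return findings
```

The real script additionally writes each run as a monthly snapshot into a JSON history, which the dashboard renders.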
The script outputs a JSON history (one snapshot per month) and auto-renders to QA · Link Audit Dashboard. Step [5/5] of run_monthly_pipeline.sh runs it on the 1st of each month.
Three key fixes from baseline → 50
CRPD: add id="main" to 102 pages
102 CRPD sub-pages had an href="#main" accessibility skip-link, but the <main> element lacked id="main", so keyboard users who tabbed to the skip-link landed nowhere.
Fix: sed-replaced <main> with <main id="main"> across all 102 pages.
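The same pass in Python, for readers who want to reproduce it (a sketch; the actual fix was a sed one-liner over the CRPD tree):

```python
from pathlib import Path

def add_main_id(root: Path) -> int:
    """Give every bare <main> an id="main" so href="#main" skip-links land.

    Returns the number of pages patched; pages that already carry the id
    are left alone.
    """
    patched = 0
    for page in root.rglob("*.html"):
        html = page.read_text(encoding="utf-8")
        if "<main>" in html and 'id="main"' not in html:
            page.write_text(html.replace("<main>", '<main id="main">', 1),
                            encoding="utf-8")
            patched += 1
    return patched
```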
Remove CEDAW templates/_layout_pi.html
Build templates shouldn't be deployed (GitHub Pages skips _-prefixed files anyway → 404). The template contained dead anchors but was never actually served.
Fix: git rm
CEDAW: add 4 invisible anchors to index.html
19 PI detail pages' footers linked to ../index.html#axis3 and #axis4, but index.html had no matching ids.
Fix: added <a id="axis3" style="position:absolute;left:-9999px;"></a> and 3 others as invisible markers.
The three fixes combined took us from 88 to 50. The remaining 50 mostly involve the CRC platform's crc-articles.html#PI-XX links (crc-articles loads PI data dynamically via JS, so there are no static ids), a structural issue that requires a JS hash router to fix.
From tool to KPI: 7th indicator joins monthly tracker
Wave 166 promoted dead_anchor from an internal audit to the 7th indicator on monthly-tracker:
🩺 Platform Health · dead_anchor: baseline 50 → target 0 (2027-Q1)
This indicator is inverted (lower is better) and reads from link_audit.json automatically. Any commit that introduces a new broken anchor immediately surfaces as "platform health declining" on the dashboard, creating self-regulation.
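The dashboard read can be sketched like this. The snapshot schema below is an assumption for illustration; the real link_audit.json fields may differ.

```python
import json
from pathlib import Path

def dead_anchor_status(history_path: Path) -> dict:
    """Latest dead_anchor count vs. the first recorded snapshot.

    Inverted KPI: lower is better, target is 0. Assumes the history file
    is a JSON list of snapshots, each with a 'counts' mapping per category
    (hypothetical schema).
    """
    snapshots = json.loads(history_path.read_text(encoding="utf-8"))
    baseline = snapshots[0]["counts"].get("dead_anchor", 0)
    latest = snapshots[-1]["counts"].get("dead_anchor", 0)
    return {
        "current": latest,
        "baseline": baseline,
        "target_met": latest == 0,
        "regression": latest > baseline,  # flags "platform health declining"
    }
```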
Three meanings
1. Credibility signal
NGOs scrutinize NGOs, and UN reviewers assess the credibility of NGO evidence. A platform that quantifies government discourse silence while carrying 50 broken links of its own forfeits half its credibility. Publishing the self-audit as the publicly visible 7th indicator tells readers: "Our numbers accept the same scrutiny standards."
2. Engineering discipline
Monthly audits ensure future new pages / new waves don't introduce regressions. link_audit.py is integrated into step [5/5] of run_monthly_pipeline.sh, refreshing alongside the 6 structural findings.
3. Example for other NGOs
DEVELOPER_GUIDE.md advises other NGOs that replicate this architecture to run link_audit as well: make self-audit a first-class citizen from day 1, not a later retrofit.
Conclusion: silence is bidirectional
Government silence shows up as 0 segments in government documents. Our silence showed up as 88 broken links. Both are measurable, trackable, and accountable. The only difference is who admits it first.
We admitted 88, then drove it to 50, target 0. And the government?
Update: readers then pointed out that we had been tracking dead_anchor while ignoring broken_target (links to files that don't exist). CRC's 14 timeline pages cited ../../data/sources/*.md and ../../data/advocacy_actions/*.md, but those files lived in the parent repo, not in the deployed _public/ sub-repo. broken_target fell from 3,624 to 67 (98% repaired). Three fixes: (1) mirror data/sources/*.md and data/advocacy_actions/*.md into _public/ (380 KB, 42 files); (2) fix the relative-path logic in render_timeline.py; (3) for sources that don't exist, show grey "📄 source pending" text instead of an <a>. Three-platform totals: 3,713 → 156 (96% repaired). The same standard applied to ourselves: admit, then repair.
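The grey-fallback fix can be sketched as a guard in the renderer. Names and signature here are illustrative; the real logic lives in render_timeline.py.

```python
import html
from pathlib import Path

def source_link(page_dir: Path, rel_href: str, label: str) -> str:
    """Emit an <a> only when the cited file actually exists in the deployed
    tree; otherwise degrade to grey 'source pending' text instead of
    shipping a broken_target link. (Hypothetical helper.)"""
    if (page_dir / rel_href).exists():
        return (f'<a href="{html.escape(rel_href, quote=True)}">'
                f'{html.escape(label)}</a>')
    return (f'<span style="color:#888">📄 source pending · '
            f'{html.escape(label)}</span>')
```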
Update (Wave 171): adding id="PI-XX" markers to crc-articles.html and chain.html, plus a small hash-router script, drove dead_anchor to 0. The 7th indicator's 2027-Q1 target was reached on 2026-05-06, 9 months early. Three-platform totals: 156 → 10 (94% repaired; only 10 small-scope broken_target placeholders remain). Lesson: once a quantitative standard is published and watched by readers, self-discipline accelerates. "We admitted 88, then drove it to 50" was this article's commitment; Wave 171 honored it down to 0. And the government?
Update (Wave 176): the last broken_target placeholders were cleared, with the missing files filled in under the sources/ and advocacy_actions/ directories and 4 placeholder hrefs converted to grey "(pending)" text labels. All four categories across all three platforms: 0 issues. broken_target / dead_anchor / stale_subdomain / platform_root_absolute all sit at zero, from the Wave 162 baseline of 3,751 down to Wave 176's 0, across 14 waves. The double commitment: not only is what should be quantified made public; when readers find blind spots we had not yet seen, those get admitted and repaired too.
W177 baseline: 921 HTML files, only 20 fully clean (2.2%), 27,084 findings.
- W178: fixed 218 C7 (missing lang attribute) + 31 C6 (missing skip-link) → both structural categories cleared.
- W179, stage 1: rewrote the C1 audit. The prior matching was element-agnostic (every color:#fff got paired with the body background, so white-on-dark buttons were falsely flagged); refactored to scope-aware matching, where a color/background pair counts only when both appear in the same CSS rule or inline-style attribute → 26,866 → 157 (99.4% turned out to be a tooling artifact).
- W179, stage 2: ran a real palette pass. Replaced 8 low-contrast badge backgrounds (#C68822 / #E65100 / #2FA598 etc.) with WCAG-AA-safe darker variants (#8E5A0E / #B5371F / #1F6F66), patching 217 files across inline styles and <style> blocks → 157 → 55.
- W180: batch-bumped body font-size below 16px up to 16px (18px for kids/* pages) across 295 files → C2 also reaches zero.

All 3 platforms: 924/924 files clean (100%). The WCAG 2.1 AA self-audit KPI converged from 27,084 to 0 across 4 waves.
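The bar the palette pass targeted is the WCAG 2.1 contrast ratio (4.5:1 for normal text under AA). The check reduces to a few lines:

```python
def _linear(c8: int) -> float:
    # sRGB 8-bit channel -> linear value, per the WCAG 2.1
    # relative-luminance formula
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a #RRGGBB color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast(fg: str, bg: str) -> float:
    """WCAG contrast ratio, always >= 1; 4.5:1 is the AA bar for normal text."""
    hi, lo = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)
```

On white, the old badge orange #C68822 comes out around 3.0:1 and fails AA, while its replacement #8E5A0E comes out around 5.8:1 and passes.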
Result · two self-audit KPIs both at zero:
- link_audit: W162-W176 / 14 waves / 3,751 → 0
- WCAG 2.1 AA: W177-W180 / 4 waves / 27,084 → 0
Released under CC BY 4.0. Free to reproduce / adapt with attribution. Suggested citation:
Tri-Covenant Watch. (2026-05-06). "How Does a Platform That Audits Government Audit Itself?" cedaw.taiwanmommies.org/blog/2026-05-06-platform-self-audit-en.html