We Need to Support Authors Better to Deliver Accessible Content
From ATAG 2.0 to Local AI & Automated WCAG-EM
Mike Gifford
4:05pm, Jan 31, 2026 (K.3.401) | FOSDEM 2026
Collaboration & Content Management
Thank organizers. Drupal Core Accessibility Maintainer. Author of the chapter "Digital Accessibility and Open Source Need Each Other" in Digital Accessibility Ethics: Disability Inclusion in All Things Tech.
Why authors: Most accessibility failures originate during authoring (structure, links, images,
tables, headings, media alternatives). Audits only detect the outcomes.
Scope of talk: ATAG 2.0 Part B (author support), We4Authors work, and what local AI changes
(context-aware guidance at the moment of authoring).
Key claim this talk defends: Accessible content at scale requires better tooling defaults and
in-context guidance, not more one-off training.
The Definition of Insanity
The Cycle of Failure:
- We train authors → they leave → we retrain
- Authors are SMEs, not accessibility experts
- The CMS as filter — If the filter allows garbage, we get garbage
- Focus on authors, not auditors
Prevention beats remediation. CMS is the gate.
Claims behind the bullets:
- Training churn: Organizations repeatedly retrain because authors change roles or leave.
Tooling must carry knowledge forward.
- Authors are SMEs: Subject matter experts are not expected to be accessibility specialists.
The system must make the accessible path the easy path.
- CMS as filter: If the editor allows missing headings, bad link text, empty alt, broken
table structure, inaccessible embeds, you will publish those defects at scale.
- Shift focus: Support authors where decisions are made, instead of relying on auditors after
publication.
Supportive standards context: Authoring Tool Accessibility Guidelines (ATAG) explicitly frames
authoring tools as responsible for both accessible UIs and helping produce accessible content.
https://www.w3.org/TR/ATAG20/
The Missing Standard: ATAG 2.0
Part A: The Editor UI
The authoring interface itself must be accessible.
~25% of the population has a disability (including authors).
Part B: Support Authors
Accessible content creation can be hard.
Tools need to help content authors to produce accessible output.
ATAG: A = accessible editor. B = help authors.
ATAG 2.0 structure:
- Part A: the authoring interface itself is accessible (authors can have disabilities too).
- Part B: the tool supports producing accessible output (prompts, checks, repair help,
accessible defaults).
Why “missing standard”: WCAG is widely discussed, but ATAG adoption is limited. Many systems
treat author support as documentation, not product requirements.
Reference: ATAG 2.0 (W3C Recommendation). https://www.w3.org/TR/ATAG20/
Move from Gatekeeping to Guiding
- ATAG 2.0 is now over a decade old, predating the current LLM acceleration
- Code on its own can provide guardrails, but its guidance is limited
- AI lets us provide more customized assistance to authors
- We don't need more AI-generated content; we need better help for people
Stop policing. Start guiding in context.
Claims behind the bullets:
- ATAG 2.0 predates today’s LLM tooling. The original guidance assumed mostly static rules and limited context
awareness.
- Rules-based guardrails are useful but shallow. They catch “missing alt” but not “bad alt for this image and
this page goal.”
- Local AI can provide contextual assistance at authoring time: suggestions, checks, and coaching tied to the
actual draft content.
- This is not about generating more content. It is about better decisions by humans, supported by tooling.
Reference for the standard basis: ATAG 2.0. https://www.w3.org/TR/ATAG20/
The We4Authors Lesson
- Funka led an initiative to prepare for the European Accessibility Act (EAA)
- Several CMS & editor projects aligned on basic defaults
- The focus was on building a better UI & doing user research with authors
- Four years ago, the possibilities of AI seemed far away
We4Authors: good defaults, low adoption.
Claims behind the bullets:
- We4Authors was a cross-tool effort to improve authoring experiences and defaults ahead of the European
Accessibility Act context.
- The emphasis was user research with authors and practical defaults, not only compliance checklists.
- At the time, the missing piece was scalable contextual guidance inside authoring tools.
Reference (DrupalCon Europe session):
https://events.drupal.org/europe2020/sessions/top-cms-tools-are-working-together-build-more-inclusive-world.html
Related talk (FOSDEM archive):
https://archive.fosdem.org/2023/schedule/event/accessibility_and_open_source/
Outstanding Opportunities
Biggest opportunities for improvement in authoring tools:
- Change language
- Table creator
- Text alternative (ALT text)
- Forms editor
- Video
- Documentation
- Live testing while authoring
- Testing of documents
- Testing the content of pages
- Testing the whole website
List the big gaps. Point to archive.
What these opportunities represent: recurring high-impact failure modes where authoring tools
can prevent defects before publication.
- Language: correct language tags, mixed-language spans, and author prompts.
- Tables: header association, captions, scope, and discouraging layout tables.
- Alt text: presence, quality checks, and context-aware suggestions.
- Forms: labels, instructions, error prevention, and error recovery support.
- Video: captions, transcripts, audio description prompts, and publishing gates.
- Testing: live checks while authoring, plus document and site-level evaluations.
Reference archive: https://mgifford.github.io/We4Authors/
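One of these gaps, alt text, can be partially covered with simple authoring-time heuristics. Below is a minimal sketch of a presence-and-quality check; the suspect patterns, length threshold, and function name are illustrative assumptions, not drawn from any of the tools named in this talk.

```python
import re

# Illustrative heuristics only: presence plus a few common quality failures.
SUSPECT_PATTERNS = [
    r"^\s*$",                                    # empty alt on an informative image
    r"^(image|photo|picture|graphic)( of)?\b",   # redundant "image of..." lead-ins
    r"\.(png|jpe?g|gif|svg|webp)$",              # filename pasted in as alt
]

def alt_text_warnings(alt: str, decorative: bool = False) -> list[str]:
    """Return warnings for a single image's alt text."""
    if decorative:
        # Decorative images should carry an explicitly empty alt.
        return [] if alt == "" else ['Decorative image should use alt="".']
    warnings = []
    for pattern in SUSPECT_PATTERNS:
        if re.search(pattern, alt.strip(), re.IGNORECASE):
            warnings.append(f"Alt text matches suspect pattern: {pattern}")
    if len(alt) > 250:
        warnings.append("Alt text is very long; consider a longer description elsewhere.")
    return warnings
```

A check like this is cheap enough to run on every keystroke, which is exactly where shift-left tooling belongs.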
Vision: The Moment of Authoring
Privacy-first AI pipeline:
- Draft stays inside the CMS boundary
- Quality of content is maintained
- No drafts leak to third parties
Admin should have choice of LLMs:
- Small Language Models (SLMs)
- Tools bundled with the CMS
- Optimized prompts for authoring context
- Guardrails for LLM inputs
Keep drafts local. Admin chooses model.
Claims behind the bullets:
- Privacy-first: Draft content often includes personal data, sensitive policy, or unpublished
material. Many organizations cannot send it to third-party AI services.
- CMS boundary: Keeping the pipeline local reduces legal and procurement friction and
supports regulated environments.
- Choice of models: Administrators need options (small language models, local deployments,
enterprise-approved models) and the ability to tune prompts for authoring context.
- Guardrails: Constrain inputs and outputs, log what was suggested, and require human
confirmation where it matters.
Why this matters: Local SLMs can be an open-source advantage if the community builds shared
modules, prompts, and evaluation datasets.
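The guardrail pattern described above (constrain inputs, log suggestions, require human sign-off) can be sketched as a thin wrapper around any local model. Everything here is a hypothetical placeholder: `suggest` stands in for whatever callable invokes your local SLM, and the audit log and confirmation flow would be real CMS hooks in practice.

```python
import time
from typing import Callable

def guarded_suggestion(
    suggest: Callable[[str], str],
    draft: str,
    audit_log: list,
    max_input_chars: int = 4000,
) -> dict:
    """Return a suggestion that is logged and explicitly unconfirmed."""
    # Input guardrail: never send more draft context than the task needs.
    clipped = draft[:max_input_chars]
    suggestion = suggest(clipped)
    record = {
        "timestamp": time.time(),
        "input_chars": len(clipped),
        "suggestion": suggestion,
        "confirmed_by_author": False,  # a human must flip this before publish
    }
    audit_log.append(record)
    return record

def confirm(record: dict, author: str) -> dict:
    """Explicit human sign-off step, required before the suggestion is used."""
    record["confirmed_by_author"] = True
    record["author"] = author
    return record
```

The design choice worth noting is that the unconfirmed state is the default: the pipeline cannot produce a publishable suggestion without a logged human action.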
Right Now: Shifting Left
Catching errors where they happen:
- Think Spellcheck
- Sa11y: Bookmarklet+
- Editoria11y: CMS integrated
- Both help authors fix content before it gets published
Why this matters:
- Reduces remediation costs significantly
- Empowers authors instead of policing them
- Cross-organization accessibility
Spellcheck, but for a11y. Sa11y, Editoria11y.
Claims behind the bullets:
- Shift left: Catch issues during authoring when fixes are cheap and obvious.
- Author empowerment: The tool should explain what to do and why, not just block publishing.
- Cross-organization impact: When checks are built into the workflow, you do not depend on
individual expertise or a single accessibility specialist.
Examples:
- Sa11y (in-page checking model): https://sa11y.netlify.app/
- Editoria11y (CMS integration model): https://editoria11y.princeton.edu/
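To make the "spellcheck for a11y" idea concrete, here is a minimal heuristic in the spirit of these tools: flagging link text that fails out of context. The phrase list and function name are illustrative assumptions, not taken from Sa11y or Editoria11y.

```python
# Illustrative phrase list; real checkers use larger, localized lists.
VAGUE_LINK_TEXT = {"click here", "here", "read more", "more", "link", "this"}

def check_link_text(link_texts: list[str]) -> list[str]:
    """Flag link text that is meaningless out of context (WCAG 2.4.4 spirit)."""
    issues = []
    for text in link_texts:
        normalized = text.strip().lower().rstrip(".!")
        if normalized in VAGUE_LINK_TEXT:
            issues.append(f"Vague link text: {text!r}; describe the destination.")
    return issues
```

Like spellcheck, the value is immediacy: the author sees the flag while the link is still in front of them.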
What More is Possible?
- Better Alt Text: In my previous evaluation, AI often outperformed humans (trust, but verify)
- Plain Language & Captions: Automated simplification and media alternatives
- Friction: Actually require human review
- Structural Hints: “You bolded this. Did you mean an H3?”
Make friction mandatory. AI drafts, humans sign off.
Claims behind the bullets:
- Alt text quality: AI can draft better starting points than many humans, but it still needs
verification for accuracy and relevance.
- Plain language & captions: AI can draft simplifications and media alternatives, but
accessibility risk rises if teams treat drafts as final.
- Friction as safety: Require explicit review actions (for example, a checkbox, a short
confirmation, or “explain why this is accurate”) before publish.
- Structural hints: Tools can detect common anti-patterns (bold used as headings, broken list
structure) and guide correction.
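The "bolded pseudo-heading" hint above can be sketched with a simple pattern over draft HTML. This is a sketch under stated assumptions: the draft arrives as HTML fragments, and the length cutoff and trailing-period heuristic are illustrative guesses at what separates a heading from an emphasized sentence.

```python
import re

# A short, fully bolded paragraph with no sentence-ending period is
# likely a faux heading.
BOLD_LINE = re.compile(r"^<p>\s*<(?:b|strong)>(.{1,80}?)</(?:b|strong)>\s*</p>$")

def heading_hints(html_lines: list[str]) -> list[str]:
    """Flag fully bolded paragraphs that are likely faux headings."""
    hints = []
    for line in html_lines:
        match = BOLD_LINE.match(line.strip())
        if match and not match.group(1).rstrip().endswith("."):
            hints.append(
                f"'{match.group(1)}' looks like a heading; "
                "use <h2>/<h3> instead of bold."
            )
    return hints
```

The hint phrasing matters as much as the detection: it asks a question the author can answer ("Did you mean an H3?") rather than issuing a violation.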
Reference (prior FOSDEM talk): Alternative Text for Images: How Bad Are Our ALT text Anyway?
https://archive.fosdem.org/2025/schedule/event/fosdem-2025-4709-alternative-text-for-images-how-bad-are-our-alt-text-anyway-/
Standards anchor: This aligns with ATAG Part B’s intent: tools should assist authors in
producing accessible content, not just warn after the fact. https://www.w3.org/TR/ATAG20/
Compliance is Science
- Authoring support reduces audit scope and failure rates
- Web Content Accessibility Guidelines - Evaluation Methodology
- WCAG-EM gives a repeatable sampling and evaluation process — AI can support this
- Python-ACR: a new tool to draft OpenACR documents
- These reports feed into the Web Accessibility Directive (WAD) & the European Accessibility Act (EAA)
WCAG-EM is the method. AI can speed sampling + reporting.
Claims behind the bullets:
- Authoring support reduces audit scope: fewer basic failures make evaluation faster and
increase pass rates on repeat audits.
- WCAG-EM: provides a repeatable evaluation methodology (define scope, explore site, select
representative sample, audit, report).
- AI support: can help with page discovery, sampling, evidence capture, and report drafting,
but not final conformance decisions.
- ACR tooling: Structured reports (OpenACR) reduce vendor spin and force disclosure of gaps
and evidence.
- Why it matters: These artifacts feed legal reporting and procurement expectations (WAD
statements in the public sector context, EAA service accessibility info in services context).
References:
- WCAG-EM (revised draft): https://w3c.github.io/wai-wcag-em/
- python-acr: https://github.com/mgifford/python-acr
- ACR editor (Section508.gov): https://acreditor.section508.gov/
Automate WCAG-EM
- You can't automate everything, but much more can be automated
- Automated crawlers pulling in key pages & random samples
- Organized by WCAG Success Criteria & written straight to JSON
- Manual testing, including testing with disabled users where appropriate, plus accessibility statements
Automate sampling + structure. Humans do judgment.
Claims behind the bullets:
- Full automation is not realistic. Many success criteria require human judgment and user testing.
- What can be automated: crawling, representative/random sampling, collecting evidence artifacts,
organizing findings by WCAG success criteria, and writing structured outputs (JSON) that tools can reuse.
- What stays manual: task testing, usability friction, cognitive load issues, assistive technology workflows,
and validation with disabled users where appropriate.
- The outcome should still include a public statement and a maintenance process, not a one-time report.
Reference: WCAG-EM draft: https://w3c.github.io/wai-wcag-em/
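The "organized by WCAG Success Criteria & written straight to JSON" step above can be sketched as follows. This is a minimal illustration of the WCAG-EM flow (scope, sample, audit, report), not the python-acr implementation; the page URLs, criteria keys, and field names are assumptions for the example.

```python
import json
import random

def build_report(all_pages: list[str], key_pages: list[str],
                 findings: dict[str, list[dict]], sample_size: int = 3,
                 seed: int = 0) -> str:
    """Combine key pages with a reproducible random sample and emit JSON."""
    random.seed(seed)  # fixed seed so repeat audits can reproduce the sample
    pool = [p for p in all_pages if p not in key_pages]
    sample = key_pages + random.sample(pool, min(sample_size, len(pool)))
    report = {
        "methodology": "WCAG-EM (draft)",
        "sample": sample,
        "results_by_success_criterion": findings,
        "manual_review_required": True,  # conformance decisions stay human
    }
    return json.dumps(report, indent=2)
```

Structured output like this is what lets downstream tooling (ACR drafting, statement generators) reuse the evidence instead of re-parsing prose reports.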
Move Together
- Pick one ATAG Part B practice and implement it in your editor or CMS
- Join the W3C ATAG Community Group
- Share your work & seek ways to collaborate
- Advocate for LLMs to be more inclusive
- Book Announcement: Digital Accessibility Ethics — Disability Inclusion in All Things Tech
Join ATAG CG. Share work.
Claims behind the bullets:
- ATAG Part B is implementable in pieces. Teams do not need to solve everything at once.
- The W3C ATAG Community Group is the coordination venue for shared patterns, requirements, and reference
implementations.
- Open collaboration is how we avoid each CMS building incompatible, proprietary authoring assistants.
- Inclusive LLM behavior is not automatic. Vendors optimize for “helpful,” not necessarily for accessibility,
accuracy, or safe author workflows.
Reference: Digital Accessibility Ethics — Disability Inclusion in All Things Tech.
- I am one of 39 authors from 10 countries contributing to Digital Accessibility Ethics: Disability Inclusion in All Things Tech, releasing later this spring. Together we bring over 600 years of combined accessibility and disability experience.
- The book introduces the first digital accessibility ethics framework.
- My chapter applies the framework to issues facing the open source community.
- Chapter title: "Digital Accessibility and Open Source Need Each Other".
- The framework centres 10 values essential to inclusion in open source.
- I focus on honesty, transparency, trust, accountability, awareness, education, and sustainability as core open source values.
- The framework is practical: actions to take and questions to ask to build more ethical open source projects.
- Learn more and pre-order: Digital Accessibility Ethics: Disability Inclusion in All Things Tech.
Resources
QR: commit to one concrete action.
Use these links as the “take-home kit”:
- ATAG 2.0: the standards basis for accessible authoring tools and author support.
https://www.w3.org/TR/ATAG20/
- ATAG Community Group: where coordination and modern tooling alignment is happening.
https://github.com/w3c-cg/atag
- WCAG-EM draft: the evolving evaluation methodology that can be partially automated.
https://w3c.github.io/wai-wcag-em/
- We4Authors archive: prior research, gaps, and recommendations.
https://mgifford.github.io/We4Authors/
Prompt: Scan the QR and write down one specific change you will implement (one ATAG Part B
practice, one editor check, one publishing gate, or one evaluation automation).
Questions
Invite: which CMS, which workflow, which constraint?
Q & A prompts:
- Which authoring surface: Drupal, WordPress, headless CMS, Git-based publishing, custom editor?
- Which constraint dominates: privacy/procurement, author skill levels, workflow complexity, or lack of
testing capacity?
- Where to start: pick one ATAG Part B feature and one WCAG-EM automation that reduces repeated manual work.