W3C Accessibility Guidelines (WCAG) 3.0 will provide a wide range of recommendations for making web content more accessible to users with disabilities. Following these guidelines will address many of the needs of users with blindness, low vision and other vision impairments; deafness and hearing loss; limited movement and dexterity; speech disabilities; sensory disorders; cognitive and learning disabilities; and combinations of these. These guidelines address accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. The guidelines apply to various types of web content including static, dynamic, interactive, and streaming content; visual and auditory media; virtual and augmented reality; and alternative access presentation and control. These guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.

Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple requirements and assertions to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each requirement or assertion.

This specification is expected to be updated regularly to keep pace with changing technology by updating and adding methods, requirements, and guidelines to address new needs as technologies evolve. For entities that make formal claims of conformance to these guidelines, several levels of conformance are available to address the diverse nature of digital content and the type of testing that is performed.

See WCAG 3.0 Introduction for an introduction and links to WCAG technical and educational material.

This is an update to the W3C Accessibility Guidelines (WCAG) 3.0. It includes a restructuring of the guidelines and first draft decision trees for three Guidelines: Clear meaning, Image alternatives, and Keyboard focus appearance.

To comment, file an issue in the W3C wcag3 GitHub repository. The Working Group requests that public comments be filed as new issues, one issue per discrete comment. Creating a GitHub account to file issues is free. If filing issues in GitHub is not feasible, email public-agwg-comments@w3.org (comment archive). In-progress updates to the guidelines can be viewed in the public editors’ draft.

Introduction

Summary

What’s new in this version of WCAG 3.0?

This draft includes an updated list of the potential Guidelines and Requirements that we are exploring. The list of Requirements is longer than the list of Success Criteria in WCAG 2.2. This is because:

The final set of Requirements in WCAG 3.0 will be different from what is in this draft. Requirements will be added, combined, and removed. We also expect changes to the text of the Requirements. Only some of the Requirements will be used to meet the base level of conformance.

The Requirements are grouped into the following sections:

The purpose of this update is to demonstrate a potential structure for guidelines and indicate the current direction of WCAG 3.0 conformance. Please consider the following questions when reviewing this draft:

To provide feedback, please file a GitHub issue or email public-agwg-comments@w3.org (comment archive).

About WCAG 3.0

This specification presents a new model and guidelines to make web content and applications accessible to people with disabilities. The W3C Accessibility Guidelines (WCAG) 3.0 support a wide set of user needs, use new approaches to testing, and allow frequent maintenance of guidelines and related content to keep pace with accelerating technology change. WCAG 3.0 supports this evolution by focusing on the functional needs of users. These needs are then supported by guidelines written as outcome statements, requirements, assertions, and technology-specific methods to meet those needs.

WCAG 3.0 is a successor to Web Content Accessibility Guidelines 2.2 [[WCAG22]] and previous versions, but does not deprecate WCAG 2. It will also incorporate some content from and partially extend User Agent Accessibility Guidelines 2.0 [[UAAG20]] and Authoring Tool Accessibility Guidelines 2.0 [[ATAG20]]. These earlier versions provided a flexible model that kept them relevant for over 15 years. However, changing technology and changing needs of people with disabilities have led to the need for a new model to address content accessibility more comprehensively and flexibly.

There are many differences between WCAG 2 and WCAG 3.0. The WCAG 3.0 guidelines address accessibility of web content on desktops, laptops, tablets, mobile devices, wearable devices, and other Web of Things devices. The guidelines apply to various types of web content, including static, dynamic, interactive, and streaming content; visual and auditory media; virtual and augmented reality; and alternative access presentation and control. These guidelines also address related web tools such as user agents (browsers and assistive technologies), content management systems, authoring tools, and testing tools.

Each guideline in this standard provides information on accessibility practices that address documented user needs of people with disabilities. Guidelines are supported by multiple requirements to determine whether the need has been met. Guidelines are also supported by technology-specific methods to meet each requirement.

Content that conforms to WCAG 2.2 levels A and AA is expected to meet most of the minimum conformance level of this new standard but, since WCAG 3.0 includes additional tests and different scoring mechanics, additional work will be needed to reach full conformance. Since the new standard will use a different conformance model, the Accessibility Guidelines Working Group expects that some organizations may wish to continue using WCAG 2, while others may wish to migrate to the new standard. For those that wish to migrate to the new standard, the Working Group will provide transition support materials, which may use mapping and other approaches to facilitate migration.

Section status levels

As part of the WCAG 3.0 drafting process, each normative section of this document is given a status. This status indicates how far along in development the section is, how ready it is for experimental adoption, and what kind of feedback the Accessibility Guidelines Working Group is looking for.

Guidelines

Summary

The following guidelines are being considered for WCAG 3.0. They are currently a list of topics which we expect to explore more thoroughly in future drafts. The list includes current WCAG 2 guidance and additional requirements. The list will change in future drafts.

Unless otherwise stated, requirements assume the content described is provided both visually and programmatically.

The individuals and organizations that use WCAG vary widely and include web designers and developers, policy makers, purchasing agents, teachers, and students. To meet the varying needs of this audience, several layers of guidance will be provided including guidelines written as outcome statements, requirements that can be tested, assertions, a rich collection of methods, resource links, and code samples.

The following list is an initial set of potential guidelines and requirements that the Working Group will be exploring. The goal is to guide the next phase of work. These are drafts and should not be considered final content of WCAG 3.0.

Ordinarily, exploratory content includes editor's notes listing concerns and questions for each item. Because this Guidelines section is very early in the process of working on WCAG 3.0, this editor's note covers most of the content in this section. Unless otherwise noted, all items in the list are exploratory at this point. It is a list of all possible topics for consideration. Not all items listed will be included in the final version of WCAG 3.0.

The guidelines and requirements listed below came from analysis of user needs that the Working Group has been studying and researching. They have not been refined and do not include essential exceptions or methods. Some requirements may be best addressed by authoring tools or at the platform level. Many requirements need additional work to better define their scope and to ensure they apply correctly to multiple languages, cultures, and writing systems. We will address these questions as we further explore each requirement.

Additional Research

One goal of publishing this list is to identify gaps in current research and request assistance filling those gaps.

Editor's notes indicate the requirements within this list where the Working Group has not found enough research to fully validate the guidance and create methods to support it, or where additional work is needed to evaluate existing research. If you know of existing research or if you are interested in conducting research in this area, please file a GitHub issue or send email to public-agwg-comments@w3.org (comment archive).

Image and media alternatives

Image alternatives

Users have equivalent alternatives for images.

Which foundational requirements apply?

For each image:

  1. Would removing the image impact how people understand the page?
  2. Is the image presented in a way that is available to user agents and assistive technology?
  3. Is an equivalent text alternative available for the image?
Decorative image

Decorative image is programmatically hidden.

Equivalent text alternative

Equivalent text alternative is available for image that conveys content.
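As a sketch of how the two requirements above (hiding decorative images, providing equivalent alternatives) are commonly met in HTML today; the file names and alternative text are illustrative, not WCAG 3.0 methods:

  <!-- Decorative image: empty alt text hides it from assistive technology -->
  <img src="divider.png" alt="">

  <!-- Informative image: an equivalent text alternative conveys its content -->
  <img src="q3-sales.png"
       alt="Bar chart: Q3 sales rose 12% over Q2, led by the EU region.">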

Detectable image

Image is programmatically determinable.

Image role

The role and importance of the image is programmatically indicated.

Image type

The image type (photo, icon, etc.) is indicated.

Editable alternatives

Needs additional research

Auto-generated text descriptions are editable by content creator.

Style guide

Text alternatives follow an organizational style guide.

Media alternatives

Users have equivalent alternatives for media content.

Descriptive transcripts

A transcript is available whenever audio or visual alternatives are used.

Findable media alternatives

Needs additional research

Media that has the desired media alternatives (captions, audio description, and descriptive transcripts) can be found.

Preferred language

Needs additional research

Equivalent audio alternatives are available in the preferred language.

Non-verbal cues

Needs additional research

Media alternatives explain non-verbal cues, such as tone of voice, facial expressions, body gestures, or music with emotional meaning.

Non-text alternatives

Users have alternatives available for non-text, non-image content that conveys context or meaning.

Non-text content

Needs additional research

Equivalent text alternatives are available for non-text, non-image content that conveys context or meaning.

Captions

Where there is audio content in media, there are equivalent synchronized captions.

Captions exist

Text alternatives to the information conveyed by the audio track exist.

Captions are findable

Mechanisms exist to help users find text alternatives to the auditory information conveyed by media.

Captions are controllable

The media player provides a mechanism to turn the captions on and off.
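One way a web page can satisfy this today is the native HTML caption mechanism, which most browser media players expose as an on/off control (file names here are illustrative); the srclang and label attributes also support the language requirement below:

  <video src="lecture.mp4" controls>
    <track kind="captions" src="lecture-captions.vtt"
           srclang="en" label="English">
  </video>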

Captions are in the target user's language

When captions are used as a text alternative for an audio track, they are provided in the target user’s language for the media.

Captions are equivalent to audio content

Captions are equivalent in content to the audio.

Captions are synchronized

Captions are in sync with the visual media.

Captions are consistent

The captions are presented consistently throughout the media, and across several related productions, unless exceptions are warranted. This includes consistent styling and placement of the captions text and consistent methods for identifying speakers, languages, and sounds.

Captions do not obstruct visual information

In visual media, captions are placed on the screen so that they do not obstruct important visual information.

Speakers are identified

The speaker is identified in the captions. If there is only one speaker in the media, the speaker can be identified in the media description or at the beginning of the closed captioning. If there is more than one speaker in the media, then changes in speaker need to be identified throughout.

Languages of speech are identified

When there is more than one language spoken in media, the captions identify the language spoken by each speaker.

Sounds are identified or described

Significant sounds, including sound effects and other non-spoken audio, are identified or described in the captions.
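In WebVTT, the common caption format on the web, speaker changes and significant sounds can be marked as in this illustrative snippet (the speaker name and cue text are invented):

  WEBVTT

  00:00:01.000 --> 00:00:04.000
  <v Maria>We need to leave before the storm hits.

  00:00:04.500 --> 00:00:06.500
  [thunder rumbling in the distance]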

Captions are adaptable

The appearance of captions and the language of captions are adaptable.

Alternative language versions are available

Captions in a different language than that of the media are available so that the user can choose to view captions in their preferred language.

Enhanced features to interact with captions are available

Enhanced features that allow users to interact with captions are available.

Captions are available in 360-degree space in augmented, virtual, and extended realities

In augmented, virtual, and extended reality environments, captions are available in 360-degree space.

Speakers are indicated visually in augmented, extended, and virtual realities

In augmented, virtual, and extended environments, a visual indicator or signal, in addition to audio, is available to direct users toward the source of a sound or to indicate who is the speaker.

Style guide

The captions follow an organization’s style guide.

  • Name of the style guide
  • Version (if any)
  • Date of release
  • Description
  • Examples of how core WCAG guidelines are addressed
Testing with users

The organization conducted tests with users who need captions and fixed issues based on findings.

  • Types of disabilities each user had
  • Number of users (for each type of disability)
  • Date of testing
  • Examples of fixed issues based on the results
Video player selection

The organization uses a video player that supports closed captions in a standard caption format, or an open captions format.

  • Name of the video player
  • Caption format
Contribution by producer

During the video production process, the video producer converts the dialogue, along with other important sounds and music, into a caption file format.

  • Names of the videos
  • File types
  • Number/Name of video producer
Video player controls for cc

The organization has selected a video player whose controls provide at least one method for turning closed captions on and off.

  • Name of the video player
Adaptable video player

The organization uses a video player that allows the user to personalize the appearance and location of closed captions. An individual’s need for modification will vary among people. The organization should allow for adjustment to these styles, including but not limited to: font size, font weight, font style, font color, background color, background transparency, and placement.

  • Name of the video player
  • Customizable styles
AR, VR, or XR video player

The organization uses a video player that supports captions remaining directly in front of the user in a 360-degree augmented, virtual, or extended environment (AR, VR, or XR). In these spaces, the user feels surrounded by content. As the user moves in this space, any caption provided will appear directly in front of the user regardless of where they are looking.

  • Name of the video player
Subtitles in other languages

The organization provides captions in one or more alternative languages for the most common languages in its market. Typically called subtitles when in another language, closed captions in multiple languages assist users in understanding the content and in learning another language.

  • Original language for video
  • Languages for subtitles
Visual indicators in 360 field

The organization provides visual indicators in extended reality environments to indicate where the speaker is or the direction of a sound when audio is heard from outside the current view. As users move in extended reality environments, the position of the audio may stay the same. Users can personalize the visual indicators by selecting from a set of options.

Using human captioners

For live events, the organization has a human captioner providing live captions to the audience using CART.

  • Name of the captioner or service provider
Perfect set of alternatives

As part of the organization’s standard media production procedures, the video producer creates the closed caption files, audio description, and descriptive transcript during the production cycle and then uploads them to their appropriate places during the publishing process.

  • Alternatives provided

Audio descriptions

Where there is visual content in media, there is an equivalent synchronized audio description.

Audio descriptions exist

An audio alternative to the visual information conveyed in visual media exists.

Audio description is findable

Mechanisms exist to help users find audio alternatives to the visual information conveyed in visual media.

Audio description is controllable

The media player provides a mechanism to turn the audio description on and off.
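HTML defines a track kind for description cues, sketched below with invented file names; because native player support for description tracks varies widely, many sites instead publish a separately described version of the video:

  <video src="tour.mp4" controls>
    <track kind="descriptions" src="tour-descriptions.vtt"
           srclang="en" label="English audio description">
  </video>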

Audio description is in the target user's language

When audio description is provided as an alternative for visual information, it is provided in the target user’s language for the media.

Audio description equitably describes important visual information

Information about actions, charts or informative visuals, scene changes, and on-screen text that are important and are not described or spoken in the main soundtrack are included in the audio description.

Audio description is synchronized

The audio description is in sync with the visual content.

Audio description does not overlap other audio

Audio description is provided during existing pauses in dialogue and does not overlap other important audio.

Audio description is adaptable

Mechanisms are available that allow users to control the audio description volume independently from the audio volume of the video and to change the language of the audio description, if multiple languages are provided.

Extended audio description

In cases where the existing pauses in a soundtrack are not long enough, the video pauses to extend the audio track and provides an extended audio description that describes all of the important visual information.

Alternative language versions are available

Audio description in languages other than that of the media is available so that the user can choose to listen to the audio description in their preferred language.

Style guide

The script for the audio description follows an organization’s style guide.

  • Name of the style guide
  • Version (if any)
  • Date of release
  • Description
  • Examples of how core WCAG guidelines are addressed
Testing with users

The organization conducted tests with users who need audio description and fixed issues based on findings.

  • Types of disabilities each user had
  • Number of users (for each type)
  • Date of testing
  • Examples of fixed issues based on the results
Reviewed by content creators

The audio description was reviewed by the person who created the video.

  • Role of the creator
  • Number of creators (for each Role)
  • Date (Period) of review
  • Examples of fixed issues based on the feedback
Using human describers

For live events, the organization has a human describer providing live audio description to the audience using assistive listening devices.

Perfect set of alternatives

As part of the organization’s standard media production procedures, the video producer creates the closed caption files, audio description, and descriptive transcript during the production cycle and then uploads them to their appropriate places during the publishing process.

  • Alternatives provided

Figure captions

Users can view figure captions even when the figure does not have focus.

Persistent captions

Needs additional research

Figure captions persist or can be made to persist even if the focus moves away.
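The native HTML figure/figcaption pairing is one way to keep a caption visible regardless of focus (the content here is invented):

  <figure>
    <img src="pressure-map.png"
         alt="Weather map of Europe showing a low-pressure system over the North Sea.">
    <figcaption>Figure 3: Surface pressure, 12 June 2024.</figcaption>
  </figure>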

Single sense

Users have content that does not rely on a single sense or perception.

Use of hue

Needs additional research

Information conveyed by graphical elements does not rely on hue.

Use of visual depth

Needs additional research

Information conveyed with visual depth is also conveyed programmatically and/or through text.

Use of sound

Information conveyed with sound is also conveyed programmatically and/or through text.

Use of spatial audio

Information that is conveyed with spatial audio is also conveyed programmatically and/or through text.

Text and wording

Text appearance

Users can read visually rendered text.

Maximum text contrast

Needs additional research

The rendered text against its background meets a maximum contrast ratio test for its text appearance.

Minimum text contrast

Needs additional research

The rendered text against its background meets a minimum contrast ratio test for its text appearance.
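The contrast ratio test for WCAG 3.0 is still to be defined (see Glossary). For orientation, WCAG 2.x computes contrast from the relative luminance of the lighter color (L1) and the darker color (L2):

  contrast ratio = (L1 + 0.05) / (L2 + 0.05)

This yields values from 1:1 (identical colors) to 21:1 (black on white); for example, #767676 text on a white background is about 4.5:1.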

Text size

Needs additional research

The rendered text meets a minimum font size and weight.

Text style

The rendered text does not use a decorative or cursive font face.

Text-to-speech

Users can access text content and its meaning with text-to-speech tools.

Text-to-speech supported

Needs additional research

Text content can be converted into speech.

Human language

The human language of the view and content within the view is programmatically available.
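In HTML this is conventionally done with the lang attribute, on the root element and on any inline change of language (the sentence is invented):

  <html lang="en">
    <body>
      <p>She greeted us with a cheerful <span lang="fr">bonjour</span>.</p>
    </body>
  </html>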

Semantic text appearance

Needs additional research

Meaning conveyed by text appearance is programmatically available.

Clear meaning

Users can access explanations of or alternatives to ambiguous text content.

Which foundational requirements apply?

For each item of ambiguous text, such as non-literal text, abbreviations and acronyms, ambiguous numbers, or text missing letters or diacritics:

  1. Is the text presented in a way that is available to user agents, including assistive technology (AT)?
  2. Does the accessibility support set meet Explain ambiguous text or provide an unambiguous alternative?
    • Yes, pass. Stop.
    • No, continue.
  3. Does the author meet Explain ambiguous text or provide an unambiguous alternative?
    • Yes, pass. Stop.
    • No, fail.

Exception

  • If the purpose is to showcase works of art or fiction, such as a poetry journal or fictional stories, this guideline does not apply. However, if the purpose is to educate students about art or fiction, then this guideline applies.
Detectable text

Text is programmatically determinable.

Unambiguous text

Explain ambiguous text or provide an unambiguous alternative.
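A minimal HTML sketch of one common technique, expanding an abbreviation in place:

  <p>The <abbr title="World Wide Web Consortium">W3C</abbr> publishes WCAG.</p>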

Simplified written content

Users are not required to navigate complex words or sentence structures in order to understand content.

Appropriate tone

Needs additional research

The language and tone used is appropriate to the topic or subject matter.

Double negatives

Content does not include double negatives to express a positive unless it is standard usage for that language or dialect.

Sentence voice

Needs additional research

The voice used is easiest to understand in context.

Uncommon words

Needs additional research

Definitions for uncommon or new words are available.

Unnecessary words or phrases

Sentences are concise, without unnecessary filler words and phrases.

Verb tense

Needs additional research

The verb tense used is easiest to understand in context.

Interactive components

Keyboard focus appearance

Users can see which element has keyboard focus.

Which foundational requirements apply?

For each focusable item:

  1. Is the user agent default focus indicator used?
  2. Is the focus indicator defined by the author?
Custom indicator

A custom focus indicator is used with sufficient size, change of contrast, adjacent contrast, distinct style and adjacency.
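A sketch of a custom indicator in CSS; the specific color and widths are illustrative, not defined thresholds:

  <style>
    /* Thick, offset outline so the indicator is large, high-contrast,
       and visually separated from adjacent content */
    a:focus-visible,
    button:focus-visible {
      outline: 3px solid #1a1aff;
      outline-offset: 2px;
    }
  </style>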

User agent default indicator

Focusable item uses the user agent default indicator.

Supplementary indicators

@@

Style guide

Focus indicators follow an organizational style guide.

Pointer focus appearance

Users can see the location of the pointer focus.

Pointer visible

There is a visible indicator of pointer focus.

Navigating content

Users can determine where they are and move through content (including interactive elements) in a systematic and meaningful way regardless of input or movement method.

Focus in viewport

The focus does not move to a position outside the current viewport, unless a mechanism is available to return to the previous focus point.

Focus retention

A user can focus on a content “area,” such as a modal or pop-up, then resume their view of all content using a limited number of steps.

Keyboard focus order

The keyboard focus moves sequentially through content in an order and way that preserves meaning and operability.

Restore focus

When the focus is moved by the content into a temporary change of view (e.g. a modal), the focus is restored to its previous location when returned from the temporary change of view.
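A minimal script sketch for a custom modal, using hypothetical openModal/closeModal helpers (native <dialog> elements handle much of this automatically in current browsers):

  <script>
    let lastFocused = null;

    function openModal(modal) {
      lastFocused = document.activeElement;  // remember the trigger
      modal.hidden = false;
      modal.querySelector("button, [href], input")?.focus();
    }

    function closeModal(modal) {
      modal.hidden = true;
      lastFocused?.focus();                  // restore the previous location
    }
  </script>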

Relevant focus

The focus order does not include repetitive, hidden, or static elements.

Expected behavior

Users have interactive components that behave as expected.

Consistent interaction

Interactive components with the same functionality behave consistently.

Consistent labels

Interactive components with the same functionality have consistent labels.

Consistent visual design

Interactive components that have similar function and behavior have a consistent visual design.

Control location

Needs additional research

Interactive components are visually and programmatically located in conventional locations.

Conventions

Needs additional research

Interactive components follow established conventions.

Familiar component

Conventional interactive components are used.

Reliable positioning

Interactive components retain their position unless a user changes the viewport or moves the component.

Control information

Users have information about interactive components that is identifiable and usable visually and using assistive technology.

Control contrast

Needs additional research

Visual information required to identify user interface components and states meets a minimum contrast ratio test, except for inactive components or where the appearance of the component is determined by the user agent and not modified by the author.

Control importance

Needs additional research

The importance of interactive components is indicated.

Control labels

Interactive components have visible labels that identify the purpose of the component.

Control updates

Changes to interactive components’ names, roles, values or states are visually and programmatically indicated.

Distinguishable controls

Interactive components are visually distinguishable without interaction from static content and include visual cues on how to use them.

Field constraints

Field constraints and conditions (required line length, date format, password format, etc.) are available.

Input labels

Inputs have visible labels that identify the purpose of the input.

Label in name

The programmatic name includes the visual label.

Name, role, value, state

Accurate names, roles, values, and states are available for interactive components.
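An illustrative HTML form fragment touching several of the requirements above: a visible, programmatically associated label, stated field constraints, an accessible name that includes the visible label, and programmatically exposed state (all names and hint text are invented):

  <label for="email">Email address (required)</label>
  <input id="email" name="email" type="email" required
         autocomplete="email" aria-describedby="email-hint">
  <p id="email-hint">Format: name@example.com</p>

  <!-- "Label in name": the accessible name contains the visible text "Search" -->
  <button aria-label="Search this site">Search</button>

  <!-- State change exposed programmatically as well as visually -->
  <button aria-pressed="false">Mute</button>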

Input / operation

Input operation

Users can use different input techniques and combinations and switch between them.

Concurrent inputs

Any input modality available on a platform can be used concurrently.

Hover information

Users can dismiss additional content (triggered by hover) without moving the pointer, unless the additional content communicates an input error or does not obscure or replace other content.

Input control

Interactive components are available to all navigation and input methods.

Content changes

Users are aware of changes to content or context.

Notify about change

Users are notified of changes and updates to content, regardless of the update speed.

Notify on change

Notification is provided when previously viewed content changes.

Inform before activation

Interactive components that can alter the order of content convey their purpose prior to activation, and convey their impact on content order when activated.

Reverse change of context

Components that trigger a ‘change of context’ are indicated, or the change of context can be reversed.

Target size

Users are not required to accurately position a pointer in order to view or operate content.

Target size minimum

The combined target size and spacing to adjacent targets is at least 24x24 pixels.

Target size optimum

The combined target size and spacing to adjacent targets is at least 48x48 pixels.
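In CSS, a site could enforce these floors along the lines of the sketch below (the class names are hypothetical):

  <style>
    .toolbar button {
      min-width: 24px;   /* foundational minimum */
      min-height: 24px;
    }
    .toolbar--touch button {
      min-width: 48px;   /* larger, optimum target */
      min-height: 48px;
    }
  </style>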

Keyboard operation

Users can navigate and operate content using only the keyboard.

Comparable keyboard effort

Needs additional research

The number of input commands required to complete a task using the keyboard is similar to the number of input commands when using other input modalities.

Conflicting keyboard commands

Authored keyboard commands do not conflict with platform commands or they can be remapped.

Consistent keyboard interaction

Keyboard interface interactions are consistent.

Keyboard mode

If the keyboard is non-hardware (such as a virtual keyboard), the keyboard input mode is indicated.

Keyboard only

All functionality must be accessible through the keyboard, except when a task requires input based on the user’s specific input action.

No keyboard trap

If keyboard focus can be moved to an interactive component, then the keyboard focus can be moved away from that component, or the component can be dismissed, with focus returning to the previous point.

Non-standard commands

The user is informed of non-standard authored keyboard commands.

Gestures

Users are not required to use gestures or dragging to view or operate content.

Change focus with pointer device

Selecting an interactive component with a pointer sets the focus to that element.

Complex pointer inputs

Every function that can be operated by a pointer, can be operated by a single pointer input or a sequence of single pointer inputs without requiring certain timing.

Pointer-agnostic

Functionality which supports pointers is available to any pointer device supported by the platform.

Pointer cancellation

The method of pointer cancellation is consistent.

Specific pressure

Needs additional research

Where specific pressures are used, they can be adjusted and/or disabled without loss of function.

Speed insensitive

Needs additional research

Where specific speeds are used, they can be adjusted and/or disabled without loss of function.

Motion input

Users are not required to move their bodies or devices to operate functionality.

Use without body movement

All functionality that requires full or gross body movement may also be accomplished with a standard input device.

Use without device movement

All functionality can be completed without reorienting or repositioning hardware devices.

Error handling

Correct mistakes

Users know about and can correct mistakes.

Error association

Error notifications are programmatically associated with the error source so that users can access the error information while focused on the source of the error.

Error identification

Errors are visually identifiable without relying on only text, only color, or only symbols.

Error notification

Errors that can be automatically detected are identified and described to the user.
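One HTML sketch that addresses association, identification, and notification together (the field and message text are invented):

  <label for="dob">Date of birth</label>
  <input id="dob" aria-invalid="true" aria-describedby="dob-error">

  <!-- Tied to the field via aria-describedby, announced when inserted
       (role="alert"), and identified by a symbol plus text, not color alone -->
  <p id="dob-error" role="alert">
    ⚠ Error: enter the date as YYYY-MM-DD, for example 1990-04-23.
  </p>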

Persistent errors

Error notifications persist until the user dismisses them or the error is resolved.

Visible errors

Needs additional research

Error notifications are visually collocated with the source of the error within the viewport, or provide a link to the source of the error which, when activated, moves the viewport to the error.

Animation and movement

Avoid physical harm

Users do not experience physical harm from content.

Audio shifting

Needs additional research

Audio shifting designed to create a perception of motion is avoided, or can be paused or prevented.

Flashing

Flashing or strobing beyond thresholds defined by safety standards is avoided, or can be paused or prevented.

Motion

Needs additional research

Visual motion and pseudo-motion that lasts longer than 5 seconds is avoided, or can be paused or prevented.

Motion from interaction

Needs additional research

Visual motion and pseudo-motion triggered by interaction is avoided or can be prevented, unless the animation is essential to the functionality or the information being conveyed.
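One widely supported way to let motion be prevented is to honor the operating system's reduce-motion setting, sketched here in CSS:

  <style>
    @media (prefers-reduced-motion: reduce) {
      * {
        animation: none !important;
        transition: none !important;
        scroll-behavior: auto !important;
      }
    }
  </style>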

Layout

Relationships

Users can determine relationships between content both visually and using assistive technologies.

Clear relationships

The relationships between parts of the content are clearly indicated.

Clear starting point

The starting point or home is visually and programmatically labeled.

Distinguishable relationships

Needs additional research

Relationships that convey meaning between pieces of content are programmatically determinable. Note: Examples of relationships include items positioned next to each other, arranged in a hierarchy, or visually grouped.

Distinguishable sections

Needs additional research

Sections are visually and programmatically distinguishable.

Recognizable layouts

Users have consistent and recognizable layouts available.

Consistent order

The relative order of content and interactions remain consistent throughout a workflow. Note: Relative order means that content can be added or removed, but repeated items are in the same order relative to each other.

Familiar layout

Conventional layouts are available.

Information about options

Information required to understand options is visually and programmatically associated with the options.

Related information

Related information is grouped together within a visual and programmatic structure.

Orientation

Users can determine their location in content both visually and using assistive technologies.

Current location

Needs additional research

The current location within the view, multi-step process, and product is visually and programmatically indicated.

Multi-step process

Context is provided to orient the user in a site or multi-step process.

Contextual information

Contextual information is provided to help the user orient within the product.

Structure

Users can understand and navigate through the content using structure.

Section labels

Major sections of content contain well-structured, understandable visual and programmatic headings.
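A sketch of visual headings doubled by programmatic structure (the section names are invented):

  <main>
    <h1>Checkout</h1>
    <section aria-labelledby="shipping-h">
      <h2 id="shipping-h">Shipping address</h2>
      <!-- form fields -->
    </section>
    <section aria-labelledby="payment-h">
      <h2 id="payment-h">Payment</h2>
      <!-- form fields -->
    </section>
  </main>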

Section length

Needs additional research

Content is organized into short sections of related content.

Section purpose

The purpose of each section of the content is clearly indicated.

Single idea

The number of concepts within a segment of text is minimized.

Topic sentence

For text intended to inform the user, each paragraph of text begins with a topic sentence stating the aim or purpose.

White spacing

Whitespace separates chunks of content.

Title

Content has a title or high-level description.

Lists

Three or more items of related data are presented as bulleted or numbered lists.

Numbered steps

Steps in a multi-step process are numbered.

Consistency across views

Consistency

Users have consistent and alternative methods for navigation.

Consistent navigation

Navigation elements remain consistent across views within the product.

Multiple ways

The product provides at least two ways of navigating and finding information (search, scan, site map, menu structure, breadcrumbs, contextual links, etc.).

Persistent navigation

Navigation features are available regardless of screen size and magnification (responsive design).

Process and task completion

Avoid cognitive tasks

Users can complete tasks without needing to memorize information or complete advanced cognitive tasks.

Allow automated entry

Automated input from user agents, third-party tools, or copy-and-paste is not prevented.
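In practice this means not blocking paste and annotating fields so user agents and password managers can fill them, as in this sketch:

  <!-- autocomplete tokens invite automated entry; no script suppresses paste -->
  <input name="username" autocomplete="username">
  <input name="password" type="password" autocomplete="current-password">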

No cognitive tests

Processes, including authentication, can be completed without puzzles, calculations, or other cognitive tests (essential exceptions would apply).

No memorization

Needs additional research

Processes can be completed without memorizing and recalling information from previous stages of the process.

Adequate time

Users have enough time to read and use content.

Adjust timing at start

For each process with a time limit, a mechanism exists to disable or extend the limit before the time limit starts.

Adjust timing at timeout

For each process with a time limit, a mechanism exists to disable or extend the time limit at timeout.

Disable timeout

For each process with a time limit, a mechanism exists to disable the limit.

Unnecessary steps

Users can complete tasks without unnecessary steps.

Optional information

Processes can be completed without being forced to read or understand unnecessary content.

Optional input

Processes can be completed without entering unnecessary information.

Avoid deception

Users do not encounter deception when completing tasks, unless essential to the task.

Deceptive controls

Needs additional research

Interactive components are not deceptively designed.

Exploitive behaviors

Needs additional research

Process completion does not include exploitive behaviors.

Misinformation

Needs additional research

Processes can be completed without navigating misinformation or redirections.

Preselections

Preselected options are visible by default during process completion without additional interactions.

Redirection

Needs additional research

A mechanism is available to prevent fraudulent redirection or alert users they are exiting the site.

Retain information

Users do not have to reenter information or redo work.

Go back in process

In a multi-step process, the interface supports stepping backwards in a process and returning to the current point without data loss.

Redundant entry

Information previously entered by or provided to the user that is required to be entered again in the same process is either auto-populated, or available for the user to select.

Save progress

Data entry and other task completion processes allow saving and resuming from the current step in the task.

Complete tasks

Users understand how to complete tasks.

Action required

In a process, the interface indicates when user input or action is required to proceed to the next step.

Inform at start of process

Information needed to complete a multi-step process is provided at the start of the process, including:

  • number of steps it might take (if known in advance),
  • details of any resources needed to perform the task, and
  • overview of the process and next step.
Steps and instructions

The steps and instructions needed to complete a multi-step process are available.

Policy and protection

Content source

Users can determine when content is provided by a third party.

Citation

Needs additional research

The author or source of the primary content is visually and programmatically indicated.

Indicate third-party content

Needs additional research

Third-party content (AI, Advertising, etc.) is visually and programmatically indicated.

Obscuring primary content

Needs additional research

Advertising and other third-party content that obscures the primary content can be moved or removed without interacting with the advertising or third-party content.

Security and privacy

Users’ safety, security, or privacy is not decreased by accessibility measures.

Clear agreement

Needs additional research

The interface indicates when a user is entering an agreement or submitting data.

Disability information privacy

Needs additional research

Disability information is not disclosed to or used by third parties and algorithms (including AI).

Sensitive information

Needs additional research

Prompts to hide and remove sensitive information from observers are available.

Risk statements

Needs additional research

Clear explanations of the risks and consequences of choices, including use, are stated.

Algorithms

Users are not disadvantaged by algorithms.

Algorithm bias

Needs additional research

Algorithms (including AI) used are not biased against people with disabilities.

Social media algorithm

Needs additional research

A mechanism is available to understand and control social media algorithms.

Help and feedback

Help available

Users have help available.

Consistent help

Needs additional research

Help is labeled consistently and available in a consistent visual and programmatic location.

Contextual help

Contextual help is available.

Conversational support

Conversational support allowing both text and verbal modes is available.

Data visualizations

Needs additional research

Help is available to understand and use data visualizations.

New interfaces

Needs additional research

When interfaces dramatically change (due to redesign), a mechanism to learn the new interface or revert to the older design is available.

Personalizable help

Needs additional research

Help is adaptable and personalizable.

Sensory characteristics

Instructions and help do not rely on sensory characteristics.

Support available

Needs additional research

Accessible support is available during data entry, task completion and search.

Supplemental content

Users have supplemental content available.

Number supplements

Text or visual alternatives are available for numerical concepts.

Text supplements

Needs additional research

Visual illustrations, pictures, and images are available to help explain complex ideas, events, and processes.

Feedback

Users can provide feedback to authors.

Feedback mechanism

A mechanism is available to provide feedback to authors.

User control

Control text

Users can control text presentation.

Adjust color

Text and background colors can be customized.

Adjust background

Patterns, designs, or images placed behind text are avoided or can be removed by the user.

Font size meaning

When font size conveys visual meaning (such as headings), the text maintains its meaning and purpose when text is resized.

Text customization

Users can change the text style (like font and size) and the layout (such as spacing and single column) to fit their needs.

Adjustable viewport

Users can transform size and orientation of content presentation to make it viewable and usable.

Orientation

Content orientation allows the user to read the language presented without changing head or body position.

Reflow

Content can be viewed in multiple viewport sizes, orientations, and zoom levels without loss of content, functionality, or meaningful relationships, and with scrolling occurring in only one direction.

Transform content

Users can transform content to make it understandable.

Alternative presentation

Needs additional research

Complex information or instructions for complex processes are available in multiple presentation formats.

Content markup

Role and priority of content is programmatically determinable.

Summary

Access to a plain-language summary, abstract, or executive summaries is available.

Transform content

Needs additional research

Content can be transformed to make its purpose clearer.

Media control

Users can control media and media alternatives.

Adjust captions

The position and formatting of captions can be changed.
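Where captions are delivered as WebVTT, authors can expose presentation presets by styling cues, as in this illustrative CSS (the values are arbitrary):

  <style>
    video::cue {
      font-size: 120%;
      color: #ffffff;
      background-color: rgba(0, 0, 0, 0.85);
    }
  </style>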

Audio control

Audio can be turned off, while still playing the video, and without affecting the system sound.

Interactive audio alternative

Needs additional research

Alternatives for audio include the ability to search and look up terms.

Media alternative control

Captions and audio descriptions can be turned on and off.

Media chapters

Needs additional research

Media can be navigated by chapters.

Control interruptions

Users can control interruptions.

Control notifications

The timing and positioning of notifications and other interruptions can be changed, suppressed or saved, except interruptions involving an emergency.

Control possible harm

Users can control potential sources of harm.

Disturbing content

Needs additional research

Warnings are available about content that may be emotionally disturbing, and the disturbing content can be hidden.

Haptic stimulation

Needs additional research

Haptic feedback can be reduced or turned off.

Triggers

Needs additional research

Warnings are available about triggering content, and the warnings and triggering content can be hidden.

Verbosity

Needs additional research

Overwhelming wordiness can be reduced or turned off.

Visual stimulation

Needs additional research

Visual stimulation from combinations of density, color, movement, etc. can be reduced or turned off.

User agent support

Users can control content settings from their user agents, including assistive technology.

Assistive technology control

Content can be controlled using assistive and adaptive technology.

Printing

Needs additional research

Printing respects user’s content presentation preferences.

User settings

User settings are honored.

Virtual cursor

Assistive technologies can access content and interactions when using mechanisms that convey alternative points of regard or focus (i.e. virtual cursor).

Conformance

Summary

You might want to make a claim that your content or product meets the WCAG 3.0 guidelines. If it does meet the guidelines, we call this “conformance”.

If you want to make a formal conformance claim, you must use the process described in this document. Conformance claims are not required and your content can conform to WCAG 3.0, even if you don’t want to make a claim.

There are two types of content in this document:

We are experimenting with different conformance approaches for WCAG 3.0. Once we have developed enough guidelines, we will test how well each works.

WCAG 3.0 will use a different conformance model than WCAG 2.2 in order to meet its requirements. Developing and vetting the conformance model is a large portion of the work AG needs to complete over the next few years.

AG is exploring a model based on Foundational Requirements, Supplemental Requirements, and Assertions.

The most basic level of conformance will require meeting all of the Foundational Requirements. This set will be somewhat comparable to WCAG 2.2 Level AA.

Higher levels of conformance will be defined and met using Supplemental Requirements and Assertions. AG will be exploring whether meeting the higher levels would work best based on points, percentages, or predefined sets of requirements (modules).

AG continues to explore other conformance concepts, including conformance levels, issue severity, adjectival ratings, and pre-assessment checks.

See Explainer for W3C Accessibility Guidelines (WCAG) 3.0 for more information.

Only accessibility-supported ways of using technologies

The concept of "accessibility-supported" is to account for the variety of user agents and scenarios. How does an author know that a particular technique for meeting a guideline will work in practice with user agents that are used by real people?

The intent is for the responsibility of testing with user agents to vary depending on the level of conformance.

At the foundational level of conformance, authors can assume that the methods and techniques provided by WCAG 3.0 work. At higher levels of conformance, the author may need to test that a technique works, or check that available user agents meet the requirement, or a combination of both.

This approach means the Working Group will ensure that methods and techniques included do have reasonably wide and international support from user agents, and there are sufficient techniques to meet each requirement.

The intent is that WCAG 3.0 will use a content management system to support tagging of methods/techniques with support information. There should also be a process where interested parties can provide information.

An "accessibility support set" is used at higher levels of conformance to define which user agents and assistive technologies you test with. It would be included in a conformance claim, and enables authors to use techniques that are not provided with WCAG 3.0.

An exception for long-present bugs in assistive technology is still under discussion.

Defining conformance scope

When evaluating the accessibility of content, WCAG 3.0 requires the guidelines apply to a specific scope. While the scope can be all content within a digital product, it is usually one or more subsets of the whole. Reasons for this include:

WCAG 3.0 therefore defines two ways to scope content: views and processes. Evaluation is done on one or more complete views or processes, and conformance is determined on the basis of one or more complete views or processes.

Conformance is defined only for processes and views. However, a conformance claim may be made to cover one process and view, a series of processes and views, or multiple related processes and views. All unique steps in a process MUST be represented in the set of views. Views outside of the process MAY also be included in the scope.

We recognize that representative sampling is an important strategy that large and complex sites use to assess accessibility. While it is not addressed within this document at this time, our intent is to later address it within this document or in a separate document before the guidelines reach the Candidate Recommendation stage. We welcome your suggestions and feedback about the best way to incorporate representative sampling in WCAG 3.0.

Glossary

Many of the terms defined here have common meanings. When terms appear with a link to the definition, the meaning is as formally defined here. When terms appear without a link to the definition, their meaning is not explicitly related to the formal definition here. These definitions are in progress and may evolve as the document evolves.

This glossary includes terms used by content that has reached a maturity level of Developing or higher. The definitions themselves include a maturity level and may mature at a different pace than the content that refers to them. The AGWG will work with other task forces and groups to harmonize terminology across documents as much as is possible.

Accessibility support set

The group of user agents and assistive technologies you test with.

The AGWG is considering defining a default set of user agents and assistive technologies that they use when validating guidelines.

Accessibility support sets may vary based on language, region, or situation.

If you are not using the default accessibility support set, the conformance report should indicate what set is being used.

Accessibility supported

Supported in at least two major free browsers on every operating system, and/or available in assistive technologies cumulatively used by 80% of the AT users on each operating system, for each type of AT used.

Ambiguous numbers

To be defined.

Assertion

A formal claim of fact, attributed to a person or organization. An attributable and documented statement of fact regarding procedures practiced in the development and maintenance of the content or product to improve accessibility.

Assistive technology

Hardware and/or software that acts as a user agent, or along with a mainstream user agent, to provide functionality to meet the requirements of users with disabilities that go beyond those offered by mainstream user agents.

Functionality provided by assistive technology includes alternative presentations (e.g., as synthesized speech or magnified content), alternative input methods (e.g., voice), additional navigation or orientation mechanisms, and content transformations (e.g., to make tables more accessible).

Assistive technologies often communicate data and messages with mainstream user agents by using and monitoring APIs.

The distinction between mainstream user agents and assistive technologies is not absolute. Many mainstream user agents provide some features to assist individuals with disabilities. The basic difference is that mainstream user agents target broad and diverse audiences that usually include people with and without disabilities. Assistive technologies target narrowly defined populations of users with specific disabilities. The assistance provided by an assistive technology is more specific and appropriate to the needs of its target users. The mainstream user agent may provide important functionality to assistive technologies like retrieving web content from program objects or parsing markup into identifiable bundles.

Audio describer

A person who provides verbal descriptions of visual elements in media, cultural spaces, and live performances to make content and experiences more accessible to individuals who are blind or have low vision. They describe actions, settings, costumes, and facial expressions, inserting these descriptions into pauses within the dialogue or audio.

Audio description

Narration added to the soundtrack to describe important visual details that cannot be understood from the main soundtrack alone. For audiovisual media, audio description provides information about actions, characters, scene changes, on-screen text, and other visual content.

Audio description is also sometimes called “video description”, “described video”, “visual description”, or “descriptive narration”.

In standard audio description, narration is added during existing pauses in dialogue. See also extended audio description.

If all important visual information is already provided in the main audio track, no additional audio description track is necessary.

Automated evaluation

Evaluation conducted using software tools, typically evaluating code-level features and applying heuristics for other tests.

Automated testing is contrasted with other types of testing that involve human judgement or experience. Semi-automated evaluation allows machines to guide humans to areas that need inspection. The emerging field of testing conducted via machine learning is not included in this definition.

Blocks of text

Continuous text with multiple sentences that is not separated by structural elements such as table cells or regions.

CART

Communication Access Realtime Translation, or CART, is a type of live captioning provided by trained captioners, using specialized software along with phonetic keyboards or stenography methods, to produce real-time visual captioning for meeting and event participants. CART is available primarily in English, with some providers providing French, Spanish, and other languages on demand. It is not available for Japanese and some other languages.

CART is sometimes referred to as “real-time captioning” or “open captions”.

Captions

Time-synchronized visual and/or text alternative that communicates the audio portion of a work of multimedia (for example, a movie or podcast recording). Captions are similar to dialogue-only subtitles, except captions convey not only the content of spoken dialogue, but also equivalents for non-dialogue audio information needed to understand the program content, including sound effects, music, laughter, speaker identification and location.

In some countries, captions are called subtitles.

Change of viewport within a page/view

Change of content/context that causes the user's keyboard navigation point to change, where they have the option to move back out of the new content/context.

“Within a page/view” is part of this term because, if the new viewport/content/context is within the same page/view, going back would be under the control of the author. If moving to another page/view, perhaps on a different site, the current author would not have control, and this would be a requirement on the user agent.

This is different from “change of context” in WCAG 2.x: major changes that, if made without user awareness, can disorient users who are not able to view the entire page simultaneously.

Closed captions

Captions that are decoded into chunks known as “caption frames” that are synchronized with the media. Closed captions can be turned on and off with some players, and can often be read using assistive technology.

Closed system

Information technology that prevents users from easily attaching or installing assistive technologies. For example, kiosks, calculators, vending machines, etc.

Common keyboard navigation technique

Keyboard navigation technique that is the same across most or all applications and platforms and can therefore be relied upon by users who need to navigate by keyboard alone.

A sufficient listing of common keyboard navigation techniques for use by authors can be found in the WCAG common keyboard navigation techniques list.

Complex pointer input

Any pointer input other than a single pointer input.

Component

A grouping of interactive elements for a distinct function.

Conformance

Satisfying all the requirements of the guidelines. Conformance is an important part of following the guidelines even when not making a formal Conformance Claim.

See the Conformance section for more information.

Content

Information and sensory experience to be communicated to the user by an interface, including code or markup that defines the content’s structure, presentation, and interactions.

Contrast ratio test

To be defined.

Decorative image

To be defined.

Default direction of text

To be defined.

Default orientation

A single orientation that a platform uses to view content by default.

Deprecate

To declare something outdated and in the process of being phased out, usually in favor of a specified replacement.

Deprecated documents are no longer recommended for use and may cease to exist in the future.

Diverse set of users

To be defined.

Down event

A platform event that occurs when the trigger stimulus of a pointer is depressed.

The down-event may have different names on different platforms, such as “touchstart” or “mousedown”.
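
As an illustrative TypeScript sketch, the Pointer Events API's "pointerdown" event is one way to observe the down-event uniformly across input types, so a handler need not listen for "mousedown" and "touchstart" separately; the element id used here is hypothetical.

  // Log the down-event for mouse, touch, and pen pointers alike.
  const saveButton = document.querySelector<HTMLButtonElement>("#save"); // hypothetical id
  saveButton?.addEventListener("pointerdown", (event) => {
    console.log(`down-event from a ${event.pointerType} pointer`);
  });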

Element

To be defined.

Enhanced audio description

An audio description that is added to audiovisual media by pausing the video to allow additional time for the description.

This technique is only used when the sense of the video would be lost without the additional audio description and the pauses between dialogue or narration are too short.

Enhanced captions

Captions that provide an experience beyond the display of a text alternative or a visual equivalent of the text and sounds in the audio. For example, some media players offer the following options for enhancing captions:

  • Adding stylized text, including color and size differences, to emphasize information
  • Adding graphics, such as labeled illustrations or concept maps
  • Adding interactive elements, such as hyperlinks to glossary terms
  • Adding a second work of media to accompany the primary media
  • Customizing media placement
  • Offering alternative playback options for captions

Enhanced captioning is also called kinetic, embodied, integral, dynamic, and animated captioning.

Essential exception

An exception that applies when there is no way to carry out the function without doing it this way or without fundamentally changing the functionality.

Evaluation

The process of examining content for conformance to these guidelines.

Different approaches to evaluation include automated evaluation, semi-automated evaluation, human evaluation, and user testing.

Figure captions

A title, brief explanation, or comment that accompanies a work of visual media and is always visible on the page.

Functional need

A statement that describes a specific gap in one’s ability, or a specific mismatch between ability and the designed environment or context.

Gesture

A motion made by the body or a body part used to communicate to technology.

Guideline

High-level, plain-language outcome statements used to organize requirements.

Guidelines provide high-level, plain-language outcome statements for managers, policy makers, individuals who are new to accessibility, and other individuals who need to understand the concepts but not dive into the technical details. They provide an easy-to-understand way of organizing and presenting the requirements so that non-experts can learn about and understand the concepts. Each guideline includes a unique, descriptive name along with a high-level plain-language summary. Guidelines address functional needs on specific topics, such as contrast, forms, readability, and more.

Guidelines group related requirements and are technology-independent.

High cognitive load

To be defined.

Human evaluation

Evaluation conducted by a human, typically to apply human judgement to tests that cannot be fully automatically evaluated.

Human evaluation is contrasted with automated evaluation, which is done entirely by machine, though it includes semi-automated evaluation, which allows machines to guide humans to areas that need inspection. Human evaluation involves inspection of content features, in contrast with user testing, which directly tests the experience of users with content.

Image

To be defined.

Image role

To be defined.

Image type

To be defined.

Informative

Content provided for information purposes and not required for conformance. Also referred to as non-normative.

Interactive component

To be defined.

Interactive element

Element that a user can act on.

OR

A part of the interface that responds to user input and can have a distinct programmatic name.

In contrast to non-interactive elements, such as headings or paragraphs.

Items

The smallest testable unit for testing scope. They could be interactive components such as a drop-down menu, a link, or a media player. They could also be units of content such as a phrase, a paragraph, a label or error message, an icon, or an image.

Keyboard focus

To be defined.

Mechanism

A process or technique for achieving a result.

The mechanism may be explicitly provided in the content, or may be relied upon to be provided by either the platform or by user agents, including assistive technologies.

The mechanism needs to meet all success criteria for the conformance level claimed.

Method

Detailed information, either technology-specific or technology-agnostic, on ways to meet the requirement as well as tests and scoring information.

Navigated sequentially

Navigated in the order defined for advancing focus (from one element to the next) using a keyboard interface.
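
For illustration only, the TypeScript sketch below shows how document order defines the default sequential navigation order in HTML, and how tabindex="-1" removes an element from that order while leaving it focusable by script; the toolbar content is hypothetical.

  // Buttons are reached with the Tab key in document order.
  const toolbar = document.createElement("div");
  toolbar.innerHTML = `
    <button>Bold</button>   <!-- reached first -->
    <button>Italic</button> <!-- reached second -->
    <button tabindex="-1">Skipped when navigating sequentially</button>
  `;
  document.body.append(toolbar);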

Non-interactive element

Element that a user perceives but cannot act on.

OR

A part of the interface that does not respond to user input and does not include sub-parts.

In contrast to an interactive element.

Non-literal text

Non-literal text uses words or phrases in a way that goes beyond their standard or dictionary meaning to express deeper, more complex ideas. This is also called figurative language. To understand it, users have to interpret the implied meaning behind the words, rather than just their literal or direct meaning.

Examples:

  • allusions
  • hyperbole
  • idioms
  • irony
  • jokes
  • litotes
  • metaphors
  • metonymies
  • onomatopoeias
  • oxymorons
  • personification
  • puns
  • sarcasm
  • similes

More detailed examples are available in the Methods section.

Non-web software

Software that does not qualify as web content.

Normative

Content whose instructions are required for conformance.

Open captions

Captions that are embedded in the video as images of text. Open captions are also known as burned-in, baked-on, or hard-coded captions. Open captions cannot be turned off and cannot be read using assistive technology.

Path-based gesture

Gesture that depends on the path of the pointer input and not just its endpoints.

Path-based gestures include both time-dependent and non-time-dependent path-based gestures.
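
The hypothetical TypeScript sketch below recognizes a horizontal swipe only when the pointer stays within a narrow vertical band along the way, showing how the path, and not just the endpoints, determines the outcome; the element id and thresholds are illustrative assumptions.

  const carousel = document.querySelector<HTMLElement>("#carousel"); // hypothetical id
  let start: { x: number; y: number } | null = null;
  let strayed = false; // set if the pointer leaves the allowed path

  carousel?.addEventListener("pointerdown", (e) => {
    start = { x: e.clientX, y: e.clientY };
    strayed = false;
  });
  carousel?.addEventListener("pointermove", (e) => {
    if (start && Math.abs(e.clientY - start.y) > 30) strayed = true;
  });
  carousel?.addEventListener("pointerup", (e) => {
    if (start && !strayed && e.clientX - start.x > 100) {
      console.log("swipe right recognized"); // outcome depends on the path taken
    }
    start = null;
  });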

Platform

Software, or a collection of layers of software, that lies below the subject software, provides services to it, and allows the subject software to be isolated from the hardware, drivers, and other software below.

Platform software both makes it easier for subject software to run on different hardware, and provides the subject software with many services (e.g. functions, utilities, libraries) that make the subject software easier to write, keep updated, and work more uniformly with other subject software.

A particular software component might play the role of a platform in some situations and a client in others. For example, a browser is a platform for the content of the page, but it also relies on the operating system below it.

The platform is the context in which the product exists.

Point of regard

The position in rendered content that the user is presumed to be viewing. The dimensions of the point of regard can vary.

For example, it can be a two-dimensional area (e.g. content rendered through a two-dimensional graphical viewport), or a point (e.g. a moment during an audio rendering or a cursor position in a graphical rendering), or a range of text (e.g. focused text).

The point of regard is almost always within the viewport, but it can exceed the spatial or temporal dimensions of the viewport. See rendered content for more information about viewport dimensions.

The point of regard can also refer to a particular moment in time for content that changes over time. For example, an audio-only presentation.

User agents can determine the point of regard in a number of ways, including based on viewport position in content, keyboard focus, and selection.

Pointer

To be defined.

Private and sensitive information

Information that is private or sensitive, including, but not limited to:

  • Racial or ethnic origin
  • Personally identifiable information
  • Biometric information
  • Medical and health information
  • Gender identification
  • Financial information

Process

A sequence of steps that need to be completed to accomplish an activity or task from beginning to end.

Product

Testing scope that is a combination of all items, views, and task flows that make up the web site, set of web pages, web app, etc.

The context for the product would be the platform.

Programmatically determinable

The meaning of the content and all its important attributes can be determined by software functionality that is accessibility supported.
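
As an illustrative sketch (the form fields are hypothetical), the TypeScript snippet below contrasts markup whose meaning is only visual with markup whose meaning software can determine through standard HTML semantics.

  // Not programmatically determinable: the red asterisk's meaning is visual only.
  const visualOnly = `<span style="color: red">*</span> Name <input>`;

  // Programmatically determinable: the label and required state are exposed
  // in the markup, so assistive technologies can report them.
  const determinable = `<label for="name">Name (required)</label>
  <input id="name" required>`;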

Purely decorative

Content that, if removed, does not affect the meaning or functionality of the page.

Relied upon

The content would not conform if that technology is turned off or is not supported.

Requirement

Result of practices that reduce or eliminate barriers that people with disabilities experience.

Section

A self-contained portion of content that deals with one or more related topics or thoughts.

A section may consist of one or more paragraphs and include graphics, tables, lists and sub-sections.

Semi-automated evaluation

Evaluation conducted using machines to guide humans to areas that need inspection.

Semi-automated evaluation involves components of automated evaluation and human evaluation.

Simple pointer input

Input event that involves only a single “click” event or a “button down” and “button up” pair of events with no movement between.

Examples of input that is not a simple pointer input include double clicks, dragging motions, gestures, any use of multipoint input, and the simultaneous use of a mouse and keyboard.

Single pointer

An input modality that only targets a single point on the page/screen at a time – such as a mouse, single finger on a touch screen, or stylus.

Single pointer interactions include clicks, double clicks, taps, dragging motions, and single-finger swipe gestures. In contrast, multipoint interactions involve the use of two or more pointers at the same time, such as two-finger interactions on a touchscreen, or the simultaneous use of a mouse and stylus.

Single pointer input

An input modality that only targets a single point on the view at a time – such as a mouse, single finger on a touch screen, or stylus.

Single pointer input is in contrast to multipoint input such as two, three or more fingers or pointers touching the surface, or gesturing in the air, at the same time.

Activation is usually by click or tap but can also be by programmatic simulation of a click or tap or other similar simple activation.
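
A minimal TypeScript sketch of this activation model follows; because user agents synthesize a "click" event for pointer activation, keyboard activation, and programmatic simulation alike, one handler covers all three (the element id is hypothetical).

  const submit = document.querySelector<HTMLButtonElement>("#submit"); // hypothetical id
  submit?.addEventListener("click", () => {
    console.log("activated"); // fires for tap, click, Enter/Space, or submit.click()
  });
  submit?.click(); // programmatic simulation triggers the same handler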

Standard platform keyboard commands

Keyboard commands that are the same across most or all platforms and are relied upon by users who need to navigate by keyboard alone.

A sufficient listing of standard platform keyboard commands for use by authors can be found in the WCAG standard keyboard navigation techniques list.

Subtitles

Captions that are displayed with a work of media and that translate or transcribe the dialogue or narrative. Subtitles are synchronized with the soundtrack in real-time and can include spoken dialogue, sound effects, and other auditory information.

Task flow

Testing scope that includes a series of views that support a specified user activity. A task flow may include a subset of items in a view or a group of views. Only the parts of the views that support the user activity are included in a test of the task flow.

Technology

A mechanism for encoding instructions to be rendered, played or executed by user agents.

As used in these guidelines “web technology” and the word “technology” (when used alone) both refer to web content technologies.

Web content technologies may include markup languages, data formats, or programming languages that authors may use alone or in combination to create end-user experiences.

Temporary change of context

To be defined.

Test

Mechanism to evaluate implementation of a method.

Text

To be defined.

Two-dimensional content

To be defined.

Under the control of the provider

Where the provider is able to influence the content and its functionality.

This could be by directly creating the content or by influencing the content's author by means of financial or other reward, or the removal of reward.

Up event

A platform event that occurs when the trigger stimulus of a pointer is released.

The up-event may have different names on different platforms, such as “touchend” or “mouseup”.

User agent

Any software that retrieves and presents external content for users.

User interface context

A user interface with a specific layout and associated components. If more than X% of the associated components are changed, it is a new user interface context.

User manipulable text

Text that the user can adjust (see the sketch after this list). This could include, but is not limited to, changing:

  • Line, word or letter spacing
  • Color
  • Line length — being able to control width of block of text
  • Typographic alignment — justified, flushed right/left, centered
  • Wrapping
  • Columns — number of columns in one-dimensional content
  • Margins
  • Underlining, italics, bold
  • Font face, size, width
  • Capitalization — all caps, small caps, alternating case
  • End of line hyphenation
  • Links
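
The TypeScript sketch below shows one hypothetical way an author might apply user-chosen values for several of these properties by setting standard CSS properties on a container; the specific values are illustrative only.

  // Apply user-selected text preferences to a region of the page.
  function applyTextPreferences(root: HTMLElement): void {
    root.style.lineHeight = "1.8";       // line spacing
    root.style.letterSpacing = "0.12em"; // letter spacing
    root.style.wordSpacing = "0.16em";   // word spacing
    root.style.maxWidth = "60ch";        // line length
    root.style.textAlign = "left";       // typographic alignment
  }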

User need

The end goal a user has when starting a process through digital means.

User testing

Evaluation of content by observation of how users with specific functional needs are able to complete a process and how the content meets the relevant requirements.

View

Testing scope that includes all content visually and programmatically available without a significant change.

Conceptually, views correspond to the definition of a web page as used in WCAG 2, but are not restricted to content meeting that definition.

Viewport

Object in which the platform presents content.

The author has no control over the viewport and almost always cannot know what is presented in a viewport (e.g. what is on screen), because the viewport is provided by the platform. In browsers, the hardware platform is isolated from the content.

Content can be presented through one or more viewports. Viewports include windows, frames, loudspeakers, and virtual magnifying glasses. A viewport may contain another viewport. For example, nested frames. Interface components created by the user agent such as prompts, menus, and alerts are not viewports.

Privacy Considerations

The content of this document has not matured enough to identify privacy considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact privacy.

Security Considerations

The content of this document has not matured enough to identify security considerations. Reviewers of this draft should consider whether requirements of the conformance model could impact security.

Change log

This section shows substantive changes made in WCAG 3.0 since the First Public Working Draft was published on 21 January 2021.

The full commit history to WCAG 3.0 and the commit history to Silver are available.

Acknowledgements

Additional information about participation in the Accessibility Guidelines Working Group (AG WG) can be found on the Working Group home page.

Contributors to the development of this document

Previous contributors to the development of this document

Abi James, Abi Roper, Alastair Campbell, Alice Boxhall, Alistair Garrison, Amani Ali, Andrew Kirkpatrick, Andrew Somers, Andy Heath, Angela Hooker, Aparna Pasi, Avneesh Singh, Azlan Cuttilan, Ben Tillyer, Betsy Furler, Brooks Newton, Bruce Bailey, Bryan Trogdon, Caryn Pagel, Charles Hall, Charles Nevile, Chris Loiselle, Chris McMeeking, Christian Perera, Christy Owens, Chuck Adams, Cybele Sack, Daniel Bjorge, Daniel Henderson-Ede, Darryl Lehmann, David Fazio, David MacDonald, David Sloan, David Swallow, Dean Hamack, Detlev Fischer, DJ Chase, E.A. Draffan, Eleanor Loiacono, Francis Storr, Frederick Boland, Garenne Bigby, Gez Lemon, Giacomo Petri, Glenda Sims, Greg Lowney, Gregg Vanderheiden, Gundula Niemann, Imelda Llanos, Jaeil Song, JaEun Jemma Ku, Jake Abma, Jan McSorley, Janina Sajka, Jaunita George, Jeanne Spellman, Jeff Kline, Jennifer Chadwick, Jennifer Delisi, Jennifer Strickland, Jennison Asuncion, Jill Power, Jim Allan, Joe Cronin, John Foliot, John Kirkwood, John McNabb, John Northup, John Rochford, Jon Avila, Joshue O’Connor, Judy Brewer, Julie Rawe, Justine Pascalides, Karen Schriver, Katharina Herzog, Kathleen Wahlbin, Katie Haritos-Shea, Katy Brickley, Kelsey Collister, Kim Dirks, Kimberly Patch, Laura Carlson, Laura Miller, Léonie Watson, Lisa Seeman-Kestenbaum, Lori Samuels, Lucy Greco, Luis Garcia, Lyn Muldrow, Makoto Ueki, Marc Johlic, Marie Bergeron, Mark Tanner, Mary Jo Mueller, Matt Garrish, Matthew King, Melanie Philipp, Melina Maria Möhnle, Michael Cooper, Michael Crabb, Michael Elledge, Michael Weiss, Michellanne Li, Michelle Lana, Mike Crabb, Mike Gower, Nicaise Dogbo, Nicholas Trefonides, Omar Bonilla, Patrick Lauke, Paul Adam, Peter Korn, Peter McNally, Pietro Cirrincione, Poornima Badhan Subramanian, Rachael Bradley Montgomery, Rain Breaw Michaels, Ralph de Rooij, Rebecca Monteleone, Rick Boardman, Ruoxi Ran, Ruth Spina, Ryan Hemphill, Sarah Horton, Sarah Pulis, Scott Hollier, Scott O’Hara, Shadi Abou-Zahra, Shannon Urban, Shari Butler, Shawn Henry, Shawn Lauriat, Shawn Thompson, Sheri Byrne-Haber, Shrirang Sahasrabudhe, Shwetank Dixit, Stacey Lumley, Stein Erik Skotkjerra, Stephen Repsher, Steve Lee, Sukriti Chadha, Susi Pallero, Suzanne Taylor, sweta wakodkar, Takayuki Watanabe, Thomas Logan, Thomas Westin, Tiffany Burtin, Tim Boland, Todd Libby, Todd Marquis Boutin, Victoria Clark, Wayne Dick, Wendy Chisholm, Wendy Reid, Wilco Fiers.

Research Partners

These researchers selected a Silver research question, did the research, and graciously allowed us to use the results.

Enabling funders

This publication has been funded in part with U.S. Federal funds from the U.S. Department of Health and Human Services, National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), initially under contract number ED-OSE-10-C-0067, then under contract number HHSP23301500054C, and now under HHS75P00120P00168. The content of this publication does not necessarily reflect the views or policies of the U.S. Department of Health and Human Services or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government.