[Kari Kernen] Thank you to everyone for joining us today. We will begin momentarily. I'm just gonna wait a few minutes to let everyone finish joining the room.
And again, good morning, good afternoon, everyone. Gonna give us just one more minute to let attendees continue joining the room, and then we will get started.
All right, again, good morning, good afternoon, everyone. My name is Kari Kernen, and I am the Sales Development Manager here at TPGi. I just wanna thank everyone for joining us today for our webinar on Accessibility Testing with WCAG 2.2 with James Edwards.
Just a few housekeeping items I wanna go through first. This webinar is being recorded, so if you need it, the recording will be sent out after the webinar, usually by the end of the week. There are also transcriptions available, so if you need closed captioning, you can turn that on as well. If you have any questions, please utilize the Q&A box. We will try to get to as many questions as we can at the end of the webinar. If we don't get to your questions, we will respond to you by email later this week. Chat is turned on as well, but please try to stick to putting questions for the presenter in the Q&A box.
And last, if anyone needs any accessibility support or accessibility training, please reach out to Ida, and that's ida@tpgi.com, and someone from our sales team will get back to you. And with that, I'm gonna pass the mic on over to James, let him provide an introduction of himself, and get started.
Well, hello, and welcome. My name's James Edwards, I'm sometimes also known as Brothercake, and I'm going to talk today about Accessibility Testing with WCAG 2.2.
Now, the date of this webinar was intended to fall after the final recommendation was published, but that hasn't quite happened yet. It was planned for the end of August. However, it's been delayed by some objections relating to the internationalization of the text spacing and visual presentation recommendations, things like how letter spacing relates to non-letter based languages, for example, like Japanese. It's hoped these issues will be resolved in time for September publication, but this is not guaranteed.
I've been following this spec quite closely for the last year, and I've previously written about it on the TPGi blog. I'll paste a link to that article into the chat if you'd like to look at that at some point. But while that article was about how to meet the new SCs as an author or designer, today I'm going to talk about how to test the new SCs as an auditor or engineer.
And suffice it to say it's a lot more complicated, because we never know what we're gonna get, do we? As the saying almost has it, clients do the darndest things. So there are nine new success criteria in WCAG 2.2. Let's start by listing those out.
So first of all, the Level A checkpoints are 3.2.6 Consistent Help and 3.3.7 Redundant Entry. The Level AA checkpoints are 2.4.11 Focus Not Obscured, 2.5.7 Dragging Movements, 2.5.8 Target Size Minimum, and 3.3.8 Accessible Authentication Minimum. Then there are three AAA checkpoints: 2.4.12 Focus Not Obscured Enhanced, which as its name suggests is a more strict version of Focus Not Obscured; 2.4.13 Focus Appearance; and 3.3.9 Accessible Authentication Enhanced, which again is a more strict version of Accessible Authentication.
It had been planned that Focus Appearance was going to be AA, and it was going to be a companion to 2.4.7 Focus Visible being promoted to single A. However, that didn't happen. As I recall, it was moved to AAA because of concerns about its complexity, because although it's really easy to conform to this SC as an author, it is actually quite difficult to test, given the huge variety of styles and shapes that focus indicators can have. But we'll get to that later.
Okay, so let's start with 3.2.6 Consistent Help. This is about consistent placement of information that provides general help to users, including points of human contact and automated help mechanisms. So this would be things like a chat window, a telephone number, an email address, places where users can get help with the site or the company service in general. This SC doesn't require that such things be provided, only that, if they are provided, they can be found in a consistent location.
But the chances are most sites are already doing this. It comes down to consistent templating. So this is a relatively easy SC to pass, and in most cases it'll be easy to test as well. The benefit of this SC is, well, it helps all users, but it may be particularly beneficial for some users with cognitive or memory impairments.
These are the normative requirements, I'll read this out. If a webpage contains any of the following help mechanisms, and those mechanisms are repeated on multiple webpages within a set of webpages, they occur in the same order relative to other page content, unless a change is initiated by the user. And there are four things it lists as help mechanisms: human contact details, a human contact mechanism, a self-help option, or a fully automated contact mechanism.
So let's take an example of the kind of content that applies here. These are some screenshots of the TPGi website. At the top is the header, which has various logo links, a search field, and some links at the top, one of which says Contact. That's a link that takes you to another page, on which you can find a contact form. In the footer we've got a telephone number and an email address, and also a corporate address. All three of which constitute help mechanisms, because they're channels for getting help from humans.
Now, it's slightly ambiguous whether that corporate address should really count. I mean, it does, because the normative requirements define what a help mechanism is in terms of human contact details, among other things. So anything that is a point of human contact counts as a help mechanism. Whether this is direct information, such as these addresses, email, phone number, or whether it's a link to another page like the Contact link, is not significant. 3.2.6 applies to the information or the link as it's presented.
So the testing steps for this are quite straightforward. Identify the presence of repeated help mechanisms, verify that they appear in the same relative order on each page, and then repeat that testing for each responsive breakpoint. Because different responsive breakpoints may produce different layouts in which the order of things is different. That's fine, you don't have to be consistent between breakpoints, you only have to be consistent within a single breakpoint.
Now, same relative order is a key term here. This means the serialized order, i.e. the source order, not the visual order. Specifically, it means the same order relative to other content which is also repeated on every page. So in the case of the header, it would be other content that's always in the header, but ad hoc content which only appears on one page isn't included. So if you have, for example, a promotional link that's only on one page, and that changes the relative order of the help mechanism on that one page, then that doesn't matter.
The thing with breakpoints also applies to different templating. If you've got a site, for example, that's got a public area and a members' area, and those two areas of the site have completely different layouts, you don't have to be consistent between them, you only have to be consistent within each set of webpages.
And I've said the visual order doesn't have to be consistent, but ideally it should be. In most cases, because this all comes down to templating, if the source order is the same then the visual order is likely to be the same as well. But if you do find cases where it isn't, where the source order is consistent but the visual order isn't, which is unlikely, then it's worth testing that separately against 1.3.2 Meaningful Sequence and 2.4.3 Focus Order.
So here's another example. This is the same website header that I showed you in the previous screenshot, and this is what it looks like with all the CSS removed. So this is a representation of its serialized view, and the area highlighted in green is the area we're concerned about, comprising roughly the search box and the three links at the top right of the header: Free ARC Account, Login, and Contact.
So what we are concerned with is both the relative order of the items within this region and the order of the region relative to the overall header content. The term relative order is slightly ambiguous. The understanding docs explain it like this: when testing this success criterion, it's the help item that's relative to the rest of the content. When testing a page, other content that is present across the set of webpages and is before the help item should be before the help item on this page. Items which are after the help item on other pages should be after the help item on this page.
So it doesn't say specifically how much content is embraced by this. I mean, is it just the previous and next thing, or is it everything else in the header? This is not locked down, and it basically can't be locked down more precisely than that, because if it's too specific, that can lead to loopholes or false failures and things like that. So it's one of those kind of intuitive, "common sense" things, where you're thinking: once a user knows where to look for it, is it always in that same place? And you can take a view and use your judgment on whether that passes or fails based on those kinds of broad judgments. Like, is it always at the end of the page? Is it always at the start of the header? Stuff like that.
Testing barriers. Okay, so what I'm talking about here is accessibility barriers to being able to test this SC. For any SC that's concerned with visual presentation, for example, you need to be able to see in order to assess it, and these barriers are kind of unfortunate but inevitable with WCAG. However, in this case there are no apparent barriers that I can think of. You can test it with source code, it's all quite straightforward.
Automated testing: this can't be tested by automation, primarily because automation suites generally don't hold memory between page views. They test single pages, so they can't test whether the same content is present on multiple pages. And even if memory were held between pages, you can't unambiguously identify the same content on different pages anyway. You'd identify that by scanning the markup, but two pieces of markup on different pages might have incidental variations, like attributes or class names that are slightly different, which means automation cannot unambiguously determine that two pieces of markup are the same functional content. And even if that were not the case, you can't unambiguously identify what constitutes a help mechanism by automation anyway, because that's something that takes a human to determine.
Okay, that's very straightforward.
3.3.7 Redundant Entry. This is about preventing users from having to re-enter information they've already entered before within the same process, to reduce cognitive load. So, for example, this would be a multi-page application form or an online checkout which might ask for the user's name and address in more than one place. The benefit of this SC is for users with cognitive or memory impairments, and also for users with mobility impairments who rely on voice recognition or switch control, things like that, for whom data entry can be extremely laborious. So asking users to enter information they've already put in before creates an unnecessary burden.
These are the normative requirements. Information previously entered by or provided to the user that is required to be entered again in the same process is either auto-populated or available for the user to select, except when re-entering the information is essential, the information is required to ensure the security of the content, or previously entered information is no longer valid.
So here's one example of the kind of content that applies here. This is an online checkout form asking you to enter your delivery address, and there are two email fields. One says email, one says confirm email, so that's a requirement to re-enter redundant information. I'll come back to that in a moment to discuss whether it passes or fails. And here's another example. This is the second page in the same checkout flow, which asks you to enter your billing address for your credit card, which is potentially information you've already provided on the previous page.
So to manually test this, firstly identify applicable form fields. So anywhere within the same process that's asking for redundant information. Those are the forms that apply. Incidentally, process also means same session. So if you leave the site it's no longer the same session so it doesn't count as the same process. However it doesn't necessarily mean the same site. So a checkout flow might go to a third party payment system and come back again and that still counts as part of the same process. Excuse me.
The second step is to ignore those where exceptions apply. So the exceptions are: the previous information was in a different format, such as if an application asks you to upload your resume in a Word document and then asks you to enter your employment history in a web form; that doesn't count because it's a different format, so that's an exception. Or where duplicate information is required for verification, such as confirming the password you've entered. Or it's required for security, like entering your name for digital signing. Or it's no longer valid, which might be updating card details when payment was declined. Or it's essential, and essential in this case would be something like a memory game, which is essential because otherwise it's not a memory game.
Step three is to verify that redundant information is either auto-populated or available for the user to select. Now, auto-populated specifically means populated by the site, so browser autocomplete doesn't count. It's irrelevant to this SC. And available for the user to select, there's a whole range of ways that could work. It could be a dropdown menu with previous information that you can just select, or it could be text on the same page that literally could be copied and pasted, or in the case of a checkout flow it could be a checkbox like use my billing address as my shipping address. That would count as well.
So to go back to our previous two examples: the email and confirm email fields, these pass because they meet the first exception, re-entering the information is essential, because the duplicate entry is there to confirm that you've typed your email address correctly. So that's an essential exception. This one also passes because of this checkbox. The checkbox says my billing address is the same as my delivery address, and checking it completes all the fields in the billing address. So that also counts, because the redundant information is available for the user to select.
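To illustrate that checkbox pattern, here's a minimal sketch of how checking the box might auto-populate the billing fields, assuming the previously entered delivery values are available on the page; the ID and field names here are made up for illustration:

```ts
// Sketch: "my billing address is the same as my delivery address".
// Checking the box copies the previously entered delivery values into
// the billing fields, so the user can select rather than re-enter them.
const sameAs = document.querySelector<HTMLInputElement>('#same-as-delivery');
const parts = ['name', 'street', 'city', 'postcode'];

sameAs?.addEventListener('change', () => {
  if (!sameAs.checked) return;
  for (const part of parts) {
    const from = document.querySelector<HTMLInputElement>(`[name="delivery-${part}"]`);
    const to = document.querySelector<HTMLInputElement>(`[name="billing-${part}"]`);
    if (from && to) to.value = from.value;
  }
});
```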
And as I noted, browser autocomplete doesn't pass. So even if you had your address filled into your browser, like Sherlock Holmes, who I've used here (I had to Google what his postcode would be), browser autocomplete doesn't count. The fact that you can do this doesn't mean you pass this SC. Also consider the case where the checkbox isn't present. So in this screenshot there is no use the same address checkbox, and autocomplete is there, but that doesn't count. So this particular case would fail if that checkbox wasn't there, because there's no way for the user to select that information.
Testing barriers for 3.3.7. Again, no obvious barriers here. You know, you can do this by reading through the source code. It's not necessary to be able to see or have any particular perceptual abilities to test this one.
Automation: again, this can't be tested by automation, for the same reason. It doesn't hold memory between page views, and the same content between two pages may not necessarily be determinable anyway. Also, some exceptions are judgment calls that require intelligent knowledge, such as whether the duplication is essential or whether it's required for security or verification. Auto-population could potentially be identified if session-specific information is carried over, but it would take complex text analysis to do that. Like, if you enter your address and your address is then available in text on the next page, to determine that that is the same address when it might have been reformatted, different line breaks, different commas and so on, it would take text analysis to do that, which is currently not something that automation suites generally do, although they probably will in the quite near future.
Onto 2.4.11 Focus Not Obscured, right? This one's quite fun. This is about ensuring that elements are not entirely obscured by author-created content at the point when they receive focus. This issue most commonly occurs with things like sticky headers, notification banners, non-modal dialogs, things like that. This benefits all keyboard users, but particularly some users with cognitive or memory impairments, by ensuring that it's easy to identify where the focus position is.
And we do have an existing SC, 2.4.7 Focus Visible, but that doesn't cover this, because it's only concerned with whether the content has a mode of operation in which the focus is visible. So if a focused item is temporarily obscured, it still has a mode of operation in which the focus indicator is visible, therefore it wouldn't fail 2.4.7. And that's what this SC is for. Simple requirements: when a user interface component receives keyboard focus, the component is not entirely obscured due to author-created content.
So I'm gonna show you a demo of that now. So this is a typical site layout with a header and main page and it's got a row of nav links at the top and then a long list of links down the bottom. Now if I tab through this page, you can see it's all quite straightforward when I go downwards, but as I come back up and I get to the point where the focus meets the sticky header, it disappears beneath the sticky header. Now that's a fail because the currently focused item is entirely obscured by author-created content.
Okay, the testing steps for this. So firstly, identify the presence of focusable elements. Manually shift/tab through all the elements, and you have to go in both directions to test this, all the way down to the bottom and all the way back up. Then check for interactive exceptions. There are a couple of notes in the specification that provide specific exceptions, and I'll show examples of those in a moment. Then verify that none of these elements are entirely obscured at the point when they receive focus.
Now, note that the normative requirements refer specifically to user interface components, which is defined as a part of the content that is perceived by users as a single control for a distinct function. In most cases this is quite straightforward: if the button is a focusable element, then the button is the user interface component. But sometimes you have more complex, compound widgets where the whole thing is considered the component even though it's got parts within it. A color picker, or a slider is a better example of that: it's got a thumb within it, but the whole slider is considered to be the user interface component, and that's what you would test.
But note also that this definition doesn't include the focus indicator. So in a situation where the component is entirely obscured but part of its focus outline is still visible, it still fails, because the focus indicator doesn't count. Also note that if the obscuring content is semi-transparent, such that it does entirely obscure the element receiving focus but it's still possible to see that element because of the transparency, then this doesn't fail 2.4.11. However, in that situation the focus indicator should be tested against 1.4.11 Non-text Contrast.
Also note, critically, that this only applies to elements at the point when they receive focus. It doesn't apply to any element that could be focused, nor does it apply to elements that are already focused when the obscuring content appears; it only applies at the specific moment when that element receives focus.
So if I go back to some more demos now, I'll show you some of these exceptions. This is one, this is a persistent menu. So here we've got a menu builder that you can go through, but this menu is designed to remain persistent as you tab away from the nav bar. Generally they don't do that, generally menus like this close when you tab away from them, but in this example, where it doesn't close and remains persistent, these two links are now obscured. But that doesn't fail if, critically, there's a way of dismissing the obscuring content without moving focus, and it's the ability to do that without moving focus which is the key point. In this case I can press Escape, and that makes the menu disappear, and that means it meets the exception, and so it doesn't fail.
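As a rough sketch of what that dismissal behavior looks like in code, assuming a hypothetical .persistent-menu element:

```ts
// Sketch: Escape dismisses the obscuring menu without moving focus,
// which is what satisfies the exception in 2.4.11.
document.addEventListener('keydown', (event) => {
  if (event.key === 'Escape') {
    const menu = document.querySelector<HTMLElement>('.persistent-menu');
    if (menu) menu.hidden = true; // focus stays exactly where it was
  }
});
```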
The other exception is for movable content. This kind of stuff is quite common in online applications where you have draggable toolbars and panels, Photoshop-style stuff. So I can move these elements, and I can cause them to obscure an element, and then I can tab to that element, and now that element's obscured, but this doesn't fail, because the user themselves caused that state. So any kind of movable content like this, you only have to test it before any of those things have happened. You don't need to consider what might happen if the user moves content to obscure something.
Now finally, if I go back to that sticky header, this is the same page, but this time I've implemented a script that prevents the focused item from going underneath the header. As you can see, it approaches it and then it auto-scrolls to prevent it from ever going beneath the sticky header. Now, this works by setting scroll-padding-top on the HTML element, which effectively causes native scroll-into-view behavior to treat the bottom of the header as though it's the top of the viewport. The amount of scroll padding is applied via JavaScript, and it's dynamically recalculated whenever the header height changes, such as viewport resizing or increases in text size that might cause the header links to wrap and increase the height of the header.
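The underlying idea can be sketched in a few lines; this isn't the script from the article, just a minimal illustration of the scroll-padding-top technique as described:

```ts
// Sketch: treat the bottom of the sticky header as the top of the
// viewport for native scroll-into-view behavior, and keep the value
// in sync whenever the header's height changes (resize, text wrap, etc.).
const header = document.querySelector<HTMLElement>('header');

if (header) {
  const updateScrollPadding = () => {
    document.documentElement.style.scrollPaddingTop = `${header.offsetHeight}px`;
  };
  new ResizeObserver(updateScrollPadding).observe(header);
  updateScrollPadding();
}
```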
There's an article on the TPGi blog, and there's a link to that, which describes this. I originally wrote that article with a technique I came up with, and after I did, it was Alastair Campbell on Twitter who was like, "have you considered another approach", and his approach was way better than what I'd come up with, so fair enough. So I scripted and documented his approach, and that's what I'm recommending here. This is the best way of doing it. It's really simple to use, it's really robust, and the auto-scroll behavior it produces matches what native auto-scroll behavior does as well. So it's a very nice addition. Anytime you need a sticky header, just throw that script in; it basically completely solves the problem.
Right, testing barriers for 2.4.11. This does require the ability to make visual and spatial comparisons because you've gotta be able to see that content is being obscured. However, this can be done by automation.
This is the only one of these new SCs that can be fully automated. So what you do is programmatically focus each focusable element. By the way, I'm talking kind of blue sky here; I'm not going to give you specific code for how you'd automate this, just conceptually what you would do. So you would programmatically focus each focusable element in turn, and then in each case you'd query the bounding box and the stacking order of all other displayed and non-transparent page elements, and verify that the focused element's bounding box coordinates are not entirely contained within the bounding box coordinates of another element with a higher stacking order.
That's a bit hard to explain, but it's fairly easy to code. And you could cache the collection of all elements on the page and get all their coordinate values, and then just refer to that cache as you're tabbing through, so that you don't have to query every element every single time, which would be really inefficient. So yeah, that's doable, just by measuring relative coordinate values. It is possible to test this by automation.
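As a very rough sketch of that blue-sky idea, using elementFromPoint sampling as a shortcut for the full bounding box and stacking order comparison (the selector list is illustrative, and the semi-transparency caveat isn't handled):

```ts
// Sketch: focus each focusable element and check whether every sampled
// point in its box is topped by some unrelated element. If so, the
// element is (probably) entirely obscured and is a 2.4.11 candidate.
function isEntirelyObscured(el: HTMLElement): boolean {
  const r = el.getBoundingClientRect();
  const points: Array<[number, number]> = [
    [r.left + 1, r.top + 1],
    [r.right - 1, r.top + 1],
    [r.left + 1, r.bottom - 1],
    [r.right - 1, r.bottom - 1],
    [r.left + r.width / 2, r.top + r.height / 2],
  ];
  return points.every(([x, y]) => {
    const top = document.elementFromPoint(x, y);
    return top !== null && top !== el && !el.contains(top) && !top.contains(el);
  });
}

document
  .querySelectorAll<HTMLElement>('a[href], button, input, select, textarea, [tabindex]')
  .forEach((el) => {
    el.focus(); // the browser scrolls it into view, as it would for a keyboard user
    if (document.activeElement === el && isEntirelyObscured(el)) {
      console.warn('2.4.11 candidate failure:', el);
    }
  });
```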
There are two exceptions, but you can avoid those in automation. The movable content exception won't arise if you test on page load without any prior saved changes, and the dismissible content exception won't arise if elements are only focused and never actioned. It might be a good idea, though, to programmatically send an Escape key press before testing for overlap on each focused element, just in case dismissible content has appeared for some other reason.
Okay, 2.4.12 Focus Not Obscured Enhanced. This is identical to 2.4.11, except it also fails for partially obscured content, and it has no exceptions for movable or dismissible content. So these are the normative requirements: when a user interface component receives keyboard focus, no part of the component is hidden by author-created content. So even partial overlap, even a few pixels, would fail that.
So the manual testing for this is essentially the same. Identify the presence of focusable elements. Manually shift/tab through all such elements in both directions. There are no interactive exceptions, so you don't have to check for those. And then verify that none are partially or entirely obscured on focus. And again, you would repeat this for every responsive breakpoint, because the layout might be different.
Right, excuse me while I drink some tea.
Right, 2.5.7 Dragging Movements. This is concerned with interfaces that use pointer dragging movements such as drag and drop and requires that interfaces which use dragging movements can also be operated by a single pointer without dragging movements.
Now even the simplest dragging motion needs quite precise pointer control. You have to be able to click down, keep your mouse clicked down while you move it around. And this may not be possible for users who have issues with fine motor control or who use assistive technology that simulates pointer movement from voice command or other input. I dunno if you've ever tried using simulated mouse input, but it's really laborious and long-winded. It's not a fun way of interacting at all. So you'd want to avoid that, avoid complex mouse movements as much as possible.
Now, there is an existing SC, 2.5.1 Pointer Gestures, but that doesn't cover this, because dragging is not a pointer gesture. A pointer gesture is defined as something that includes directional path or multi-point information, i.e. the path you take from A to B is significant, whereas with dragging movements the path is not significant. Only where you finish, only the end point, is significant. So it's not a pointer gesture, and that's why it's covered by this new SC, Dragging Movements.
Normative requirements here: all functionality that uses a dragging movement for operation can be achieved by a single pointer without dragging, unless dragging is essential or the functionality is determined by the user agent and not modified by the author. That latter point would be things like mobile phone gestures where you can swipe and drag, like pulling down the page to refresh, or swiping left and right to navigate. Those things are not covered by this. It's only author-created content.
So I'll give you an example of drag and drop, and this is what drag and drop looks like. You've seen this many times, I'm sure; I expect you've encountered it. You can just drag items and move them to different places, and there we go, that's drag and drop. But in this particular interface, drag and drop is the only way of interacting. There's no other way of doing it. Therefore this fails 2.5.7.
So to test for this, you would firstly identify content that uses pointer dragging. Some examples of that: drag and drop, drag sorting, custom sliders, carousels, and color pickers all use dragging movements. Step two is to ignore those where exceptions apply. Those are functionality provided by the browser or system, as we've talked about. That also includes things like un-styled native sliders. If it's just a native range input that hasn't been author styled, then that would be an exception. Also where the dragging movement is essential; that would be something like an online test for mobility, which obviously requires you to do that kind of motion to see if you can. So that would be an exception. Also if there's equivalent functionality which does not rely on pointer dragging movements available elsewhere.
Step three is to verify that content can be used with single pointer actions. Some examples of that would be drag and drop that can be operated by point and click, or where the targets have actionable menus, excuse me, or where drag sorting or rearrangement includes up and down buttons or position number inputs. Text input for changing the order would count, because text input is mode agnostic for the purposes of this SC. Carousels often include previous and next buttons. Sliders sometimes have up and down buttons on them. Color pickers generally already pass, because with most color pickers you can drag around to select colors, but you can also single click anywhere in the color space to select that color.
What's important here is that keyboard support is not sufficient and is therefore irrelevant to testing this. That doesn't include text input, as I noted, because that's mode agnostic. But if you've got a drag and drop interface which requires pointer dragging but can also be operated by keyboard, that is not sufficient to pass this SC, because it's only concerned with pointer operation.
So if we go back to that drag and drop example, excuse me, this is the same basic script, but this time I've retrofitted it with point and click functionality. So here I can click one item, and as soon as I do, the targets become available, and then I can just single click somewhere else to move it there. That's a fairly straightforward enhancement to implement with drag and drop, depending on how it's coded. But that would pass, simply because you can do this simple, Windows 3.1-style point and click operation.
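A rough sketch of that kind of retrofit, with made-up class names:

```ts
// Sketch: click an item to pick it up, then single-click a drop target
// to move it there; no dragging movement required.
let picked: HTMLElement | null = null;
const targets = document.querySelectorAll<HTMLElement>('.drop-target');

document.querySelectorAll<HTMLElement>('.draggable-item').forEach((item) => {
  item.addEventListener('click', () => {
    picked = item;
    targets.forEach((t) => t.classList.add('available')); // reveal the targets
  });
});

targets.forEach((target) => {
  target.addEventListener('click', () => {
    if (!picked) return;
    target.appendChild(picked); // a single click completes the move
    picked = null;
    targets.forEach((t) => t.classList.remove('available'));
  });
});
```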
Another example that would pass: this is a wonderful script written by Darren Sennef. Let me give you a link to that article. He wrote a really long, complicated article about all the different ways in which drag sorting can be made accessible, and this is the final pattern he settled on. The point was to make this keyboard accessible, but as a consequence of that, it now passes 2.5.7 as well, because these up and down buttons can be operated by single pointer clicks. Nice and easy, just single clicks, and it moves up or down. So the fact that you can do that means it passes 2.5.7.
A slider, here's another example. This is a custom slider that I've implemented. Typically sliders are moved by a dragging operation, but here you can click in the track to set the slider to that point. So as long as you can do that, as long as that functionality is available, this also passes 2.5.7.
Testing barriers for 2.5.7. Unfortunately this is a hard barrier: it requires the ability to use pointer dragging movements, because in order to identify that content requires pointer dragging movements, you have to be able to perform them. Well, not really, but there you go. This can't be helped. And this can't be tested by automation either, partly because complex interaction is required, which might not be possible to emulate. And even if it was possible to emulate, that kind of invalidates the test, since testing with emulation is as much a test of the emulation as it is of the thing it's emulating. So emulation would not be a reasonable way of testing this.
But even if it were, you can't necessarily identify the relevant content anyway. There may be cases where there's nothing in the markup that clearly identifies draggable content; there may be no roles or anything in there that tells you it's draggable just by looking at the markup. The only way to do it would be by programmatically evaluating the script to determine how content can be interacted with, just by looking at what events it's binding. But that isn't a thing; that doesn't exist. Maybe LLM analysis might make it possible in the future, but for now that's blue sky.
Right, 2.5.8 Target Size. This is about, sorry, this is about ensuring that targets are large enough for pointers, for users with limited mobility or limited vision. It works in two ways: it's called target size, but it's actually more about target spacing. This is a bit of a subtlety that took me a while to wrap my head around.
These are the normative requirements. Okay, so the size of the target for pointer inputs is at least 24 by 24 CSS pixels, except where... now, there are five exceptions, which I've paraphrased here because the full text of them wouldn't fit on a slide. So one is the Spacing exception. This is defined by drawing a 24 pixel diameter circle centered on each undersized target. I'll show you a demo of this in a moment. If a target's circle doesn't intersect any other circle or any other target element, then it passes.
Now, the 24 pixel circle abstraction is a useful way of measuring this, because it works for any shape of target. But for regular rectangular targets you can do it in your head. If a row of horizontal buttons are all 16 by 16, then they must have eight pixels of space between them, because the radius of that circle is 12, the button itself accounts for eight of it, so there's four pixels of overhang on each side, and you need four plus four of space to prevent adjacent circles from overlapping. So you can do that in your head for rectangles. But the circle abstraction works for non-rectangular shapes, or shapes that are not perfectly aligned in a row or column.
The second exception is Equivalent, where the same function is available through a different control; note, of course, that the different control itself also has to pass target size. The third exception is Inline: the target is constrained by the line height of non-target text. That basically means text links within a paragraph, but it doesn't mean text links in general, and I'll come back to that again in a moment.
The other two exceptions are User Agent Control, where the size is not author modified, so that would be something like a native radio control or checkbox; if there's no author styling on that, then that's an instant pass. And Essential, where the presentation is essential or legally required. I'm not aware of any specific circumstances where that would be the case, but I know the law gets quite arcane with stuff like this, doesn't it, like online forms that have to emulate paper forms, stuff like that. Law does weird stuff.
Right, yeah, let me define target. That's the first thing to get to here. The target is the region of the display that will accept a pointer action, such as the interactive area of the user interface component. So this is not the visible area, or not necessarily the visible area; it is the interactive area. And this can become significant in cases where the interactive area is not the same as the visible area.
Let me show you some examples of how this works. On the left here we've got a purple square representing a button or something, and it's got an orange outline around it. That orange outline is the 24 pixel square that represents the smallest size of a target. And in the first example this passes; it's a square button, 24 by 24, that passes easily. But the second button is, what is it, 32 by 18. Now, that wouldn't count. Even though the area of that adds up to the same, it doesn't count, because it has to be 24 by 24 along both sides. Or more precisely, and this is quite critical here, I only just understood this a few days ago: it must be large enough that a 24 by 24 pixel square can be drawn entirely inside it.
That precision is really significant when it comes to elements with rounded corners. Here we've got the same button on the left, a square button 24 by 24. On the right is another button that's 24 by 24, but this one's got rounded corners. Now, when you apply border radius to a button, the white areas that are outside the border radius are not interactive. So you can put your mouse in the purple part and that's interactive, but these white parts around the corner edges are not interactive. As a result, you can no longer fit a 24 by 24 square entirely in the target, therefore this one fails to meet the smallest size; it counts as undersized, and I'll come back to undersized in a moment. So in order to count as sufficiently large to meet the 24 by 24 minimum, the orange square has to fit entirely inside the button, and it doesn't. When you have rounded corners on a button, you have to increase the size of the button in order to make that work. In this particular example, the button would have to be 30 by 30 in order to fit that 24 pixel square entirely inside it.
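For the curious, here's a back-of-envelope derivation of that 30 by 30 figure; it assumes a corner radius of 10 CSS pixels, which is consistent with the numbers in the example. The corner of the inscribed 24 pixel square sits at an inset $d = (s - 24)/2$ from each edge of a button with side $s$, and it stays inside a rounded corner of radius $r$ when:

$$(r - d)\sqrt{2} \le r \quad\Rightarrow\quad d \ge r\left(1 - \tfrac{1}{\sqrt{2}}\right) \approx 0.293\,r$$

With $r = 10$, that gives $d \ge 2.93$, so $s \ge 24 + 2(2.93) \approx 30$.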
This is also significant in cases where non-rectangular SVG shapes are directly interactive. Like, if you have a path element that draws a shape and the path itself is clickable, then the 24 pixel square has to fit entirely inside the path. Similarly with image maps, if anyone uses image maps anymore, I don't know, but they still exist: if the areas are non-rectangular, then again the square has to fit entirely inside the region.
But the interactive target isn't necessarily the same as the visible shape. So you could have an example like this. The gray outline here just represents like a button that would otherwise be transparent just so you can see where the boundaries are. But imagine a button that's entirely transparent and the only visible part of it is these small icons inside it. Now these would pass because even though the visible part of the button is quite small, the whole button is an interactive target. So you can click anywhere in the region around it and it's still interactive. Therefore this passes because what matters is the interactive target, not necessarily the visible target.
So these are the kinds of things you're most likely to encounter that throw a slight spanner in the works: either buttons like this, where the visible and interactive targets are not the same, or rounded corners. It's rare to encounter cases where a non-rectangular shape itself, like this, is interactive, but it could happen. And in those cases, you just remember the rule that the 24 pixel square has to fit inside it.
So the testing steps for this: you identify the presence of pointer targets, ignore those where exceptions apply, and then verify that the targets have sufficient size or spacing. The exceptions are the ones I went through earlier: equivalent controls that meet the minimum size, sufficient spacing, targets constrained by line height, browser or system widgets, or the small size being essential. An essential small size might be something like a game that tests pointer precision.
Undersized targets are where the spacing exception comes in. So if you've got a target that doesn't meet the 24 by 24 minimum size, that's when the spacing exception kicks in. So I'm gonna show you some examples now of a site. This is a mockup site that I created. In case you're wondering what the text is, it's what you get if you pass "lorem ipsum" into Google Translate. It's quite funny really, a ridiculous mouse will be born. I like that. I might use that as a slogan.
So this is just a page with various interactive elements. Now, I'm using a bookmarklet here. This is something that Steve Faulkner created; I'll give you a link to that as well. What the bookmarklet does is draw 24 pixel diameter circles centered on every interactive element. So the green ones are targets which are 24 pixels or more, and the blue ones are targets which are too small.
So straightaway, the logo links in the header easily pass. The links in the top right also pass; even though they have small icons, they're inside square buttons, so the whole button is interactive, hence the non-rectangular target thing doesn't occur here. And that's fine. The icon links in the right hand column also pass, even though they're small, because they have sufficient spacing. If you can see this, it visually shows that these circles are not overlapping. But in mathematical terms, these are 20 pixel icons with about eight pixels of spacing between them, so their centers are 20 plus 8, which is 28 pixels apart, more than the 24 needed. So that's enough. They pass the spacing exception, so they're fine.
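That circle arithmetic is easy to put into code as well; here's a minimal sketch of the spacing check between two undersized targets (the full rule also requires the circle not to intersect any other target element, which this omits):

```ts
// Sketch: the 24px-diameter circle abstraction. Two undersized targets
// clear the spacing exception if their centers are at least 24px apart.
interface Box { left: number; top: number; width: number; height: number; }

function circlesClear(a: Box, b: Box): boolean {
  const ax = a.left + a.width / 2, ay = a.top + a.height / 2;
  const bx = b.left + b.width / 2, by = b.top + b.height / 2;
  return Math.hypot(bx - ax, by - ay) >= 24; // 2 × the 12px radius
}

// The icon links above: 20px icons with 8px gaps put their centers
// 28px apart, which clears the 24px minimum.
console.log(circlesClear(
  { left: 0, top: 0, width: 20, height: 20 },
  { left: 28, top: 0, width: 20, height: 20 },
)); // true
```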
The paragraph links here have plenty of space, but even if they didn't, they would still pass, because these text links are constrained by the line height of non-target text. Now, the reason that's important is because if you have, say, a paragraph of text with a few links inside it, you can't expect those links to have more height than the surrounding text, because what that would do is create a paragraph with uneven line spacing. And once you've got that, it creates a cognitive barrier for people with certain reading or comprehension disabilities, making the text harder to read. So that's why that exception applies.
But note that it doesn't just apply to any link at all. These links over on the right, soft-footed soccer ball, complete keyboard, tomorrow protein, and the living element, whatever that's supposed to mean: these circles overlap. Now, there's nothing constraining these. They're not within a paragraph; there's no reason why these links couldn't be taller. And as a result, these links fail. So that's our first failure.
Let's look at the form. Now, label elements are also interactive targets for the purposes of target size. So if the form control is inside the label, the whole label counts. And there's an example you can potentially see down here, where we've got two times and then a text box, and this whole thing is wrapped in the label. So the whole label is considered, and they pass because they're large enough.
Over here we've got separate inputs and labels, radio controls and labels. Now, these labels are quite narrow, and the circles overlap each other, so they should fail. But in fact they don't, because, and this gets complicated, the native radio inputs automatically pass because they're un-styled native widgets, and the labels, which should fail, don't, because there's an equivalent control on the same page that performs the same function, i.e. the radio button performs the same function as clicking the label. Therefore these labels don't fail, even though they're undersized and overlapping. So let's get rid of those to show they're gone.
Now look at the slider. This slider is quite narrow. This doesn't pass the native widget exception, because I've styled it with a specific width. That specific width means it no longer counts as an un-styled native widget, which means its lack of height is now a problem, and because it overlaps the label, this should fail. Now, the label itself is large enough, so that would pass. And as we saw with the radio controls and their labels, there's the possibility of an equivalence exception, but that doesn't apply in this case, because clicking the label does not provide equivalent functionality. With the slider you can drag it and change its value, and you can't do that with the label; clicking the label only sets focus on the slider. So it's not an equivalent control, and therefore the slider fails.
These other controls here: the checkboxes are native controls, so they're fine. These are custom switches; they should fail, but they pass because they're sufficiently spaced. So overall, the only failing content on the page is the slider and the tightly packed list of links.
So, testing barriers for this. This benefits from the ability to make visual and spatial comparisons, but you can also do it with source code and mental abstraction. So it's a soft barrier, not a hard barrier. It's easier to test this if you can see, but it's not absolutely required.
And also, some of this can be automated. Targets that pass because they have sufficient size are unambiguously determinable by automation. Smaller targets that pass because they meet the spacing exception can be tested by evaluating their viewport coordinates, again similar to Focus Not Obscured. And targets that pass because they're native widgets whose size is not author-modified can also be tested by automation, just by verifying the CSS against a list of which elements are native widgets. So that's all doable.
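A trivial sketch of the automatable part, just flagging boxes that can't contain a 24 by 24 square; the selector list is illustrative, and the exceptions and the rounded-corner subtlety would still need human review:

```ts
// Sketch: flag candidate 2.5.8 failures by bounding box size alone.
document
  .querySelectorAll<HTMLElement>('a[href], button, input, select, textarea, [tabindex]')
  .forEach((el) => {
    const { width, height } = el.getBoundingClientRect();
    if (width < 24 || height < 24) {
      console.warn('Undersized target; check spacing and other exceptions:', el);
    }
  });
```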
But some of it can't be detected; manual checks will still be needed. Things that meet the inline, equivalent, or essential exceptions can't be tested by automation. Equivalent and essential obviously require human decisions. But testing that something is constrained by the line height of non-target text is also not automatable, because it's actually not possible to determine that the height of the target is constrained by its parent's line height, since computed line height is not necessarily a number. And even if you measured the element and its parent and determined that they're the same height, that doesn't prove that one constrains the other, simply because they're the same. So that's a judgment call. Also, some interactive targets might not be identifiable, like targets that are functional widgets but don't have any recognizable markup like role or tabindex attributes. That would fail elsewhere, but it also precludes automated testing here.
Feeling the pressure of time now, so I'll get a move on. 3.3.8 Accessible Authentication Minimum. This is about cognitive function tests used to authenticate users, such as solving a puzzle or remembering a password. It requires that these things are not required for authentication unless an alternative is available or exceptions apply. This benefits users with cognitive issues relating to things like memory, reading, numeracy, or perceptual processing.
So here are the normative requirements. A cognitive function test, such as remembering a password or solving a puzzle, is not required for any step in an authentication process, unless that step provides at least one of the following. The first is Alternative: another authentication method is available which doesn't rely on a cognitive function test.
The second is Mechanism: a mechanism is available to assist the user. That would be something like username and password authentication, which does count as a cognitive function test, but where you can use browser autocomplete or other browser tools to complete it. So that counts as a mechanism of help that's available to the user.
The third exception is Object Recognition: the test is to recognize objects. I'll show an example of that in a minute. And the fourth is Personal Content: the test is based on non-text content the user has provided. I can't see that anyone would do this, but something like, if you have a profile that has images in it, the site could use images from your profile and ask you to recognize which one of these is an image of you, or something like that.
The most common one you're gonna encounter is this, which is captcha tests that ask you to select all squares with bicycles, or select all squares with crosswalks, and so on. These are an exception because they're object recognition tests, which are considered sufficiently easy. There is, of course, a potential for cultural lack of familiarity. You might not know what a crosswalk is, or you might know it but not know what it's called. And these scenes are generally US urban scenes, so if you live in Europe you might not recognize some of the names. But that's not an accessibility issue, because it affects everyone equally, so the SC doesn't consider it.
But here are some other examples of captchas that will fail. One of them is a text-based captcha, where you have to look at some obfuscated words and type them in a text field. This relies on perceptual processing and transcription, so that fails. The second is an audio captcha, which relies on perceptual processing and memory. And the third is this rotating captcha. I dunno if you've seen these weird things, where you've got a hand pointing in one direction and some kind of animal, and you've got to rotate the animal until it's pointing in the same direction as the hand. I mean, who comes up with this stuff? I don't know. But this relies on perceptual processing, so that would also fail.
So, manual testing for 3.3.8. Firstly, identify the presence of cognitive function tests. The primary things you'll find are either captchas, or username and password authentication, or inputting a security code, where you get sent a code and you have to type it into a field for verification. All of those are cognitive function tests. Step two is to ignore those where exceptions apply. As I've said before, that's captcha tests that ask you to identify a familiar object. Also authentication fields that ask for your real name, your email address, or your phone number, because that information is the same on every site, so it doesn't count as a cognitive function test. And the third exception is where an alternative authentication method is available, and the most common alternative method you're likely to find is two-factor authentication. But note that the two-factor authentication itself also has to be tested against 3.3.8.
Step three: I've listed some specific cases that are so common it's worth putting in specific steps. So for username and password authentication, verify that you can either copy and paste or autocomplete into the field; these are mechanisms that help you to complete it. For a security code, verify that you can receive it on the same device and enter the whole code.
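On the authoring side, a code field can at least invite browser assistance; here's a minimal sketch using the standard one-time-code autocomplete token:

```ts
// Sketch: a security-code field that accepts the whole code and lets
// supporting browsers offer the code that was just received.
const code = document.createElement('input');
code.type = 'text';
code.inputMode = 'numeric'; // numeric keypad on touch devices
code.autocomplete = 'one-time-code';
document.querySelector('form')?.append(code);
```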
So I'll go to this example here. Here's a couple of things from an online banking interface. This password field supports autocomplete. Now, for this particular bank, it actually doesn't support copy and paste; it blocks copy and paste into that field. If you encounter that, I'd probably log it as a warning, saying that's not good usability, and there are literally no security benefits to doing it. But technically, that would pass as long as browser autocomplete works. Ideally it should support both autocomplete and copy and paste.
And the security code in this case fails, because it's asking you to enter the second and fourth digits of your six digit security code. So this is a reliance on transcription, and there's no mechanism to help, because you can't just copy and paste the whole code; you have to enter parts of it. So that counts as something that relies on transcription. Therefore it is a cognitive function test, and it fails.
Right, now, you may recognize this. This is a screenshot from a "Where's Waldo?" book, sometimes called "Where's Wally?" It's a children's book series where you identify a familiar character in a crowd scene. This example is a crowded beach, full of different characters and goings-on, and Waldo himself is always wearing round glasses, a red and white striped hat, and a red and white striped jumper. So Waldo is a familiar object that you have to recognize, and this would in fact pass, because it's familiar object identification. I can't see that you would ever actually do this on a site; it's kind of ridiculous. But this would technically pass. And just in case you're wondering where he is, he's up there in the top right hand corner. It's quite a hard one, 'cause you can only see his head. But yeah, this would pass, just not in reality.
So, testing barriers with this. This requires the ability to visually identify content. To identify that a particular piece of content is a captcha object recognition test, and what it's asking for, you have to be able to see that content. So that's a pretty hard barrier.
Automated testing: this can't be tested by automation, again because you can't identify relevant content unambiguously. If a form field is requesting authentication information, how do you know? The label text or other identifying attributes could be any number of things. And the visual content of a captcha can't be determined without sophisticated image analysis, which isn't currently possible to the required level of accuracy.
The fact that content is a captcha at all cannot be unambiguously determined, and the availability of an alternative can't be unambiguously determined either. Also, even if you did identify an alternative, you're unlikely to be able to perform the steps required to complete it, like two-factor authentication or biometrics; automation wouldn't be able to complete those. And actually, that's quite a good thing, because if it were possible to complete biometrics by automation, then biometrics itself wouldn't be secure. So this is not testable by automation, although improvements in image analysis might make some of this possible in the future, like content identification.
Now, 3.3.9 Accessible Authentication Enhanced. This is the AAA version of Accessible Authentication. It's exactly the same as 3.3.8, except there are no exceptions for object recognition tests or user provided content. So the normative requirements are exactly the same as per 3.3.8, except that the two exceptions for object recognition and personal content don't exist.
So the manual testing steps are the same. Identify the presence of cognitive function tests. Step two, ignore those where exceptions apply, mostly goes away: the only exceptions that still apply are those which ask for the user's real name, email address, or phone number, or where an alternative authentication method is available. Those standard exceptions still apply, but the object recognition ones don't. So captcha content of any kind is a failure of 3.3.9. Like that, for example.
Okay, this is the last one, and I've saved the best for last. 2.4.13 Focus Appearance. This expands on, but does not replace, 2.4.7 Focus Visible and 1.4.11 Non-text Contrast, by adding further requirements for the contrast between focused and unfocused states, and the minimum size of the focus indicator relative to the element size. So it goes beyond visibility to define a minimum level of visibility. This benefits all users, but particularly some users with low vision, cognitive or memory impairments, by ensuring that it's easy to identify where the focus position is.
So these are the normative requirements. When the keyboard focus indicator is visible, an area of the focus indicator meets all of the following. I'm paraphrasing here again, 'cause the full SC is too wordy for slides. So it's at least as large as the area formed by a two pixel perimeter, and I'll define that in a moment, and it has a contrast ratio of at least three to one between states. And between states literally means the same pixels. So if you're dealing with an outline, it's the contrast between the outline color and the same pixels that the outline would occupy when it's not visible.
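The three-to-one figure is the standard WCAG contrast ratio, so the check itself can be sketched directly from the relative luminance formula; the blue-on-white example at the end is just an illustration:

```ts
// Sketch: WCAG contrast ratio between two sRGB colors, as used for the
// change-of-contrast test between focused and unfocused states.
type RGB = [number, number, number];

function luminance([r, g, b]: RGB): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(a: RGB, b: RGB): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

// A blue outline drawn over pixels that are white when unfocused:
console.log(contrastRatio([0, 0, 255], [255, 255, 255])); // ≈ 8.6, passes 3:1
```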
The exception to this is where the focus indicator cannot be author modified, which would apply to things like assistive technologies that draw their own reading indicator, or where the focus indicator and the background are not author modified. Now, that's very specific: the focus indicator and the background are not author modified. So if you've got a page where you've set the page background color to, say, light gray, and other than that you're just relying on native focus indicators, you've automatically failed. In order for native focus indicators to pass 2.4.13, the background color has to be either unmodified or set to white. The reason white counts as unmodified is because that's the default in color contrast calculations where one of the colors hasn't been specified.
So let's define perimeter. The perimeter of an element is a continuous line forming the boundary of a shape, not including shared pixels, or the minimum bounding box, whichever is shorter. I'll unpack that as we go along, 'cause there's quite a lot to unpack. And then it defines the perimeter calculation: a two CSS pixel perimeter around a rectangle is 4h + 4w, where h is the height and w is the width, and a two CSS pixel perimeter around a circle is 4πr.
Now, it took me a while to wrap my head around this, because the maths for this is wrong: that's not how you calculate the area of a two CSS pixel perimeter. But as I eventually discovered, the maths is intentionally fudged to make it easier to use. And this was a brilliant idea; whoever came up with it, it's genius.
So we're gonna have to do some maths now, and I apologize for that; we're gonna have to do a little bit of high school maths. This is the standard maths formula for calculating the perimeter of a rectangle: perimeter equals two times the height plus two times the width. In this case that's 300, because the width is a hundred and the height is 50. Now, a mathematical perimeter is an abstraction that has a line thickness of zero. Once you actually draw that perimeter, as you would with a two pixel outline, the lines will overlap at the corners, which adds to the visible area and means that this simple formula is no longer correct.
This is what shared pixels means. So you've got an outline, and each two lines of the outline overlap at the corners, where these darker blue squares are; those are referred to as shared pixels. In the specification, shared pixels are explicitly ignored, so the actual part of the outline that you're measuring is the line excluding the shared pixels. In this case, the white corners show that those shared pixels are not included, and that's what makes the formula so simple: the two pixel perimeter area is now equal to four times the height plus four times the width, which is 600 in this case.
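Written out in full, using the slide's 100 by 50 rectangle:

  perimeter          = 2h + 2w = 2(50) + 2(100) = 300 CSS pixels
  2px perimeter area = 4h + 4w = 4(50) + 4(100) = 600 square CSS pixels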
And the fact that you don't have to include the shared pixels means that you can use the same basic maths formula for any kind of shape, and it will still work, which is particularly useful when it comes to non-rectangular shapes. If you've got a circle, for example, the formula for a two pixel perimeter is four times pi times the radius. But in fact, the literal area of a two pixel perimeter is slightly greater than that, because the radius of the perimeter is greater than the radius of the circle.
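To make that concrete, here's a worked example assuming a hypothetical radius of 25 CSS pixels:

  spec formula:  4πr = 4 × π × 25 ≈ 314 square CSS pixels
  literal ring:  π(r + 2)² − πr² = 4π(r + 1) = 4 × π × 26 ≈ 327 square CSS pixels

So the spec's fudged figure is slightly smaller than the literal drawn area, which makes the requirement slightly easier to meet.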
The same goes for a five pointed star: the perimeter of a five pointed star is 10 times the length of one side, assuming all the sides are the same, so the two pixel perimeter is 20 times the length of one side. If you had to calculate the shared pixels here, it would be really complicated, because the shared pixels would form triangles and convex quadrilaterals, which require intensely complicated maths to work out. But thankfully you don't have to, because of this simple maths fudge in the formula. So for any standardized shape you have, just google what the perimeter formula is, then multiply it by two, and there you go. It's really easy to use, which is a godsend, quite frankly.
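In other words, the general rule that falls out of the fudge is simply: two pixel perimeter area = 2 × perimeter. For a hypothetical star with 30 pixel sides:

  perimeter          = 10s = 10 × 30 = 300 CSS pixels
  2px perimeter area = 2 × 300 = 600 square CSS pixels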
The last thing to define is the difference between boundary and minimum bounding box. The perimeter is either the boundary of the shape, which is a line going directly around the shape, or the minimum bounding box, which is a rectangular box that entirely encloses the shape. The perimeter definition says that you should use whichever one is smaller, and that's the minimum area for the actual focus indicator, whatever that might be. In this case, the boundary is smaller than the minimum bounding box, but it will vary depending on the shape. The cow is spherical, yes.
So, manual testing for 2.4.13. Step one: identify the presence of user interface components; we talked about that earlier, so for the slider, for example, that would be the whole slider, not just the thumb. Step two: manually tab through those elements so you can see their visible focus indicators. Step three: ignore those where exceptions apply; again, I've talked about those. They're cases like where the focus indicator is determined by the user agent and the background hasn't been changed, or where the element is focusable but non-interactive, like a heading or a section that's the target of a skip link and has negative tab index. Elements like that are not required to have visible focus, so they don't fail.
And step four is to verify the contrast between the focus indicator and the unfocused pixels. As I said, that's the same pixels it would take up if it were not shown; in cases like a background color change, that would be the whole background. Then step five: calculate the total area taken up by all visible pixels within the focus indicator, and verify that that total area is equivalent to the area of a two pixel perimeter.
So when you're testing this, you would start by measuring the element and determining what its minimum perimeter is, and then you would measure the size of the focus indicator to make sure it matches. But most of the time you won't have to do those measurements, because the most common examples are quite straightforward.
First example: outlines. We've got a one pixel outline here, which fails because it doesn't meet a two pixel perimeter. In the second example, there's a two pixel solid outline, which passes. That's the easiest approach, and if you were giving recommendations to clients, the most obvious recommendation would be: use a two pixel solid outline and you'll always pass.
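As a minimal sketch of that recommendation, with a hypothetical color:

  button:focus-visible {
    /* A 2px solid outline is exactly a two pixel perimeter, so
       the area requirement is always met, provided the color also
       has 3:1 contrast against the pixels it covers when unfocused. */
    outline: 2px solid #1a4f9c;
  }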
Another example: sometimes you see inner outlines that have been created with outline-offset. Now, if you have a two pixel outline with enough offset to take it inside the border, that fails, because the perimeter it traces is smaller, so it no longer meets the minimum area. In order to make that pass, you'd have to increase the thickness of the outline, and in this example, a three pixel outline would be enough.
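Sketched as a before and after, with hypothetical values:

  /* Fails: the negative offset pulls the outline inside the
     element, so it traces a smaller perimeter and 2px no longer
     covers enough area. */
  button:focus-visible {
    outline: 2px solid #1a4f9c;
    outline-offset: -4px;
  }

  /* Passes in this example: the extra thickness compensates for
     the smaller perimeter. */
  button:focus-visible {
    outline: 3px solid #1a4f9c;
    outline-offset: -4px;
  }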
Dotted outlines are another example. What we're concerned with is the contrasting area, i.e., the area of the focus indicator that has sufficient contrast. In this case, only half the area has sufficient contrast, because half of the pixels are transparent and half of them are blue, so only the blue pixels count. As a result, to make this dotted outline pass, you'd have to double the outline width to compensate.
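For example, assuming the dots paint roughly half the pixels:

  a:focus-visible {
    /* Only the painted dots contribute contrasting area, roughly
       half the outline, so doubling the width from 2px to 4px
       restores the equivalent of a two pixel perimeter. */
    outline: 4px dotted #1a4f9c;
  }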
You can also use a box shadow to create a glow effect. This comes back to the contrasting area: some of this box shadow has sufficient contrast and some of it doesn't, so you would only consider the part of the focus indicator that does have sufficient contrast. In this case, a one pixel shadow with a two pixel blur does not create a two pixel perimeter with sufficient contrast, so that fails. But if you increased the basic shadow to two pixels with a one pixel blur, then it would, because the overall contrasting area matches. Any shadow or glow effects that extend outside that, you can just ignore completely, and the same is true for any pixels that are modified by anti-aliasing: you'd always use the author-specified colors.
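Interpreting the "basic shadow" as the spread radius, those two cases might be sketched like this, with hypothetical values:

  /* Fails: a 2px blur with only 1px of spread leaves too little
     area at full contrast. */
  button:focus-visible {
    outline: none;
    box-shadow: 0 0 2px 1px #1a4f9c;
  }

  /* Passes: a 1px blur with 2px of spread keeps a solid two pixel
     ring at full contrast; the blurred fringe is simply ignored. */
  button:focus-visible {
    outline: none;
    box-shadow: 0 0 1px 2px #1a4f9c;
  }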
A background color change passes if it has sufficient contrast. Again, the contrast here is between the unfocused and focused states. So the pale blue background is too similar to the white, and that fails, but the darker blue does pass. To my eye it doesn't look very good, that dark blue behind black text, but it is actually seven to one contrast, so I guess that just reveals some weirdness in the way color contrast is calculated. It does technically pass; I just wouldn't use that color in practice.
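A sketch of the passing case, with hypothetical colors:

  button {
    background-color: #fff;
  }

  button:focus-visible {
    /* The comparison is between the same pixels in each state:
       this dark blue against the unfocused white background is
       comfortably over 3:1. */
    background-color: #1a4f9c;
  }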
Finally, the border color. Now, this is a bit of a cheat; again, this is where the maths comes in. It says in the understanding docs, not the normative requirements, that the smallest possible two CSS pixel thick indicator that is still a perimeter is a solid line that appears inside the component, against the component's outer edge, for example by using a CSS border property. However, that's a fudge again, because the area of a border is actually slightly less than the area of a two pixel perimeter: the border is drawn inside the bounding box, but the outline is drawn outside the bounding box. So technically both of these should fail. But because it specifically says in the understanding docs that this is okay, then this is okay, and let's just say no more about that.
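That pattern is commonly implemented by reserving a transparent border so the layout doesn't shift on focus; a sketch, with hypothetical colors:

  button {
    /* Reserve the 2px border up front so nothing moves on focus. */
    border: 2px solid transparent;
  }

  button:focus-visible {
    /* Per the understanding docs, this counts as a two pixel
       perimeter even though it's drawn just inside the box. */
    border-color: #1a4f9c;
    outline: none;
  }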
As for testing barriers: this requires the ability to visually identify content and to do perimeter and area calculations. So you have to be able to see to do this, and you also have to be able to do the maths necessary to work out whether the indicators are correct.
Finally, automation: this can't be tested by automation. There's no way of reliably detecting programmatically what is a focus indicator. There are so many things that could be used to style a focus indicator that there's just no way you'd be able to test it reliably with automation.
Okay, so let me give you some takeaways. There are nine new success criteria. Six of them are A or AA; two of them are AAA enhancements of those, which are relatively easy to pass if you're just a bit stricter; and one of them is a unique AAA, which is Focus Appearance. Seven of them have some barriers to manual testing, which is unfortunate and ironic, but basically inevitable. And two of them can be partly or wholly tested by automation.
And that's it for me.
Oh, there will be a video in the next few days and I'm also going to be writing this up into an article over the next couple of weeks.
[Kari Kernen] Thank you, James. We've had a few questions come in, but since we are over on time, I will just reach out to those of you who asked questions after the webinar and make sure we get answers over to you. But thanks again, everyone, for joining us today, and thanks again, James, for your time. Again, this was recorded; the recording will be sent out later this week, and I'll make sure anyone who asked questions gets answers by the end of the week as well. Thank you.
[James] Thank you.