The idea for this article came to me after noticing on some forums that young developers care about accessibility more than they used to, which I consider a good thing. However, I also saw that in many cases their good intentions to make an app more accessible ended up in changes that made the app less accessible than if nothing had been done.
In the following text I will address several accessibility topics: how my app Speech Central tries to provide a higher level of accessibility to users, which technologies and ideas the application uses (or could potentially use), and what the 2021 Inclusivity Apple Design Award winner, the Voice Dream app, did and does regarding this challenge.
VoiceOver – accessibility for blind people
I’ll focus here on how to improve the app itself, so I’ll explain what VoiceOver is in just one sentence: it is a tool that blind people use to interact with the graphical interface – it reads the elements aloud and allows interaction with them.
An important precondition for VoiceOver is that all elements are properly labeled. Sometimes the graphical interface shows only an icon to the user. In such instances the app will work well for regular visual interaction even if the button has no text assigned to it, but for blind users the difference is night and day.
SwiftUI concepts encourage developers to implement this properly. As controls can be displayed differently in various contexts, developers are encouraged to provide both the text and the icon for each control so that it works properly when reused in another context. The best practice is to use the Label element to represent the control to the user.
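For illustration, here is a minimal sketch of that practice; the control name and icon are made up rather than taken from Speech Central’s code:

```swift
import SwiftUI

// A play button described with Label: the text is always available to VoiceOver,
// even when the visual style shows only the icon.
struct PlayButton: View {
    let action: () -> Void

    var body: some View {
        Button(action: action) {
            Label("Play", systemImage: "play.fill")
        }
        // The icon-only style hides the text visually,
        // but VoiceOver still announces "Play".
        .labelStyle(.iconOnly)
    }
}
```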
Speech Central uses SwiftUI and these best practices are fully respected. I think SwiftUI has made VoiceOver accessibility better overall, as proper labeling is now encouraged by the framework design, and even developers who don’t care about accessibility will likely make their apps much more accessible.
The 2021 Apple Design Award winner Voice Dream does this in a great way, especially considering that at that point it was likely fully made in UIKit, where the effort of labeling all elements cannot be overstated. It also had some additional accessibility features that, to the best of my knowledge, no other app had implemented before (and even now nearly all apps lack them):
- It makes one adjustment when VoiceOver is on – it replaces two buttons on the player screen with other buttons more appropriate for this context.
- It implements VoiceOver gestures on four buttons – for example, you can adjust the speed by swiping up or down while the audio button is selected (a sketch of this pattern follows the list).
- It implements custom accessibility labels for textual content in at least one case. This isn’t technically necessary, as textual content would be read aloud anyway, but its original form may be optimised for visual consumption, and for a good VoiceOver experience an adjusted text should be provided.
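For the gestures, SwiftUI exposes this pattern through accessibilityAdjustableAction. The following sketch shows roughly how a speed control could respond to VoiceOver swipes; the property names are illustrative, not Voice Dream’s or Speech Central’s actual code:

```swift
import SwiftUI

// A speed indicator that VoiceOver users can adjust by swiping up or down
// while the element is focused.
struct SpeedControl: View {
    @State private var speechRate = 1.0   // illustrative state
    private let rateStep = 0.1

    var body: some View {
        Image(systemName: "speedometer")
            .accessibilityLabel("Speech rate")
            // Announce the current value, e.g. "Speech rate, 1.2 times".
            .accessibilityValue(String(format: "%.1f times", speechRate))
            // Swipe up increments, swipe down decrements.
            .accessibilityAdjustableAction { direction in
                switch direction {
                case .increment: speechRate += rateStep
                case .decrement: speechRate -= rateStep
                @unknown default: break
                }
            }
    }
}
```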
So while good VoiceOver accessibility ends with proper labelling, great VoiceOver accessibility can be much more than that. Even though there are just a few such tweaks in Voice Dream, they make a very big difference in the user experience, and the app was considered the gold standard for VoiceOver accessibility for many years.
The Speech Central app uses similar tweaks to improve the user experience, but there are many more of them than in the Voice Dream app. Here is how many adjustments Speech Central has and how it compares to Voice Dream:
Behavior/API | Speech Central | Voice Dream |
---|---|---|
Checking if the VoiceOver is on to adjust the interface/behavior | 93 | 2 |
VoiceOver gestures | 12 | 4 |
Custom Magic tap action | 6 | 0 |
Please note that the values for Speech Central are based on counting code, while the values for Voice Dream are based on counting the interface. As such, some implementations in Speech Central may refer to abstract components which are used dozens of times in the user interface even though they are counted only once. For example, Speech Central implements a custom picker control that is controllable by VoiceOver gestures, and that custom control is used in many dozens of instances. Also, while nearly all of this code affects the Speech Central iOS app, a few instances may affect the macOS app only, yet they are still counted in these statistics.
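As an example of one of the rarer items in the table, a custom Magic Tap action (the two-finger double-tap VoiceOver gesture) can be attached like this in SwiftUI; the play/pause state is a made-up placeholder, not Speech Central’s actual player code:

```swift
import SwiftUI

// The Magic Tap gesture is meant to trigger the most important action on the screen,
// here toggling playback.
struct PlayerScreen: View {
    @State private var isPlaying = false   // illustrative state

    var body: some View {
        Text(isPlaying ? "Playing" : "Paused")
            .accessibilityAction(.magicTap) {
                isPlaying.toggle()
            }
    }
}
```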
As that comparison is limited, both because I don’t have access to the Voice Dream code and because the UI frameworks are different, here is the count of all other VoiceOver-related APIs used in Speech Central:
API | Count |
---|---|
accessibilityValue | 16 |
accessibilityLabel | 34 |
accessibilityHidden | 52 |
accessibilityElement | 8 |
accessibilityAddTraits | 2 |
accessibilitySortPriority | 3 |
One thing that I am particularly happy about is that one of the behavioural changes based on the presence of VoiceOver is to make the app free when it is on. That also makes the app financially accessible, which should be an important part of accessibility. Accessibility should not be a luxury.
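In SwiftUI such a check can be as simple as reading the accessibilityVoiceOverEnabled environment value. The sketch below is a hypothetical gate; hasPurchased and the placeholder views are invented names rather than Speech Central’s implementation:

```swift
import SwiftUI

// Unlock the full app either after a purchase or whenever VoiceOver is running.
struct ContentGate: View {
    @Environment(\.accessibilityVoiceOverEnabled) private var voiceOverEnabled
    let hasPurchased: Bool   // illustrative flag

    var body: some View {
        if hasPurchased || voiceOverEnabled {
            Text("Full functionality")   // stands in for the real content
        } else {
            Text("Paywall")              // stands in for the real paywall
        }
    }
}
```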
Dynamic Type accessibility
Dynamic Type is similar to VoiceOver in that it tries to help people with visual disabilities. However, this tool addresses the population that still has some level of vision and can read text when it is presented in very large fonts.
Dynamic Type is hard to implement properly for most apps. Returning to our example, Voice Dream didn’t support Dynamic Type accessibility font sizes at all at the point it got the award. Today it supports them on 3 screens that were added later and were likely made with SwiftUI. This is the standard behavior in UIKit: apps are opted out of accessibility sizes by default.
In SwiftUI Apple enables full Dynamic Type support by default. There was certainly good intention behind this, but it seems premature in the current state of the framework: the framework doesn’t encourage good practices in this area in any way, and even many well-intentioned developers who have learned a few things about Dynamic Type and accessibility will produce apps that are significantly less accessible than if there were no Dynamic Type support at all.
To understand this, keep in mind that Dynamic Type is capable of increasing the font size dramatically. If you set the Dynamic Type font size to its maximum level, the amount of text you can place on an iPhone screen shrinks to roughly what fits on an Apple Watch screen at its regular font size!
Now imagine if Apple allowed every iPhone app to run on the Apple Watch – 99% of them would be useless. But that is almost exactly what turning Dynamic Type support on by default does to iPhone apps when the user pushes this setting to its limit. So if you decide to provide proper Dynamic Type support, you may need to design an almost entirely new interface, likely inspired by your Apple Watch app design (which can’t be applied literally either: while the amount of content that can fit is similar, the Apple Watch is much more square, which may make some of its layouts hardly feasible on the iPhone).
Unless you go to great lengths to create and test a custom design, you will almost certainly end up with some of these problems, and possibly all of them at once:
- Rows of text contain so few visible characters that they don’t provide the necessary information, which is their primary function,
- Text where almost every word is hyphenated and split across multiple lines becomes very hard to follow,
- Lists where each row spreads across multiple screens become very hard to follow.
Dynamic Type is not just about simple scaling. In most cases it is a good idea to reduce the content, especially the image content. Good design is always about setting priorities, and that goes double for Dynamic Type.
The good thing about SwiftUI is that it lets you easily rearrange the components you build into various layouts, and as such it makes Dynamic Type support somewhat easier. For example, Speech Central changes the layout of the main screen from a tab view to a split view when an accessibility Dynamic Type size is active on the iPhone.
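A minimal sketch of that idea, assuming the environment’s dynamicTypeSize is used to pick the layout (the view names are placeholders, not Speech Central’s actual screens):

```swift
import SwiftUI

// Switch the root layout when an accessibility Dynamic Type size is active.
struct RootView: View {
    @Environment(\.dynamicTypeSize) private var typeSize

    var body: some View {
        if typeSize.isAccessibilitySize {
            // Split-style navigation: one level at a time on iPhone,
            // leaving the full width for very large text.
            NavigationSplitView {
                Text("Sidebar")   // placeholder for the app's sections
            } detail: {
                Text("Detail")    // placeholder for the selected content
            }
        } else {
            // Regular tab-based navigation.
            TabView {
                Text("Library")
                    .tabItem { Label("Library", systemImage: "books.vertical") }
                Text("Settings")
                    .tabItem { Label("Settings", systemImage: "gear") }
            }
        }
    }
}
```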
Using Dynamic Type on the iPad is much less problematic. Still, since good apps should have various design adjustments for the iPad to use its capabilities in full, making just the iPhone app work well with Dynamic Type isn’t enough; testing and improving the iPad design is also necessary. However, due to the larger screen, the problems there are likely limited to sidebars and popovers.
Regarding the sheer number of adjustments used for this accessibility feature in Speech Central, here are the API statistics:
API | Count |
---|---|
isAccessibilitySize | 66 |
accessibilityShowsLargeContentViewer | 10 |
As you can see, there are 66 differences when this feature is on – from dramatic ones like the complete navigation change to minor tweaks.
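The second API in the table, accessibilityShowsLargeContentViewer, covers controls that deliberately don’t scale with Dynamic Type (bottom bars, toolbars): long-pressing them shows a large preview instead. Here is a hedged sketch of the pattern, with an invented button as the example:

```swift
import SwiftUI

// A bar button that stays small visually but offers a large preview
// when long-pressed at accessibility text sizes.
struct BottomBarPlayButton: View {
    var body: some View {
        Button {
            // placeholder for the real action
        } label: {
            Image(systemName: "play.fill")
        }
        .accessibilityShowsLargeContentViewer {
            Label("Play", systemImage: "play.fill")
        }
    }
}
```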
Other accessibility features
Apple provides several more accessibility features, along with APIs that apps can use to conform to them.
One notable feature is the Reduce Motion setting. As some people may be sensitive to motion, the app needs to adapt to this accessibility setting. System animations are adapted automatically, so for some apps this may work out of the box, but if the app has custom motion animations, it should disable them or turn them into fade animations, which don’t cause problems for those people. An important thing to note is that for some system animations you also need to check another option in the system Settings (“Prefer Cross-Fade Transitions”) to reduce their motion.
While restricting the app to system animations only does resolve reduced motion accessibility, it may not be the best route. For many people animation makes the app easier to use and may improve its accessibility in their use cases; as such, animations aren’t counter to accessibility, rather they are an important part of it.
The previous Apple Design Award winner Voice Dream had just one custom animation (scrolling of the text) at the point when it received the award, and it didn’t support reduced motion accessibility. Recently several custom animations have been added that don’t appear to respect this mode either.
To support easy and pleasant interaction, Speech Central has a range of custom animations. That also required quite a few adaptations to respect the Reduce Motion setting properly. It checks APIs like accessibilityReduceMotion 40 times to avoid showing motion animations to users who are sensitive to them.
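A minimal sketch of that kind of check, assuming a purely illustrative expanding view rather than any of Speech Central’s real animations:

```swift
import SwiftUI

// Replace a movement animation with a simple fade when Reduce Motion is on.
struct ExpandingCard: View {
    @Environment(\.accessibilityReduceMotion) private var reduceMotion
    @State private var isExpanded = false

    var body: some View {
        VStack {
            Button("Toggle details") {
                withAnimation(reduceMotion ? .easeInOut : .spring()) {
                    isExpanded.toggle()
                }
            }
            if isExpanded {
                Text("Details")
                    // Slide in normally, but only fade when Reduce Motion is on.
                    .transition(reduceMotion ? .opacity : .move(edge: .bottom))
            }
        }
    }
}
```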
Another Apple accessibility feature is the use of bold text as the standard app font. For many apps this will work out of the box without additional coding, but if the app already uses bold text for some feature, the result will likely be suboptimal, because in most such cases the app also forces the rest of the text to remain strictly regular. Sometimes that may be the only solution, for example if the text reproduces a printed page and should look like the printed page. In most cases, however, it is a good idea to make the regular text bold and to apply some other accent to the text that was meant to be bold (such as underlining it).
Looking at Voice Dream, it applies a custom font on the reader screen, and this system setting is not respected: the text remains the same regardless of the accessibility setting.
Speech Central also applies a custom font on some screens, but in this case it checks the legibilityWeight API and makes the necessary adjustments so that regular text appears bold when the user indicates such a preference in the system settings. This API is called 4 times in the code.
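A minimal sketch of such a check, with an arbitrary font chosen purely for illustration:

```swift
import SwiftUI

// Honor the system Bold Text setting even when a custom font is used.
struct ReaderText: View {
    @Environment(\.legibilityWeight) private var legibilityWeight

    var body: some View {
        Text("Chapter 1")
            .font(.custom(
                legibilityWeight == .bold ? "Georgia-Bold" : "Georgia",
                size: 18))
    }
}
```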
Going beyond accessibility APIs
While proper application of the accessibility APIs will make your app highly accessible, and certainly well above average in this regard, there is more if you are after excellence. Some of these improvements may not strictly be considered accessibility, but any design that is easier to use and understand will benefit most the people who need accessibility features to use the device.
One thing that Speech Central cares about is communicating the level of destructiveness of an action. iOS was the first operating system to distinctively mark destructive actions with the red color so that the user can easily recognize and understand them. That was a great improvement in usability, and Speech Central goes further in communicating destructiveness to users:
- Swipe to delete is treated differently in different contexts based on the level of destructiveness. When the action is highly destructive (e.g. it refers to a folder or it cannot be undone), a swipe only reveals the delete button, which then needs to be pressed (a sketch of this pattern follows the list).
- While Delete is a destructive action in every context, the level of destructiveness is not the same when items are moved to the Bin and can be recovered as when they are deleted permanently. Those two cases use completely different graphical and semantic descriptions in Speech Central.
- In the next version Speech Central will introduce a distinctive (and I think beautiful) close-sheet button that communicates whether the close action is destructive or not. In some way this is inspired by the interaction that Apple uses on macOS, though visually it uses different cues more in line with the iOS design.
- Special care is taken to make the primary action easily discoverable. For some users a hard-to-find primary action is a huge accessibility problem and a deal breaker for using the app. Speech Central features a more prominent button design by default, and when there are no items and the user can’t do anything but the primary action, the button appears slightly enlarged.
- Finally, while Speech Central doesn’t provide any voices and can’t directly influence their quality, it does so in an indirect way by making various fine-tuning optimisations both for natural reading and speed reading. The difference is dramatic; just check this benchmark.
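As a sketch of the swipe-to-delete point above (with an invented Item type and labels, not Speech Central’s actual model), the destructiveness level can decide whether a full swipe deletes immediately or only reveals the button:

```swift
import SwiftUI

// Used inside a List: recoverable items can be deleted with a full swipe,
// while permanent deletions only reveal the button and require a second tap.
struct DocumentRow: View {
    let item: Item
    let delete: (Item) -> Void

    var body: some View {
        Text(item.title)
            .swipeActions(edge: .trailing, allowsFullSwipe: item.isRecoverable) {
                Button(role: .destructive) {
                    delete(item)
                } label: {
                    Label(item.isRecoverable ? "Move to Bin" : "Delete",
                          systemImage: "trash")
                }
            }
    }
}

// Illustrative model type.
struct Item {
    let title: String
    let isRecoverable: Bool
}
```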
Hopefully this will inspire developers to make their apps more accessible to users that need it!
Words of Gratitude
This was, and still is, a journey on the path of knowledge. Understanding and adapting to other people’s needs is hard and sometimes even impossible without their feedback. As such, most of this was initiated by hundreds of messages that I have received from users with their valuable insights and suggestions. Each of them was valuable in its own way, and the real star is the whole community. By my personal impression, a few users have helped the most, and I would like to mention their names to thank them for the time they have invested in this app: Leonardo Graziano, Ricardo Abad and Arturo Fernandez Rivas.
See It In Action
Get Speech Central for iOS and Mac and see all those accessibility features in action!