Ahead of Global Accessibility Awareness Day, Apple announced new software features for cognitive, speech, vision, hearing, and mobility accessibility, arriving later this year on iPhone and iPad, with some features also coming to Mac.
Also: The best assistive tech gadgets
As technology becomes ingrained in every aspect of our lives, it's vital that tech companies make their devices accessible to everyone. Apple is going beyond basic assistive technologies like text-to-speech, text enlargement, and adaptive keyboards. Its new lineup of assistive technology includes Assistive Access, Live Speech, and Detection Mode.
Also: How to change Alexa's voice on your phone or Echo device
Apple is spearheading many initiatives during the week of Global Accessibility Awareness Day. These initiatives include highlighting people with disabilities within the App Store, incorporating American Sign Language into Apple Fitness+ classes this week, connecting deaf customers with sign language interpreters in four more countries, and offering informative sessions covering Apple devices' accessibility features.
Assistive Access
For Apple users with cognitive disabilities, Assistive Access distills an Apple device's interface to its essential elements to reduce the likelihood of cognitive overload. Apps like Phone, FaceTime, Camera, Photos, and Music get customized interfaces with high-contrast buttons and large text labels.
Assistive Access can be tailored to an individual's communication needs, depending on whether they prefer visual communication, such as emoji, or plain text. Users' Home Screens can also be set to either a grid-based or a row-based layout.
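Apple hasn't said how Assistive Access is built, but the grid-versus-row choice is easy to picture. Here is a minimal, hypothetical SwiftUI sketch of a high-contrast, large-text launcher that supports both layouts; the app list, sizes, and colors are illustrative assumptions, not Apple's implementation:

```swift
import SwiftUI

// Hypothetical sketch of a simplified, high-contrast launcher that can
// switch between the grid and row layouts described for Assistive Access.
struct LauncherView: View {
    let apps = ["Phone", "Messages", "Camera", "Photos", "Music"]
    @State private var useGrid = true  // would be set during setup by a trusted supporter

    var body: some View {
        Group {
            if useGrid {
                LazyVGrid(columns: [GridItem(.adaptive(minimum: 140))]) {
                    ForEach(apps, id: \.self, content: appButton)
                }
            } else {
                VStack(spacing: 12) {
                    ForEach(apps, id: \.self, content: appButton)
                }
            }
        }
        .padding()
    }

    private func appButton(_ name: String) -> some View {
        Button(name) {}
            .font(.largeTitle.bold())                   // large text
            .frame(maxWidth: .infinity, minHeight: 88)  // big tap target
            .background(Color.black)                    // high contrast
            .foregroundColor(.white)
            .clipShape(RoundedRectangle(cornerRadius: 16))
    }
}
```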
Also: Want to control your electronics with your tongue? This company is making that happen
Live Speech

Live Speech will be available on iPhone, iPad, and Mac for text-to-speech conversations on phone and FaceTime calls. People who are unable to speak can type a phrase and have it spoken aloud to the person on the other end.

A user can create a Personal Voice on iPhone, iPad, or Mac by reciting prompts out loud into their device, which then uses machine learning to generate a voice for all of their Live Speech calls. Once a Personal Voice is created, text typed into Live Speech sounds like the sender's own voice to the recipient, enabling a more personal way to use text-to-speech software.
Apple says Personal Voice is designed for people with a recent diagnosis of a condition, such as ALS, that will progressively affect their ability to speak. It acts as a reservoir for your voice, preserving how you sound while you can still speak.
Also: Alexa can now place your Panera delivery for you
To create a Personal Voice, users read a randomized set of text prompts aloud until they have recorded 15 minutes of audio. That audio lets the iPhone, iPad, or Mac learn the inflection and cadence of the user's speech.
Because Personal Voice needs a voice to study and replicate, users who cannot speak can still use Live Speech on phone and FaceTime calls, but what the other person hears will likely be a standard computerized voice.
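Apple hasn't published a developer API for Live Speech, but the type-to-speak idea it describes maps naturally onto AVSpeechSynthesizer, the long-standing text-to-speech API on Apple platforms. A minimal sketch, with the assumption that a built-in system voice stands in for a Personal Voice:

```swift
import AVFoundation

// Minimal type-to-speak sketch using the public AVSpeechSynthesizer API.
// This illustrates the concept behind Live Speech; it is not Apple's
// implementation, and the Personal Voice substitution is an assumption.
let synthesizer = AVSpeechSynthesizer()

func speak(_ phrase: String) {
    let utterance = AVSpeechUtterance(string: phrase)
    // A built-in system voice; Live Speech would presumably use the
    // user's Personal Voice here once one has been created.
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

speak("I'll be there in five minutes.")
```

A real implementation would also route this audio into the ongoing call; that plumbing is Apple's, not part of the public API shown here.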
Also: I spent $130 on these reading glasses and can never go back to cheap readers
Detection Mode

Detection Mode makes interacting with physical objects easier for people with visual impairments. Point and Speak, built into Detection Mode in the Magnifier app on iPhone and iPad, reads aloud the text a user points at, helping people who are blind or have low vision navigate the world around them.
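Point and Speak isn't exposed as an API, and the shipping feature combines camera input, LiDAR, and on-device machine learning. The core read-text-aloud loop, though, can be sketched with the public Vision and AVFoundation frameworks; this simplified version omits pointing and depth detection entirely:

```swift
import CoreGraphics
import Vision
import AVFoundation

// Conceptual sketch of the read-aloud step behind Point and Speak:
// recognize text in a camera frame, then speak it. The real feature also
// detects where the user is pointing, which this sketch leaves out.
let synthesizer = AVSpeechSynthesizer()

func speakText(in frame: CGImage) {
    let request = VNRecognizeTextRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Take the top candidate string for each recognized text region.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        guard !lines.isEmpty else { return }
        synthesizer.speak(AVSpeechUtterance(string: lines.joined(separator: ". ")))
    }
    request.recognitionLevel = .accurate
    try? VNImageRequestHandler(cgImage: frame).perform([request])
}
```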
Other accessibility updates include the ability to pair Made for iPhone hearing devices directly with a Mac and adjust them for a person's hearing comfort. Voice Control will gain suggestions that distinguish between homophones like site, cite, and sight based on context. Text Size will be easier to adjust within Mac apps such as Finder, Messages, Mail, and Calendar, and users will be able to adjust the speed at which Siri speaks, from 0.8x to 2x.
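Siri's speaking-rate setting isn't something developers control, but the public speech API offers the same kind of adjustment: AVSpeechUtterance takes a normalized rate from 0.0 to 1.0, with 0.5 as the 1x default. A small sketch that maps a user-facing 0.8x to 2x multiplier onto that scale; the mapping is an assumption for illustration:

```swift
import AVFoundation

// Hypothetical mapping from a user-facing speed multiplier (0.8x to 2x,
// matching the announced Siri range) onto AVSpeechUtterance's normalized
// rate scale. Siri's internal mechanism is not public.
func utterance(for text: String, speedMultiplier: Float) -> AVSpeechUtterance {
    let utterance = AVSpeechUtterance(string: text)
    let clamped = min(max(speedMultiplier, 0.8), 2.0)
    // The default rate (0.5) represents 1x; scale it and cap at the maximum.
    utterance.rate = min(AVSpeechUtteranceDefaultSpeechRate * clamped,
                         AVSpeechUtteranceMaximumSpeechRate)
    return utterance
}

let synthesizer = AVSpeechSynthesizer()
synthesizer.speak(utterance(for: "Here's the weather for today.",
                            speedMultiplier: 1.5))
```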