New accessibility features are coming to Apple devices


On Global Accessibility Awareness Day, Apple is announcing new software features for cognitive, speech, vision, hearing, and mobility accessibility, coming later this year to iPhone and iPad, as well as Mac for some features.

Apple goes beyond basic assistive technologies like text-to-speech, text magnification, and adaptive keyboards. The new suite of assistive features includes Assistive Access, Live Speech, Personal Voice, and Detection Mode.

These software features join other Apple initiatives, including highlighting people with disabilities in the App Store, connecting deaf customers with sign language interpreters in four additional countries, and organizing information sessions on the accessibility features of Apple devices.


[Image: the Assistive Access interface on an iPhone, on a blue background. Credit: Apple]

Live Speech will be available on iPhone, iPad and Mac

For Apple users with cognitive impairments, Assistive Access distills an Apple device's interface down to its essential elements to reduce the risk of cognitive overload. Apps like Phone, FaceTime, Camera, Photos, and Music get custom interfaces with high-contrast buttons and large text.

Assistive Access can be customized to a person's communication needs, depending on whether they prefer visual communication like emoji or plain text. Home screens can be laid out as a grid of icons or as rows of text.
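Apple has not published an Assistive Access API, but the traits it describes, a grid of large, high-contrast buttons with oversized labels, are easy to picture in code. Here is a minimal SwiftUI sketch of that kind of simplified layout, purely as an illustration and not Apple's implementation:

```swift
import SwiftUI

// Illustrative sketch only: not Apple's Assistive Access code, just a
// toy layout with the traits the feature is described as having: a
// grid of large, high-contrast buttons with big labels.
struct SimplifiedHomeView: View {
    let apps = ["Phone", "Camera", "Photos", "Music"]
    let columns = [GridItem(.flexible()), GridItem(.flexible())]

    var body: some View {
        LazyVGrid(columns: columns, spacing: 16) {
            ForEach(apps, id: \.self) { name in
                Button(name) { /* launch placeholder */ }
                    .font(.system(size: 32, weight: .bold))
                    .frame(maxWidth: .infinity, minHeight: 140)
                    .background(.black)          // high contrast
                    .foregroundStyle(.white)
                    .clipShape(RoundedRectangle(cornerRadius: 12))
            }
        }
        .padding()
    }
}
```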

Live Speech will be available on iPhone, iPad, and Mac for typed-to-spoken conversations on FaceTime and phone calls. For those unable to speak, Live Speech lets them type a sentence that is then spoken aloud to the person on the other end of the line.
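Live Speech itself is a system feature, but the underlying mechanism, on-device text-to-speech, is available to any app through Apple's public AVSpeechSynthesizer API. A minimal sketch of that general mechanism:

```swift
import AVFoundation

// Minimal text-to-speech sketch using Apple's public AVSpeechSynthesizer.
// Live Speech is a system feature; this only shows the general mechanism
// of turning typed text into spoken audio.
let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

speak("I'll be there in five minutes.")
```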


[Image: the Live Speech interface on an iPhone, on a blue background. Credit: Apple]

A voice bank for people whose speech will be affected

A user creates a Personal Voice on iPad, iPhone, or Mac by reciting prompts aloud into the device. The device then uses machine learning to generate a synthetic voice for use in Live Speech. Once the Personal Voice is created, text typed into Live Speech is read out in a voice that sounds like the sender's own, allowing for more personal use of text-to-speech software.

Apple explains that Personal Voice is aimed at people who have received a diagnosis that will affect their ability to speak. It amounts to banking a voice for people who will gradually lose theirs.

To create a Personal Voice, users read a randomized series of text prompts aloud until they have recorded 15 minutes of audio. Those 15 minutes let the iPad, iPhone, or Mac learn the inflection and cadence of the user's voice as they speak.
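Apple also exposes Personal Voice to developers through the speech synthesis additions introduced alongside the feature: with the user's explicit permission, an app can speak in the user's banked voice. A hedged sketch, assuming the iOS 17-era AVFoundation calls:

```swift
import AVFoundation

// Sketch of using a Personal Voice from a third-party app (iOS 17 APIs).
// The user must have created a Personal Voice and must grant permission.
let synthesizer = AVSpeechSynthesizer()

AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else { return }

    // Personal voices are listed alongside system voices, flagged by a trait.
    let personalVoice = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.voiceTraits.contains(.isPersonalVoice) }

    let utterance = AVSpeechUtterance(string: "This sounds like me.")
    utterance.voice = personalVoice
    synthesizer.speak(utterance)
}
```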

Pairing hearing aids directly with a Mac

Personal Voice works by studying and reproducing a user's own voice for use during phone and FaceTime calls. People who cannot record one can still use Live Speech to talk to people in real time on phone calls and FaceTime, but with a standard computerized voice instead.

Detection Mode makes it easier for people with visual impairments to interact with physical objects. Point and Speak, built into the Magnifier app on iPhone and iPad, reads aloud the text a user points at, helping people who are blind or have low vision find their way around.

Other accessibility features coming to Apple products include pairing hearing aids directly with a Mac and tuning them to a person's hearing comfort. Improvements to Voice Control will offer suggestions for distinguishing homophones such as site, cite, and sight, based on context. Text size will be easier to adjust in Mac apps such as Finder, Messages, Mail, and Calendar, and users will be able to adjust Siri's speaking speed from 0.8x to 2x.
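Siri's speaking speed is a system setting rather than an API, but the same multiplier idea maps naturally onto the rate parameter of Apple's speech synthesis API. A rough sketch of clamping a 0.8x-2x multiplier onto AVSpeechUtterance.rate:

```swift
import AVFoundation

// Illustrative only: Siri's speaking speed is a user setting, not an
// API. This sketch maps a 0.8x-2x multiplier onto AVSpeechUtterance.rate,
// which runs from 0.0 (minimum) to 1.0 (maximum) with a default of 0.5.
func utteranceRate(multiplier: Float) -> Float {
    let scaled = AVSpeechUtteranceDefaultSpeechRate * multiplier
    return min(max(scaled, AVSpeechUtteranceMinimumSpeechRate),
               AVSpeechUtteranceMaximumSpeechRate)
}

let utterance = AVSpeechUtterance(string: "Here's your schedule for today.")
utterance.rate = utteranceRate(multiplier: 2.0)  // twice the default speed
```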


Source: ZDNet.com


