Amidst the various new features and tools announced at Google I/O 2019, Google placed significant emphasis on expanding and investing in accessibility across its products and services. Google unveiled exciting new features, available now or coming soon to the Android ecosystem, aimed at making technology more accessible and easier to use for everyone.
Perhaps the most impactful announcement was that Google will now support live captioning. When this feature is enabled on a device, subtitles are displayed for any video or phone call. No need for audio whatsoever! Whether a user is hearing impaired or simply in a quiet area and doesn't want to disturb others, this feature promises to be incredibly useful. Live Caption will roll out in English first, and Google hopes to add support for more languages soon. Live Relay is another feature meant to assist users who are hearing impaired. It can be turned on during phone calls, shifting the conversation from an audio experience to a chat-like, visual experience on the device. Together, Live Relay and Live Caption give users a real-time transcription of what's being said in any video or audio, in any app, across the entire operating system.
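Live Caption itself is handled by the operating system, so apps don't need to change anything to benefit from it. Apps that render their own subtitles, however, can respect the user's system captioning preferences through Android's CaptioningManager. Below is a minimal Kotlin sketch; the logCaptionPreferences helper is just an illustrative name, not a platform API.

```kotlin
import android.content.Context
import android.view.accessibility.CaptioningManager

// Hypothetical helper: read the user's system captioning preferences so an app
// that draws its own subtitles can honor them. (Live Caption itself is
// provided by the OS and requires no app-side work.)
fun logCaptionPreferences(context: Context) {
    val captioningManager =
        context.getSystemService(Context.CAPTIONING_SERVICE) as CaptioningManager

    if (captioningManager.isEnabled) {
        // Use the user's preferred locale and font scale when rendering subtitles.
        val locale = captioningManager.locale       // may be null (system default)
        val fontScale = captioningManager.fontScale // e.g. 1.0f, 1.5f
        println("Captions enabled, locale=$locale, fontScale=$fontScale")
    } else {
        println("Captions disabled; no subtitle overlay needed")
    }
}
```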
Google revealed that Google Lens will be able to read text aloud as well as translate it. The feature's codebase weighs in at around 100KB, small enough to run on inexpensive smartphones, which makes it available to users in parts of the world with low literacy rates. The feature has powerful and practical applications. Google showcased one in which a mother in India, who is unable to read, was able to go about her day without the constant help of her school-aged sons.
Not only can it read text aloud in over 100 languages, it can also overlay the translated text atop the original through the camera. With computer vision and augmented reality, the camera is becoming quite the impressive tool for understanding the world. The feature is launching first in Google Go, the lightweight app for entry-level smartphones.
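Google hasn't shared the internals of the Lens feature, but a developer can prototype a similar read-aloud flow today by pairing ML Kit's on-device text recognition with Android's TextToSpeech engine. The ReadAloudHelper class below is a hypothetical sketch under those assumptions, not how Google Go actually implements it.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.speech.tts.TextToSpeech
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Hypothetical read-aloud flow: recognize text in a camera frame with ML Kit,
// then hand the recognized string to Android's TextToSpeech engine.
class ReadAloudHelper(context: Context) {

    private val recognizer =
        TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)

    private val tts = TextToSpeech(context) { status ->
        if (status != TextToSpeech.SUCCESS) {
            // Handle initialization failure (e.g. no TTS engine installed).
        }
    }

    fun readTextInImage(bitmap: Bitmap, rotationDegrees: Int = 0) {
        val image = InputImage.fromBitmap(bitmap, rotationDegrees)
        recognizer.process(image)
            .addOnSuccessListener { visionText ->
                // visionText.text holds the full recognized string.
                if (visionText.text.isNotBlank()) {
                    tts.speak(visionText.text, TextToSpeech.QUEUE_FLUSH, null, "read-aloud")
                }
            }
            .addOnFailureListener { error ->
                // Recognition failed; surface the error to the user.
            }
    }
}
```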
Project Euphonia, an incredibly beneficial project born out of Google's AI for Social Good program, is meant to help people with the neurodegenerative disorder ALS, people who have had a stroke, and people with other speech impediments or impairments communicate more effectively. Google will use machine learning to turn hard-to-understand speech and facial expressions into text so that people can converse more easily. The Project Euphonia team is recording people with ALS in order to collect data sets large enough to train voice recognition models that understand people with speech impairments. Google is also exploring the idea of personalized communication models in hopes that voice technology can better serve more people.
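Project Euphonia's personalized models aren't exposed as a public API, but improvements like these ultimately flow into the kind of voice input apps already consume through Android's SpeechRecognizer. The sketch below shows that standard flow; the startListening helper is illustrative, and a real app would also need to request the RECORD_AUDIO permission.

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Hypothetical sketch: listen for a single utterance and log the top transcription.
fun startListening(context: Context) {
    val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle) {
            val matches = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
            println("Heard: ${matches?.firstOrNull()}")
        }
        // Remaining callbacks left empty for brevity.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onError(error: Int) {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })

    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(
            RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
        )
    }
    recognizer.startListening(intent)
}
```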
Google has put a ton of work into designing and developing products for people with disabilities. Often, this work leads to better products for all users and a more inclusive platform. Google reiterated throughout the conference that they want to “build a more helpful Google for everyone”. With these exciting new feature announcements, it is clear they are following through. We at Big Nerd Ranch are incredibly excited to work with these new platform features and tools and hope you are too!