
The Quest for Q: What’s new in Android

Andrew Bailey

5 min read

May 13, 2019


Later this year, Android Q will appear on some of the 2.5 billion devices running on Android.

The announcement of Android Q isn’t brand new from Google I/O – there has been a public beta since March – but Google has given us more information about what’s in store in version 10.0 of the Android operating system and the Android platform as a whole.

New Features in Q

My personal favorite feature coming in Q is live captioning.

Google announced that they’ve developed a way to perform speech-to-text processing entirely on-device, without any audio ever leaving the device.

Users with hearing loss, or those in an environment where they cannot hear their phone, will be able to enable live captions in Android Q (this feature is not yet enabled in Beta 3).

Any audio that contains speech that gets played on the device — no matter which app is playing it — will be transcribed to text and displayed on screen.

This works across the entire operating system, and app developers don’t need to do anything to support live captions in their app.

Another feature that users have long been asking for is OS-level support for dark mode.

Dark mode lets developers swap bright white backgrounds for darker ones.

This not only improves usability at night, when a bright screen might accidentally blind your users, but also helps save battery on many Android devices, because OLED displays use less power for pixels that are dark or completely black.

Many first-party apps from Google already support dark mode on existing versions of Android, but it is becoming a system-wide toggle in Q.

You can start designing for and implementing night mode right now.
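
Opting in is mostly a matter of theming: with a DayNight theme and night-qualified resources in place, an AppCompat app can follow the system setting with a single call. Here is a minimal sketch, assuming a hypothetical NewsApplication class (MODE_NIGHT_FOLLOW_SYSTEM defers to the new system-wide toggle on Q):

import android.app.Application
import androidx.appcompat.app.AppCompatDelegate

// Minimal sketch: assumes the app already uses a Theme.AppCompat.DayNight
// (or Material Components DayNight) theme and provides values-night/
// resources for its dark colors. MODE_NIGHT_FOLLOW_SYSTEM follows the
// system-wide toggle on Q; on older versions you might fall back to
// MODE_NIGHT_AUTO_BATTERY instead.
class NewsApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        AppCompatDelegate.setDefaultNightMode(
            AppCompatDelegate.MODE_NIGHT_FOLLOW_SYSTEM
        )
    }
}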

There are also many other subtle changes to things like permissions, notifications, and privacy that will most likely go unnoticed by many users, but that greatly improve the security and the overall user experience of the operating system.

For example, there are new changes to notifications that bring AI-powered suggested actions to some of them, along with changes to prioritization and to the user controls for how notifications are displayed.
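
Apps get a say in those suggestions, too. As a hedged sketch of the Q-level API, Notification.Builder gained setAllowSystemGeneratedContextualActions(), which controls whether the system may attach suggested replies and actions to a given notification (the channel ID and strings below are placeholders):

import android.app.Notification
import android.content.Context

// Sketch only: setAllowSystemGeneratedContextualActions() is new in API 29,
// and "news_channel" plus the text below are placeholder values.
fun buildStoryNotification(context: Context): Notification =
    Notification.Builder(context, "news_channel")
        .setSmallIcon(android.R.drawable.ic_dialog_info)
        .setContentTitle("New story published")
        .setContentText("Tap to read the latest from the newsroom")
        .setAllowSystemGeneratedContextualActions(true)
        .build()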

Google has a blog post that describes all of these features in more detail.

What’s New for Developers

As an Android developer, I’m even more excited by what Google has announced for Android Jetpack and their developer tools.

One exciting change this year is Google’s push towards making Android a Kotlin-first platform.

This means that if you’re using Kotlin in your app, you’ll get access to new and concise APIs that take advantage of Kotlin’s language features.

And if you’re just getting started with a new app, Google suggests that you should use Kotlin instead of Java.

This Kotlin-first paradigm manifests itself in a lot of Google’s Jetpack libraries.

For example, coroutines are now supported by Room, LiveData, and ViewModel — which were some of the first Android architecture components announced two years ago at I/O.
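
To get a feel for what that looks like, here is a minimal sketch of Room 2.1’s coroutine support paired with a ViewModel (the Story entity, StoryDao, and NewsViewModel names are hypothetical):

import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.Insert
import androidx.room.PrimaryKey
import androidx.room.Query
import kotlinx.coroutines.launch

@Entity
data class Story(@PrimaryKey val id: Long, val title: String)

@Dao
interface StoryDao {
    // Marking a DAO method as suspend tells Room to generate an
    // implementation that runs off the main thread inside a coroutine.
    @Query("SELECT * FROM Story ORDER BY id DESC")
    suspend fun latestStories(): List<Story>

    @Insert
    suspend fun insert(story: Story)
}

class NewsViewModel(private val dao: StoryDao) : ViewModel() {
    // viewModelScope (from the lifecycle KTX artifact) cancels this work
    // automatically when the ViewModel is cleared.
    fun saveStory(story: Story) {
        viewModelScope.launch { dao.insert(story) }
    }
}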

Google is also introducing new libraries that are designed for – and sometimes written in – Kotlin.

There are a number of new libraries that Google is working on, including View Bindings, CameraX, Benchmark, Security, and biggest of all, Jetpack Compose.
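
Before digging into Compose, view binding gives a taste of the direction: the build generates a type-safe binding class for each layout, so you can drop findViewById. The details weren’t final at announcement, but a sketch might look like this (ActivityMainBinding, its headline property, and the package are hypothetical, generated from an activity_main.xml layout):

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
// Hypothetical generated class for res/layout/activity_main.xml:
import com.example.newsreader.databinding.ActivityMainBinding

// Sketch only: assumes activity_main.xml contains a TextView with the id
// "headline"; the binding class and property are generated, not hand-written.
class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val binding = ActivityMainBinding.inflate(layoutInflater)
        setContentView(binding.root)
        binding.headline.text = "The Quest for Q"
    }
}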

Jetpack Compose is a completely new UI toolkit built entirely in Kotlin.

It’s entirely decoupled from the existing UI APIs, and isn’t built off of View at all.

Instead, it provides an API reminiscent of Flutter, React, and Anko.

These APIs let you create a declarative UI that reactively responds to changes to your app’s state to update what’s on screen.

Here’s an example of what this might look like — the following code is from one of the slides at I/O, and lets you implement a list in 6 lines of code instead of breaking out a RecyclerView:

@Composable
fun NewsFeed(stories: LiveData<List<StoryData>>) {
    ScrollingList(stories.observe()) { story ->
        StoryWidget(story)
    }
}

Jetpack Compose is completely interoperable with existing layouts and views, so you’ll be able to migrate to it gradually — much like how you can gradually introduce Kotlin into a Java project.
The library is still very early in development, and there is not yet an alpha version you can use.

In the meantime, you can see Jetpack Compose being developed in AOSP.

You can also read more about the new Jetpack libraries on Google’s blog.

Recently, these libraries have seen much more rapid change than the APIs built into the OS itself, and that’s an extremely good thing for developers.

Even the brand new alpha versions of Jetpack libraries that have been announced or released this week at I/O support API levels as low as 14, which represents more than 99% of active devices on the Play Store.

You don’t need to wait for users to be on Q to use these new APIs like you would have in the past.

This year’s Google I/O marks a continuation of a trend from Google.

As Android matures, each version brings more polish and fewer overhauls than the last.

These changes will ultimately benefit users, but nothing coming in the Q release of Android is as drastic as features like runtime permissions, material design, or multi-window from previous versions.

Many new features that will impact users the most — like Google Lens, off-device machine learning, and updates to Google apps like the Assistant — are being distributed in Google Play or are Pixel-exclusives instead of ending up in the open-source portion of the Android operating system.

As a platform, however, Google is paying more attention to what developers are doing, and continues to empower them with libraries based on what they’re asking for.

These libraries are decoupling the APIs that developers are using from the framework itself, which helps fix device-specific bugs, and lets developers use more effective APIs much more quickly than in the past.

It’s a welcome ideology, and I’m looking forward to seeing where it takes us in the future.

Andrew Bailey

Author, Big Nerd Ranch

Andrew Bailey is an Android engineer and instructor at Big Nerd Ranch. He graduated from the Georgia Institute of Technology with a degree in computer science. When he’s not building apps or teaching a class, he can be found baking, playing video games, or ruining his dungeon master’s game plans.
