Android Dev Summit 2018 Livestream | Day 1, Theater 1

Welcome to Android Dev Summit 2018.
>>Everyone, please welcome to the stage, Dave Burke. [Applause]
Hey, everyone. And welcome to the 2018 Android Developer Summit. This is an
event for developers, by developers, with tons of
in-depth content and most importantly, direct access to
the engineers. We have so many of them here this week that I’m
pretty sure Android development’s going to stop.
We have attendees from over 70 countries, both in-person and on
the livestream. Now, speaking of history, we’re about to celebrate the 10th anniversary of Android: it was about 10 years ago that customers were unboxing their G1 devices. “There’s huge potential for the G1 as developers create more applications for Google Android.”
It did okay. But it’s what came next and what you built on
Android that fundamentally changed the mobile industry.
10 years ago, the mobile landscape looked very different. Mobile platforms were not developer-friendly. Each OS required completely different and non-transferable skills, so it was nearly impossible to build a mobile app at scale. In Building 44, a small team of dedicated engineers was quietly working on a crazy project. The
idea was bold. To build a new open source operating system
that any device maker could use with a powerful SDK that put
developers first. To many at the time, this seemed like a
hare-brained idea. What did Google know about
telecommunications and how could it possibly influence this
established industry? It was an intense time for the
Android team. And to add to the drama, while getting close to
launching version 1.0, Apple announced the iPhone. We had the Sooner device and the Dream device, which included a touchscreen. We had no choice but to accelerate the schedule. We felt like we had a window to deliver on our vision of the smartphone before it launched. So we started a tradition of
putting on a huge breakfast in building 44. Bacon, eggs,
pastries, you name it. And it was super productive. No
meetings, just coding. And in parallel, release planning.
These cats do not know about Android Studio’s code
completion. You would have the previous week’s version of Android but, by the evening, notifications would appear. It was like watching the OS come alive before your eyes. One of the big challenges was the bootstrapping problem: why would a user buy a phone with no applications? So, we did two things. First, the core Android team wrote mobile app versions of Google’s desktop services, from Gmail to Maps to YouTube. We got to experience the framework as developers at the same time.
But to make the platform shine, we needed apps from across the
industry. We launched an early-look SDK and announced the
Android developer challenge, with $20 million to be awarded.
Developers responded and by April 2008, we had over 2,000
submissions. And it was amazing, given there
were no physical devices, just the emulator. The apps were
surprisingly diverse, from games to social networks and utilities. Location and GPS were the top-used features, along with camera, media and messaging. So it really showed this pent-up demand from developers to be creative on mobile and use these new capabilities. Some of the winning apps are still around today, like Life360. It paved the way for
apps and businesses. The T-Mobile G1 launched with Android Market, the predecessor to Google Play, and it had over 50 apps on day one. One week later, we opened the door for developer uploads. Advanced capabilities like in-app purchases and broader country support were yet to be built.
The decade that followed was one of rapid evolution. In the early days, we were doing two big releases a year, and our lead program manager at the time made an off-hand suggestion that we code-name them after desserts. That idea stuck, and here’s what came next. Android 1.5, Cupcake — we were eating a lot of cupcakes at the time — added virtual keyboard support so we no longer required a physical keyboard, along with copy and paste. Then came support for different screen densities and sizes, laying the groundwork for the form factors that would arrive a couple of years later. That was Diane Hackborn’s idea.
Android 2 Eclair changed driving forever with Google Maps
navigation. Froyo had voice actions, which allowed you to get directions, take notes and set alarms. That was the precursor to today’s Google Assistant. Gingerbread was the first mainstream version of Android. With Honeycomb, we added support for tablets with the Holo theme. But now we had a problem, because phones were shipping on Gingerbread and tablets were shipping on Honeycomb. So we merged both with Ice Cream Sandwich, and we introduced more intuitive navigation with the use of gestures, and that release saw the arrival of quick settings. Jelly Bean included Project Butter, and smooth animations are something that I personally obsess about. Android KitKat came with Project Svelte, letting Android run on devices with 512 megabytes of RAM, and DSP-offloaded Google hotword detection. Android 5 Lollipop followed and was the mother of all releases. It brought Material Design to Android, giving it an entirely new look and feel, and so between Project Butter and Material Design, we changed the narrative, giving Android a beautiful, refined UI. Lollipop also introduced support for new device categories:
wearables, auto and TV. There was the work profile, which
we’ve been building on. Lollipop was such an epic
release that we frankly needed to spend our energy in Android 6
Marshmallow on improving quality. We overhauled privacy with the introduction of runtime permissions. Nougat brought virtual reality support. Oreo added support for entry-level smartphones and came with a massive overhaul of the hardware abstraction layer to help speed up and reduce the cost of doing upgrades.
Finally, this year we launched Android Pie, which brings an AI-first experience. It contains tons of UI improvements and introduces the concept of digital wellbeing.
I think it’s pretty incredible to see just how far we have all
come in a decade of smartphone development. And while we’re
solving different problems today, it’s clear the principles
upon which we built Android are just as true today as they were
10 years ago: giving developers a powerful SDK, open-sourcing the code to enable device makers from entry level to high end, and an ever-improving UX. So, what do the next 10 years have in store for Android? Well, I
obviously don’t have a crystal ball, but there are three trends
that I want to call out that I think are important. One,
smartphones are getting smarter. Two, multi-screen computing is becoming pervasive. And three, our phones are going to be able to help us with safety and digital well-being.
Our smartphones are getting smarter. AI will enable your
phone to get to know you better. You can see this in Android Pie
running on Google Pixel. The screen brightness automatically learns your preferences, and the next apps you’re likely to use are predicted to save you time. And the camera is able to recognize objects in
realtime with Google Lens. For developers wanting to tap
into AI, we announced ML Kit earlier this year. Whether you’re new or experienced in machine learning, it gives you ready-made capabilities like face detection and more. It builds on the Neural Networks API, which lets accelerators such as DSPs and NPUs boost performance: MobileNet for TensorFlow runs eight times faster using NNAPI on the Qualcomm Snapdragon. You can expect NPUs to become faster in the next few years.
The second trend goes beyond phones. We’re investing heavily
in a multi-screen experience. This means a great Android
experience across TVs, wearables, cars and Chromebooks.
For example, user engagement on Android TV has grown. This year, Android Auto has seen 250% user growth and our partners launched new watches. Just when you thought you’d seen
everything in phones, we’re about to see a new idea.
Foldables. They take advantage of new flexible display technology: the screen can literally bend and fold, and you can think of the device as both a phone and a tablet. Broadly, there are two variants. When
folded, it looks like a phone so it fits in your pocket or purse
and the defining feature for this form factor is something we
call, screen continuity. You might start a video on the
folded, smaller screen, but later sit down and want a more
immersive experience. You can unfold the device to get a larger screen. As you unfold, the app transfers to the bigger screen without missing a beat. It’s an exciting concept and we expect to see foldable devices from several Android
manufacturers. We’re working with Samsung on a new device
they plan to launch early next year, which you’ll hear about
later today. For our part, we’re enhancing
Android to take advantage of this new form factor, with as
little work as possible from you. We’re adding resizable
flags so your app can respond to folding and unfolding and we
expect to see a lot of innovation over the next few
years. The third trend is safety and
well-being. Smartphones have gone from non-existent to indispensable. The very idea of leaving your home without your smartphone literally sends shivers down people’s spines. Beyond utility, we feel a responsibility to your safety and well-being. More than 80%
of emergency calls originate from mobile phones. However,
locating these phones can be challenging since technologies
fail indoors or have a radius that’s too large.
In a serious emergency, it can mean the difference between life
and death. We launched Android’s Emergency Location Services.
When you dial 911 or your country’s equivalent, your
location is accurately calculated and sent
directly to the emergency provider.
ELS is built in to 99% of Android phones, all the way back to version 4, and we’re continuing to look at new ways
to improve your safety. Now, having a smartphone with
you all day is awesome, but we also want to make sure you’re in
control of your digital well-being, and we know that 72% of people are concerned with the amount of time they spend on tech. So with Android Pie this year,
we introduced new tools to let you control usage, with app limits, grayscale Wind Down and Do Not Disturb. Like most features, we’ve added developer hooks, so you can tell if Do Not Disturb is enabled and you can listen for changes with an intent filter.
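To make those hooks concrete, here is a rough sketch (not the code shown on stage) of checking the current interruption filter and reacting to changes, using NotificationManager on API 23+:

```kotlin
// A minimal sketch of the Do Not Disturb developer hooks mentioned above:
// query whether DND is active and listen for changes via an intent filter.
import android.app.NotificationManager
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.content.IntentFilter

fun isDoNotDisturbEnabled(context: Context): Boolean {
    val nm = context.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
    // Anything other than "all interruptions allowed" means some form of DND is on.
    return nm.currentInterruptionFilter != NotificationManager.INTERRUPTION_FILTER_ALL
}

fun listenForDndChanges(context: Context, onChanged: (Boolean) -> Unit) {
    val receiver = object : BroadcastReceiver() {
        override fun onReceive(c: Context, intent: Intent) {
            onChanged(isDoNotDisturbEnabled(c))
        }
    }
    context.registerReceiver(
        receiver,
        IntentFilter(NotificationManager.ACTION_INTERRUPTION_FILTER_CHANGED)
    )
}
```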
We’re continuing to invest in the space with lots of
enhancements planned. Okay. So, let’s wrap-up. Android,
from the beginning, was conceived as a platform built
around developers. We poured a ton of energy into growing this ecosystem from the ground up.
And in return, you’ve been an amazing community building
incredible apps and services that enable and delight users
the world over. We simply could not do this without you. So,
thank you. So with that, let’s get down to
business. I’m excited to hand it over to Steph and team to
talk about the recent work we’ve been doing. Thank you. [Applause]
>>Hey, everyone. I’m Steph. I’m on the Android team and Dave
is right. You don’t just build on top of Android, you’re a part of what we do.
Kotlin’s a great example. It’s not a Google-designed language.
It was not maybe the obvious choice, but it was the best
choice, as you made clear. We could see developers voting with their feet in the adoption numbers in the months before we announced support. Like Kotlin, our developer investments come down
to two things at heart. Number one, your feedback. And
number two, Google engineers using Android and thinking, how
do I make something people will love? So, the past several
years, we’ve been investing deeply in Android’s developer
experience. It’s been guided by your feedback. We’re going to
talk about some new things that we have to share.
So each year, we’ve been investing. Let’s start with IDEs. We demoed Android Studio at I/O: profilers, new layout tools, better C++ support. We also wanted to add the little things, whether that’s Maven integration or Lint checks. Second, APIs. In 2016, Diane
Hackborn wrote a famous post on app architecture saying, we’re
not opinionated. To which you replied, please be opinionated. [Laughter]
So we created architecture components and refined them over
many EAPs and expanded them into Android Jetpack. We see Jetpack
as the future of our mobile APIs. They are opinionated and
easy to use, intuitive APIs that work on 95% of devices. We want
them to integrate seamlessly with Android’s core primitives
so you get the best possible combination of efficiency and
deep control. Expect to see us continue
expanding Jetpack every year. Third was languages. In 2017,
we announced support for Kotlin. Since then, we’ve added IDE support and Kotlin-friendly APIs, and moved Kotlin into the Kotlin Foundation.
Fourth, app delivery. So, developers have always loved the
Play store; that’s great when you want to launch fast. But you told us app size is way too big. So we announced the App Bundle and Dynamic Delivery. They are slimming down apps worldwide, with apps saving up to and over 30%.
Finally, security. Android was built with security in mind from
day one, with application sandboxing. We’ve expanded our
mobile security services. Today, 99% of abusive apps are
taken down before anyone can install them, and after you install, we use Google Play Protect to scan over 50 billion apps every day: every app on every connected device. And if we find a potentially harmful app, we disable it or we remove it.
Let’s say you’re doing everything right and you
accidentally get caught in this net without someone to talk to.
This is a place I think we need to do better. We need to make
it much easier for you to reach us in these cases. So, our
engineers will be here, tomorrow, at the fireside chat,
to talk with you about it and get your feedback.
Now, another way we protect the ecosystem is requiring apps to target current API levels. You told us, okay, that makes sense, but please give us a long notice period, so that’s why we gave almost a year’s notice. We think of you as a part of how we
work, whether it’s early ideas, beta and iterating after launch.
We really want to be trustworthy and we’ve heard about things you
love, like Architecture Components and Kotlin. Sometimes we have underestimated the time it takes to get things right, like Instant Run. What we’ve heard is that you want open sharing from us, so you can see things that are early, as long as we’re clear about that, as well as things that are ready for production.
Today, I’m going to share a range of early ideas. I want to
walk you through two big themes. First, foundations, using
languages and libraries to work smarter. Second, productivity: using the IDE, the console and distribution to develop more easily, ship higher-quality apps and grow adoption.
We’re going to start with foundations and Kotlin.
Throughout, I wanted you to hear from some of the people who have
been instrumental in these projects. So, we’re going to
start with someone who was key in the Kotlin decision, he’s a
huge contributor to Android, both while he was in the
community, now in the Google team. It’s a privilege to turn
things over to Jake Wharton. [Applause]
Hey, everyone. So, I’m Jake, I’m part of the team working on
Kotlin for Android and it’s been 18 months since Steph was on
stage at Google I/O announcing Kotlin would be supported as a
new first-class language. Something that had never been
done in the history of Android. Based on positive feedback from
developers, it’s clear that this was the right choice. According
to GitHub, Kotlin is the number one fastest-growing language in terms of contributors. Stack Overflow places it as the number two most-loved language. In Android, 46% of professional Android developers are using Kotlin to build their apps, according to our developer survey.
In October, we had 118,000 seven-day active projects using Kotlin in Android Studio (based on those who opt in to sharing metrics), a 10x increase from the numbers last year.
Now, when Kotlin support was announced, there were already a bunch of apps using it in the Play store, including WeChat, Amazon Kindle and Twitter. Kotlin has since moved into the Kotlin Foundation, and we’re fortunate to partner with JetBrains. Just last week, they released
the newest version of Kotlin, 1.3, with new language features, APIs, bug fixes and performance improvements. For example, inline classes, which in most cases don’t actually allocate like a normal class would unless they’re boxed. For the constrained devices that we target, avoiding allocation while still retaining type safety is a big win. The Kotlin standard library now includes a set of unsigned number types, such as UByte and ULong. And in addition to Kotlin code targeting Android or the JVM, you can target JavaScript or native code, which unlocks the possibility of sharing more Kotlin across platforms. The long-awaited coroutines support is also now stable: coroutines are a much lighter-weight alternative to threads and simplify how you write asynchronous operations, things that are essential to every Android app.
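As a small illustrative sketch of those language features (names here are hypothetical, not from the talk):

```kotlin
// Kotlin 1.3 inline class: wraps a single value but, in most cases, is not
// allocated as a separate object at runtime, so type safety comes for free.
inline class UserId(val raw: Long)

fun loadUser(id: UserId) {
    // ... look the user up by id.raw ...
}

fun demo() {
    loadUser(UserId(42L))            // no wrapper object on the common path
    val max: UInt = 4_294_967_295u   // unsigned types: UByte, UShort, UInt, ULong
    println(max)
}
```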
And, as I’m sure you are, we’re looking forward to using these new Kotlin features in Kotlin-specific APIs. The majority of that has come through the Kotlin extensions in Jetpack: at I/O we announced that they were expanding into Android KTX, and all of those are now available as stable releases.
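For a flavor of what these extensions look like, here is a small sketch using a few androidx.core KTX helpers (illustrative, not exhaustive):

```kotlin
// A few androidx.core KTX extensions: concise Kotlin wrappers over existing
// framework APIs rather than new functionality.
import android.content.SharedPreferences
import android.view.View
import androidx.core.content.edit
import androidx.core.net.toUri
import androidx.core.view.isVisible

fun example(prefs: SharedPreferences, banner: View) {
    prefs.edit {                       // applies the edit for you
        putString("last_page", "home")
    }
    banner.isVisible = false           // instead of setVisibility(View.GONE)
    val link = "https://developer.android.com".toUri()
    println(link)
}
```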
Since then, as new APIs are added to existing libraries, or as new libraries ship, the KTX extensions are being built alongside them: Navigation, Paging and Slices all have extensions built with them. We’re also starting to go beyond simple extensions: Lifecycle is getting a coroutine scope that lets you launch coroutines and have them cancelled automatically, and WorkManager is getting a worker based on coroutines. They provide much closer interoperability.
If you want to get started with, say, coroutines on Android, there’s a new codelab that covers how they work and how to test them.
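As a rough sketch of the coroutine style this enables (assuming kotlinx.coroutines on the classpath; the data-loading call is hypothetical):

```kotlin
// Loading data off the main thread with coroutines: the suspend function runs
// its blocking work on Dispatchers.IO, and the caller resumes on Main.
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

suspend fun loadUserName(userId: String): String = withContext(Dispatchers.IO) {
    // pretend this is a database or network call
    "user-$userId"
}

fun show(scope: CoroutineScope, userId: String) {
    scope.launch(Dispatchers.Main) {
        val name = loadUserName(userId)   // suspends without blocking the UI
        println("Hello, $name")
    }
}
```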
Since Kotlin isn’t just a language for building Android apps, Google Cloud Platform supports it as well. And finally, new Udacity courses are available today; they use both Jetpack and popular third-party libraries. To speak more about Jetpack as a whole,
I’d like to turn it over to Romain Guy.
[Applause]>>Hi, I work on the Android
framework team. A few months ago, we announced Jetpack. It
builds on the foundation that we laid out with support library.
We also added new tools and libraries to the mix. It is
about less code and more devices. All
Jetpack libraries are backwards compatible. We first started
running early access programs about two years ago and then our
first one was 18 months ago at Google I/O 2017. Out of the top 1,000 applications, more and more are using it. Jetpack is used by the New York Times, Evernote, Pandora, Twitter and many more; apps built all over
the world in India, Germany, France, Korea, Israel, China,
U.S. And more. Also, at I/O, we have new
libraries for paging, navigation, WorkManager and
Slices. Even though these are in early phases of development, they’re already being used in 38,000 applications worldwide.
We know that many of you have expressed the desire to do more than simply give feedback. We’ve moved development out into the public: you can see bug fixes and features land in realtime, and all you need is Android Studio and a public SDK. We also welcome contributions through AOSP. Our hope is that earlier access will help us refine and ship even better libraries. So, please join us. With Jetpack, we introduced
the Architecture Component libraries, Navigation and WorkManager. Navigation gives you a simplified way to apply the navigation principles in your application using a single activity, with consistent animations and easier animated transitions. WorkManager makes it easy to perform deferrable background tasks; you no longer need to figure out whether you should be using JobScheduler, Firebase JobDispatcher or AlarmManager. If you have any feedback about those APIs, the Android teams are here today and tomorrow, so now is the time to give us that feedback.
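For illustration, a minimal WorkManager sketch assuming the 1.0-style API and the KTX artifact; UploadWorker and its work are hypothetical:

```kotlin
// Enqueue a one-off background task; WorkManager picks JobScheduler,
// Firebase JobDispatcher or AlarmManager under the hood depending on the device.
import android.content.Context
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

class UploadWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        // upload logs, sync data, etc.
        return Result.success()
    }
}

fun scheduleUpload() {
    WorkManager.getInstance().enqueue(OneTimeWorkRequestBuilder<UploadWorker>().build())
}
```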
We introduced Android Slices, a new way to bring users to your
applications. It is a mini snippet. It can help users book
a flight, call a ride and so on. We want to take the time to get
things right. After working with several of you on the APIs,
we’re moving into a public EAP at the end of the month. We will run experiments surfacing Slices in Google Search results.
Our team has been hard at work bringing numerous improvements
to libraries. In 2.1, you have more control over usage; 2.1 is our biggest launch since 1.0, with search tables and integration, and 3.2 brings faster multi-module builds.
One thing you told us worked pretty well was
deeply-integrated tools and libraries. A great new example is navigation: you can easily understand and build the navigation graph in your application. Let’s go straight to a demo.
Here, it was already partially done. You can see the flow of
navigation. If I run the demo in the emulator, and we wait for
Gradle to do its job, I can click on leaderboards and see
different profiles. If I click on the profile, nothing happens.
I can go back to the editor, add a new screen, select the fragment for the user profile, and then link the leaderboard screen to the user profile. I also need to add an argument for the selected user, and it’s a string. Then I just rerun the app. Wait for Gradle. Want to chat with me after this? [Laughter] If I click on the profile, now I can see the profile. You can see there are no animations, no transitions. So if I go back and select this navigation flow, I can choose which animations I want. I’m going to choose the enter animations and the exit animations. And now,
if I rerun the app, one last time, go back to leaderboards
and now you can see the transition.
[Applause] So, if you want to play with navigation, all you have to do is download the Android Studio beta today.
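In code, the click handler behind that demo boils down to something like the following sketch (the action ID and argument name are hypothetical, mirroring the demo):

```kotlin
// Navigate from the leaderboard to a user profile, passing the selected user
// as a string argument, using the Navigation component's NavController.
import androidx.core.os.bundleOf
import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController

class LeaderboardFragment : Fragment() {
    fun openProfile(userName: String) {
        findNavController().navigate(
            R.id.action_leaderboard_to_userProfile,   // action defined in the nav graph
            bundleOf("userName" to userName)
        )
    }
}
```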
As we expand the Jetpack libraries, we’re focused on your
feedback. So please let us know. We want to know about
animations, UI, we are here today. We are here tomorrow.
With that, I would like to turn it over to Karen, who will let
you know about our plans. Thanks.
[Applause]>>Hi. I’m Karen. I’m on the
team that builds Android Studio and Android Jetpack. To build
on top of that, I’m going to talk about productivity. For
3.2, we asked ourselves, what can we do to have a meaningful
impact on productivity and where do you spend the most of your
time. We heard it’s build speed: something you do every day, multiple times a day, and every time you’re waiting for that build to finish, we know it’s a minute wasted. We found two things to be true. The first
thing we found is that build speeds are actually getting slower over time. The second thing we found is that new releases of Studio are actually improving build times; we saw builds get faster by 42%. So something’s going on, and we had to take a deeper look. Codebases are getting larger, custom build plugins are adding to build times and negating the benefit of incremental builds. If you have many modules, resource management can add time as well. All of this is outgrowing our build improvements. We are committed to making builds faster. A large part of the team that works on build is here this week, and you can listen and learn more about what we’re doing. We want
to get this right and we need your help to do it. We’re
giving ourselves goals and working on attribution tools to
better-understand what’s affecting your build in your
projects and we’re making Gradle faster.
We know that iteration speed matters. Development is about trying things out, iterating, failing fast and doing it again. With Instant Run, we want to quickly apply changes. Part of that is around deployment times; we know they play a huge part, and we’ve shipped an update in Android Pie where we’re seeing a big difference in real-world and sample projects between Pie and Oreo. If you’re deploying over USB, we have seen speeds close to the emulator. Please let us know if you’re interested in giving
early feedback. That takes us to emulators.
Because we want to make iteration speed faster, we’re
investing in the emulator for every OS. With snapshots, you can boot up and switch in under two seconds.
Productivity is also about making the hard problems easier.
We heard it’s hard to know how your app is affecting battery life. Now you can visualize battery usage and inspect
background events. The new beta for Android Studio
3.3 is available today and was just released moments ago.
We know that in order for it to be delightful, it has to be not
just stable, but it has to be rock-solid stable because of the
number of hours that you spend there. The main focus for our
next few releases will be quality, which we’re calling
Project Marble: fixing user-impacting bugs and investing in our infrastructure and tools. We know that sometimes we’ve missed memory leaks before we’ve shipped, so we’re building tools to help catch those leaks before they even happen.
Dave mentioned how millions of Android apps run on Chromebooks; we’re bringing Android Studio itself to Chrome OS later next year.
Now, I’d like to invite Matt Henderson to share more about
app size and what we’re doing with the Android App Bundle.
[Applause] So, I work on developer tools,
like the Play console. Apps have grown dramatically in size.
The average is up five times since 2012. But larger size
carries a cost. It reduces the install conversion rate and it
increases uninstall rate. You told us that using multi-APK was
a painful way to reduce app size. So the Android App Bundle
makes it much simpler. Using the App Bundle, we reduce size
by generating an APK for the languages, the screen density,
the CPU architecture that each user needs. And it’s working.
While size reductions vary, on average, apps are seeing a 35%
reduction in size compared to a universal APK.
Now, with the recent stable release of Android Studio 3.2,
App Bundles and production have taken off. They’re up ten
times. Thousands of developers have embraced App Bundle and the
number in production total billions of installs. And
Google’s apps, they’re switching, too. YouTube, Google
Maps, photos, Google News are all in production. Photos, for
example, is now 40% smaller. So, we’re really excited about
the App Bundle’s potential. With app signing by Google Play, we sign the APKs for delivery to the end user. This is highly secure: we protect your key in the same storage where we protect Google’s own keys. It allows us to process the App Bundle to generate the
optimized APKs and this allows you to benefit from additional
optimizations in the future. Starting right now, I’m happy to announce that the App Bundle supports uncompressed native libraries. This utilizes an existing Android platform feature that was little used because, in the past, you would have had to handle older devices separately. With no additional developer work needed, the App Bundle now makes apps using native libraries 8%
smaller to download. By adding dynamic features, you
can add any app functionality on-demand. For example, you
don’t need to send that same big feature to 100% of your users if
it’s only going to be used by 10% of them. And you don’t need
to keep big features installed that are only used once. Dynamic features can be installed and uninstalled dynamically when your app requests it. Or you can choose to defer installing to a later time, when the app goes to the background.
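A sketch of what requesting a module on demand looks like with the Play Core library ("scanner" is a hypothetical module name):

```kotlin
// Install a dynamic feature module when the app actually needs it, or defer
// the download to when the app is in the background.
import android.content.Context
import com.google.android.play.core.splitinstall.SplitInstallManagerFactory
import com.google.android.play.core.splitinstall.SplitInstallRequest

fun installScannerFeature(context: Context) {
    val manager = SplitInstallManagerFactory.create(context)
    val request = SplitInstallRequest.newBuilder()
        .addModule("scanner")
        .build()

    manager.startInstall(request)
        .addOnSuccessListener { /* download started; use the module once installed */ }
        .addOnFailureListener { /* fall back to the built-in path or retry */ }

    // Or: manager.deferredInstall(listOf("scanner")) to install later, in the background.
}
```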
Facebook was one of our launch partners. And they are using
dynamic features in production in the main Facebook app and in
Facebook Lite. For example, card scanning is a feature that
only a small percentage of Facebook’s user base is using so
moving it to a dynamic feature avoids it taking up almost two megabytes on each user’s device. Within an App Bundle, installed
and instant apps can have the same base module and the same
dynamic feature modules. Separating it out is a great way
to get your base small enough to offer an instant app experience.
Now, you can start building and testing dynamic features using
Android Studio 3.2 today and join our beta. Now, I’d like to invite you up to talk more.
>>Thanks, Matt. We’ve heard your feedback that you’d like more control to ensure users are running the latest and greatest version of your app. So there’s a new Google Play in-app updates API. There’s an immediate update flow where users get a
full-screen experience, they can accept the update and have it
installed immediately. Many of you have implemented similar flows in your apps; this new implementation is less error-prone and super easy to integrate.
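For illustration, a sketch of the immediate flow using the Play Core in-app updates API as it later shipped (it was in early access at the time of this talk); UPDATE_REQUEST_CODE is an arbitrary value you define:

```kotlin
// Check for an available update and, if allowed, launch the full-screen
// immediate update flow.
import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

const val UPDATE_REQUEST_CODE = 1001

fun checkForImmediateUpdate(activity: Activity) {
    val manager = AppUpdateManagerFactory.create(activity)
    manager.appUpdateInfo.addOnSuccessListener { info ->
        if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
            info.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)
        ) {
            // AppUpdateType.FLEXIBLE would instead download in the background.
            manager.startUpdateFlowForResult(info, AppUpdateType.IMMEDIATE, activity, UPDATE_REQUEST_CODE)
        }
    }
}
```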
Next is flexible updates. Flexible updates are really cool
because they actually allow the update experience to be
integrated into your app. As an example, Google Chrome nudges
users. If the user accepts that update, the download happens in
the background and the user can continue to use your app. When
the download is complete, it’s up to you, as the developer, to
decide if you’d like the update to be applied immediately or if
you’d like it applied the next time the app is in the
background. Google Chrome is testing this now, and we’re excited to open up early testing access to more of you soon.
Next, instant apps. Instant apps are available on 1.3 billion devices. We’ve been hard at work on simplifying the
development experience for instant apps. Earlier this
year, we increased the size limit for instant apps from 4
megabytes to 10 megabytes. Many developers are already able to
get under that size limit without additional work.
Additionally, the dynamic features are instant compatible.
We’ve also made it possible to upload and test instant apps of any size, so you can iterate on the user experience at the same time you’re optimizing for size. And we’ve also made web URLs optional: you can route your users into your instant experience automatically. Lastly, I’m excited to announce
that today, in the Android Studio 3.3 beta, you can now have a single Android Studio project that houses both your instant and installed apps. This dramatically simplifies development.
Additionally, the App Bundle can be uploaded once to the Play
developer console. We’re super excited about that.
And, with that, back to Steph.
[Applause]>>Android’s open source means
it’s incredible to watch what you’re building on top of the platform. With over 2 billion devices, three-quarters of [no audio]. So, that’s it for the keynote.
Thank you. I hope you enjoy Android Dev Summit.
[Applause]>>Now, everyone, please give a
warm welcome to Dan Galpin. [Applause]
[Loss of audio] I first wanted to give a
shout-out to everyone on the livestream, I hope you could all
be here in person. You can also follow the online action on
Twitter. And now, I need my next slide.
[Laughter] Otherwise, I’ll have to
improvise. I can do this. So, there’s a lot of stuff going on
here, today. One of the things I want all of you, onsite, to be
able to take advantage of is the fact that we have a tremendous
amount of Android experts available here in office hours.
It will be out in the lobbies, next to these rooms. Whether
you’re here or tuning in remotely, check out the Android
Dev Summit app so you can look at all the events and build your
own schedule. Now you’ll notice we have
different sections throughout the day. This is a bit of an
experiment and the 20-minute sessions are going to run
back-to-back with the intention that you watch both sessions
because there’s no time to leave the room. Also, we have
lightning talks that are going to be a speed round where we
move as quickly as possible to smash as much content as we can into 40 minutes. We’re going to have Q&A with the
presenters, outside of these. Make sure to check out the
schedule posted on the wall. We have so many people who wanted to talk to you that we actually can’t fit them all here at once. You want to make sure that you’re here, so that the engineers who worked on bugs you might be interested in are actually there to defend themselves. [Laughter]
Office hours will be running all day, so you’re allowed to skip class if you want to go to office hours. And besides that, we’re going to be running all of these throughout the day.
We’re having a party later on. Get ready for an epic night of
music and standing around and talking to people.
[Laughter] And that’s it. I really
appreciate everything. We have a little break now and so enjoy
the rest of Android Dev Summit. [Applause]
Everyone, our next session will begin in 10 minutes. Welcome, everyone. Welcome
back. Our program is going to be underway in three minutes.
We ask that you mute your mobile devices and thank you for taking
your seats. Our program will begin in three
minutes.
I haven’t done anything yet. Thanks for coming. My name is
Jose Alcérreca. My name is Yigit Boyar. I’m an
engineer. Today, we’re going to talk about
LiveData. LiveData is one of the first Architecture Components that we released last year, and in this talk we’re going to explain what it is. We’re going to talk about some of the transformations you can do, how to combine LiveDatas, and talk about patterns and anti-patterns you might want to avoid.
We’re going to explain all these characteristics, but first, we’re going to start with observables. What’s an observable? So, in our object-oriented world, probably the easiest way of communicating between one component and another is having a reference from one object to another and calling it directly. However, in Android, this can cause problems: components have different lifecycles and different lifespans. This is the ViewModel scope diagram. A simple thing like device rotation can re-create the activity. So, you probably know that keeping a reference to the activity in this ViewModel would be a bad idea, because it leads to memory leaks and even crashes.
So, instead of the ViewModel having a reference to the activity, we’re going to have the reference in the activity. Then how do we send data? We’re going to let the activity observe the ViewModel, and for that, we’re going to use an observable: LiveData. Let’s see how this looks with a little bit of code. In the ViewModel, we expose our LiveData; you’re going to see a few examples of how to expose LiveData from a ViewModel. In our activity, we make the actual subscription, and we do that by calling the observe method. The first parameter is something called a LifecycleOwner; Yigit is going to talk about this. The second parameter is an Observer, which is called whenever the LiveData’s value changes.
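A minimal sketch of that wiring, assuming the 2018-era androidx.lifecycle APIs (names are hypothetical, not the slide code):

```kotlin
// ViewModel exposes a LiveData; the activity observes it, passing itself as
// the LifecycleOwner so the subscription is managed automatically.
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.Observer
import androidx.lifecycle.ViewModel
import androidx.lifecycle.ViewModelProviders

class UserViewModel : ViewModel() {
    private val _userName = MutableLiveData<String>()
    val userName: LiveData<String> = _userName   // expose the immutable type

    fun setName(name: String) { _userName.value = name }
}

class UserActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val viewModel = ViewModelProviders.of(this).get(UserViewModel::class.java)
        viewModel.userName.observe(this, Observer { name ->
            title = name   // update the UI with the latest value
        })
    }
}
```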
So, Jose mentioned you want to reference an object in the larger scope, like a ViewModel, from an object in a smaller scope, like an activity. But of course, when you observe something, it has to keep a reference back to you to be able to call you, so there is a reference. Why is this not a problem with LiveData? Because it is a lifecycle-aware component. To be able to observe LiveData, you have to provide the lifecycle, and it maintains your subscription for free. So if your observer’s lifecycle is not in a good state, it’s not going to call you back, and when your activity or fragment is destroyed, it removes the subscription for you. If you go back to the previous graph, your LiveData observer will only be called while the lifecycle is at least started, and you don’t need to care about unsubscribing yourself.
Probably the most distinctive property of LiveData is that it’s a data holder. We keep saying this: it’s not a stream, it’s a value holder. If you go back to our previous graphs, on the right we have LiveData in the ViewModel, and on the left the activity or fragment observing it. Once you set a value on the LiveData, it is passed to the activity. When it changes, the activity receives the new, updated value. The difference happens when you change the value while the observer is not in an active state: the LiveData doesn’t notify the activity. While your activity is still in the background, you can send a new value and your activity still doesn’t see it. The data holder property comes in when your activity comes back to the foreground, where the user is seeing it: it receives the latest value from the ViewModel. LiveData only cares about holding a single value, the latest one. This works perfectly for UI, because you only want to show what the state is right now. But if you’re trying to process every event in a stream, this is not what you’re looking for. Similarly, if you change the value after the activity is
destroyed, nothing happens. Okay. Let’s talk about LiveData transformations. The library provides map and switchMap. We already said that LiveData is great to communicate between the view and a ViewModel. But what if we have a third component, maybe a repository? How do we connect it from the ViewModel? We don’t have a lifecycle there. What if the ViewModel is observing data sources, in this case? Well, Yigit said to me, if you need a lifecycle in your ViewModel, you probably need a transformation, but that’s actually wrong. [Laughter] Sorry, Yigit. What I say is that you definitely need a transformation; don’t ever use a lifecycle there.
So, how do we make a bridge between the ViewModel and this repository? We use a map: a one-to-one static transformation. In the ViewModel, we expose a result LiveData which is the output of Transformations.map. The first parameter is the source LiveData and the second parameter is the transformation function, which converts from the data-layer model to the UI model. This is how the signature looks in Kotlin: it takes a source, which is a LiveData of X, and it returns a LiveData of Y. It’s a bridge between LiveDatas, and in the middle we have a transformation function from X to Y that doesn’t know anything about LiveData. When you establish the transformation, the key here is that the lifecycle is carried over for you. Say you map one LiveData and hold on to the result: when someone observes it, that lifecycle is propagated to the source LiveData without you doing anything, and it’s completely managed by us, so it’s completely safe.
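A minimal sketch of that map transformation (the User type and repository are hypothetical):

```kotlin
// Transformations.map: a one-to-one, lifecycle-safe mapping from a data-layer
// model to a UI model.
import androidx.lifecycle.LiveData
import androidx.lifecycle.Transformations
import androidx.lifecycle.ViewModel

data class User(val firstName: String, val lastName: String)

interface UserRepository {
    fun loadUser(): LiveData<User>
}

class ProfileViewModel(userRepository: UserRepository) : ViewModel() {
    private val user: LiveData<User> = userRepository.loadUser()

    // The lambda knows nothing about LiveData; it just converts User -> String.
    val displayName: LiveData<String> = Transformations.map(user) { u ->
        "${u.lastName}, ${u.firstName}"
    }
}
```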
We also have switchMap. Say you have an application with a user manager that keeps the logged-in user ID. When you grab that ID, you need to talk to your user repository to get the user object, and that probably goes to the database and to the server, so it returns a LiveData as well, because the user object might change, right? It might return you the cached value first. So you’re in this situation where you have a LiveData for the ID and a LiveData for the user, and you need to chain these things: going from an ID to a user. That’s switchMap.
We provide it the source LiveData, and our function, this time, returns a LiveData. So the signature looks like this: you have a source, at the end you get a LiveData, and you provide a function that converts an X into a LiveData of Y. What this does is, every time the user ID changes, it calls your function; you give it a new LiveData, it stops listening to the previous one and starts listening to the new one, like switching tracks or a switchboard. It’s completely managed for you, and you get all the benefits of using LiveData.
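And a matching switchMap sketch (again with hypothetical types):

```kotlin
// Transformations.switchMap: every time the user ID changes, we ask the
// repository for a new LiveData<User> and switch observers over to it.
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.Transformations
import androidx.lifecycle.ViewModel

data class User(val id: String, val name: String)

interface UserRepository {
    fun loadUserById(id: String): LiveData<User>
}

class UserViewModel(private val repository: UserRepository) : ViewModel() {
    private val userId = MutableLiveData<String>()

    val user: LiveData<User> = Transformations.switchMap(userId) { id ->
        repository.loadUserById(id)   // returns LiveData<User>, maybe cached
    }

    fun setUserId(id: String) { userId.value = id }
}
```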
Now, we only provide map and switchMap out of the box; you don’t have a million transformations. This can feel limiting, and sometimes you may want to write your own, and we don’t want to provide too many. If you want to write your own, it’s very easy. If I show you the little code we have for the map implementation: it returns a LiveData and you give it a function. All it does is create a MediatorLiveData and add the source LiveData as a source for that MediatorLiveData. That kind of tells us that the value of this MediatorLiveData is derived from this other LiveData. So whenever that other LiveData changes, it calls our callback, and in the callback we apply the function to the value and set the result on the MediatorLiveData. This is super simple to write, and there is no lifecycle here, but all of this code is still lifecycle-aware. So, if it’s so easy, let’s create a new one. Let’s say we have a bunch of LiveDatas of strings and we want a LiveData that holds the total number of characters across all of them, updating whenever any of those values update. So we write a function, call it totalLength, that receives a list of LiveDatas and returns a LiveData. Inside, we have a simple function that goes through all of the LiveDatas and sums the lengths of their values; we need to account for nulls here because LiveData values can be null. Once we have that, we add each given LiveData as a source to our MediatorLiveData, which says the value of this mediator depends on this other LiveData. Any time one of them changes, it calls back our function, which recalculates the value for the MediatorLiveData. And that’s just a few lines of code, and you have a custom transformation of your LiveDatas.
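A sketch of what that custom transformation can look like (an approximation of the example described, not the slide code):

```kotlin
// A custom transformation built on MediatorLiveData: the total number of
// characters across several LiveData<String> sources, recomputed whenever any
// of them changes. Null values count as length 0.
import androidx.lifecycle.LiveData
import androidx.lifecycle.MediatorLiveData

fun totalLength(sources: List<LiveData<String>>): LiveData<Int> {
    val mediator = MediatorLiveData<Int>()

    fun recompute() {
        mediator.value = sources.sumBy { it.value?.length ?: 0 }
    }

    // Each source becomes an input of the mediator; any change triggers recompute().
    sources.forEach { source ->
        mediator.addSource(source) { recompute() }
    }
    return mediator
}
```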
Now, there are some common mistakes you can make while using LiveData, and we want to touch on them. One thing we see a lot: let’s say you fetch a big JSON string from your server and pass it through a transformation. That’s not a good idea, because LiveData is a value holder; that long string is going to stay in memory, the LiveData is going to hold on to it, so you probably don’t want to use LiveData for something like that.
Okay. The second item is about sharing instances of LiveData. At one point, I was trying to make an app with LiveDatas and I had a repository, and I thought, okay, I can just cache and share a single LiveData. In the repository, it takes a data source, and the LiveData we return from loadItem is shared by everyone that calls loadItem. Now, this is fine, it works. But there’s a very interesting case. This anti-pattern is about thinking carefully about which observers are going to be active. There is a case in Android where two activities are going to be active at the same time. Imagine activity 1 observes item number 1 and activity 2 observes item number 2. When we open activity 2, it’s going to load data for item 2, and activity 1 is also going to receive that data. Because it’s in the middle of the transition animation, you’re going to see a flash, a glitch. If you’ve created a field that’s a shared instance of LiveData, you’re probably doing it wrong. The solution is to create a new LiveData every time; it’s very lightweight.
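For illustration, a sketch of the recommended shape, returning a fresh LiveData per call (the Item types are hypothetical):

```kotlin
// Instead of caching one LiveData field in the repository and sharing it
// between callers, return a fresh LiveData per request; LiveData is cheap.
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData

data class Item(val id: String, val title: String)

interface ItemDataSource {
    fun fetch(itemId: String, callback: (Item) -> Unit)
}

class ItemRepository(private val dataSource: ItemDataSource) {

    fun loadItem(itemId: String): LiveData<Item> {
        val result = MutableLiveData<Item>()
        dataSource.fetch(itemId) { item ->   // async callback from the data source
            result.postValue(item)
        }
        return result                        // each caller gets its own instance
    }
}
```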
The third item is about where and when to create your
transformations and this is all about wiring. It’s similar to
when you create a circuit. You lay down your components and you
wire everything up and for one set of inputs, you’re going to
have a known set of outputs. But you don’t unplug a wire when
it’s in operation and plug it in somewhere else. This is exactly
what this ViewModel is doing. Lots of horrible things
happening in this ViewModel, by the way.
[Laughter] For starters... You should put a “don’t do this” label on this slide. [Laughter] Someone copy-pasted it and then blamed us. [Laughter]
It’s exposing itemData, which is a var, and it’s also exposing a MutableLiveData. You should almost never do this; you should always expose something immutable so your observers can’t change it. Then, after subscription, we call loadData from our activity to set the ID of the thing we want to load, and we reassign itemData to a new LiveData. The observers that already subscribed are not going to know about it; even if you return the new LiveData, the existing observers are still subscribed to the old one. So, the solution to this requires a little bit of planning. We have two LiveDatas: one is mutable and private to the ViewModel, and the other one is the one that is exposed from the ViewModel; it is a Transformations.switchMap whose source is that mutable LiveData, so every time the item ID changes, the transformation is called with the appropriate ID. After the subscription to this itemData has happened, we call loadData and pass the ID, which might come from the intent, and when we set the value, it triggers an update and everything works as you expect it to.
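A compact sketch of that wiring, reusing the hypothetical Item and ItemRepository types from the earlier sketch:

```kotlin
// The recommended pattern: a private MutableLiveData for the ID, and a public,
// immutable LiveData built once with switchMap. Observers keep working even
// though loadData() can be called at any time.
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.Transformations
import androidx.lifecycle.ViewModel

class ItemViewModel(private val repository: ItemRepository) : ViewModel() {

    private val itemId = MutableLiveData<String>()

    // Wired up exactly once; never reassigned after observers subscribe.
    val itemData: LiveData<Item> = Transformations.switchMap(itemId) { id ->
        repository.loadItem(id)
    }

    fun loadData(id: String) {
        itemId.value = id   // triggers the switchMap with the new ID
    }
}
```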
Okay. So, we like to think LiveData is awesome and it solves all the problems, but it doesn’t, and we see people trying to use it in other areas. I want to make it clear: if you’re writing a heavily reactive application with lots of stream operators and you’ve fully bought into that, use RxJava. If you have things that are not related to the lifecycle or the UI, say you’re trying to synchronize data with the back-end, there’s no reason to use LiveData for something like that; use a callback.
Another use case is chained data operations: you have data, you convert it, you write it back and return it. For those things, if you’re using Kotlin, coroutines are a new and exciting option, or you might use RxJava, but don’t use LiveData. LiveData works very well as the last layer, closest to your UI, and that’s perfectly okay; but if you try to scale it into your whole data layer, it’s just not going to work.
So, for many of the things we mentioned in this talk, Jose has actually written more; you can go read those posts. Check out our samples on GitHub: we have simple uses of LiveData as well as more complicated ones using Room, with multiple data sources and transformations, and you can look at the code. If it has bugs, you can blame this guy; he wrote it. Yeah. Thank you very much for coming. I hope this was useful, and we will be in the sandbox area after the talks. Thank you.
Thank you. [Applause]
Okay. We will start. Sorry. Okay. This guy’s new, who are you?
[Laughter] I’m Daniel Santiago. I work
mainly in Room. Okay. So, today, we are going
to talk about Room. But, before we — I don’t think my clicker’s
working. Nope. Working on it, guys.
Okay. Sorry. All right. Yes, it started working. Now
I’m trying to go back. Okay. Now — yes. All cool.
So, why do you want Room, or why would you want to use Room? We ask people to write offline-ready applications [no audio].
It’s pretty much impossible. So for this reason, you do need a
database. On Android we have SQLite. It’s very fast, and when you need to optimize it for your use case, it’s very easy to do so. It has a really powerful query language: you can express many things and keep them concise. And SQLite scales very well: for an application, you probably won’t have much data, but you can have multiple gigabytes, and at that scale SQLite is still perfect. But you do need to write a lot of
boilerplate to convert between your Java objects and your SQLite rows. There’s no compile-time safety, so if you’re building a query and you forget an edge case, you’re going to get a runtime crash. And you cannot observe what has changed: we want people to write reactive applications and UIs, and if you can’t observe the database, that is hard; you have to build it yourself. So, we built it for you. About two years ago, we shipped Room, introducing compile-time safety and a strong IDE integration. As you may have noticed with Room and with Navigation, this is a big thing for us: we want to develop libraries together with Android Studio.
At this year’s I/O, we shipped 1.1 with better SQLite support and integration with Paging, so you can take large query results and load them page by page. The 2.0 release was the conversion from the Android Support Library to AndroidX; we kept it the same as 1.1 so you can have an easy migration. 2.1 is what we’re going to talk about today: it has full-text search, database views, AutoValue support and more.
One of the pretty cool new features we added in 2.1 is full-text search. It’s a way to index text documents and make them searchable. Let’s take a look at an example. Imagine we have a music app and we want a search function: you type something and you want to be able to find songs within that music app. With Room, we store this song data in a table that’s defined by an entity. Conveniently, we have our song objects and their labels, you know, the song name, the label name and the artist name; this is what we want to search, so we want to index it. If we were to do this with plain SQL, we’d need to write a query using the LIKE operator. This is not very good; it’s very limited. That percentage sign is kind of like a wildcard, and even if you index that column, SQLite won’t be able to use the index. So, don’t do this. Moreover, if you try to actually search across multiple columns, you have to expand the query, and this, as you can see, doesn’t easily scale. Full-text search helps us with this situation, because it creates a virtual table where all the columns are indexed for searching; you annotate your entity to get one. Now, in your query, you use a different operator, MATCH, against the table itself, and that basically tells the MATCH operator that you want to search across all those columns, so this helps us with searching. You might say, oh, I can use this on all of my tables, but not quite. It consumes more space, because when you create an FTS table, it’s backed by a few extra tables that hold a lot of the index information. These are known as shadow tables: when you query your table, the information comes from those tables.
There are also a few limitations; you cannot have custom primary keys or composite primary keys, for example. But there’s one pretty neat feature, which is external content. Going back to this, if we wanted to keep our real table and create a second, searchable table for only our labels, we basically use the same annotation but tell it, hey, my data is actually going to be stored in this other table that I already have. Conveniently, this data class for the virtual table has only the labels; in the previous version, even the URL was indexed, which is not what we wanted. What happens now is we have the FTS table in front, and behind it we have the same shadow tables for the indexes, but the actual content is stored in the table we already have. This is way better at saving space and it’s a little bit more flexible. To query this external-content table, you do have to query from the virtual table and then do a join, because we want to get the songs back, and similarly, you would still use MATCH. One thing, though: because these are two different tables, when you insert into the songs table, things are not automatically inserted into the index, which means it doesn’t get updated, so you would have to do that yourself. But, you know, we don’t want you doing this; we want to make it easy. When you use Room, it will create triggers for you to keep these two things in sync, which is pretty cool.
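A sketch of what the external-content setup and MATCH query can look like in Room 2.1 (entity and column names are hypothetical, mirroring the example):

```kotlin
// Room 2.1 full-text search: a plain entity for the songs plus an FTS entity
// whose content comes from it (external content), so the text is not stored twice.
import androidx.lifecycle.LiveData
import androidx.room.Dao
import androidx.room.Entity
import androidx.room.Fts4
import androidx.room.PrimaryKey
import androidx.room.Query

@Entity(tableName = "songs")
data class Song(
    @PrimaryKey val id: Long,
    val songName: String,
    val labelName: String,
    val artistName: String,
    val url: String
)

// Only the searchable columns; the actual rows live in the "songs" table.
@Fts4(contentEntity = Song::class)
@Entity(tableName = "songsFts")
data class SongFts(
    val songName: String,
    val labelName: String,
    val artistName: String
)

@Dao
interface SongDao {
    // MATCH against the FTS table, then join back to get the full Song rows.
    @Query(
        "SELECT songs.* FROM songs " +
        "JOIN songsFts ON songs.id = songsFts.rowid " +
        "WHERE songsFts MATCH :query"
    )
    fun search(query: String): LiveData<List<Song>>
}
```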
Another important feature is support for database views. Say we have songs and albums, and a song might be in multiple albums, so we have a junction table that associates songs with albums. Now, this is all cool. You want to fetch a listing that has the album name and all the songs in it as a list. Okay, cool. So we write a query that fetches from that junction table. But you cannot do this, because that table doesn’t have the song’s name or the album’s title. You kind of need to write a query like this, where you fetch from the junction table and join it with the song and album tables, and then you can return your list of data.
SQLite is powerful enough that you can express this, but if you find yourself writing these joins over and over, it’s just boilerplate. It would be cool if you could have a table that has the song and the album together, without duplicating the data, the songs and the album titles. This is where database views come into place. You basically write the query that defines album and song together, and you annotate a class with @DatabaseView; inside, it’s the same as a Room entity, you can have fields and so on. Once you declare it, you add it to your Room database. With that declaration, if you rewrite the previous query, you just select from that view. Sure, we are selecting from a view, so that table doesn’t exist, but for all intents and purposes of querying, it behaves like a table. Now it’s much simpler. You can also return LiveData, because we know how the view is constructed and we know when it might change, so you can observe it and run queries. You can do everything you can do with a table, except inserts and updates. And you can have views inside other views. So, this makes it much nicer to write queries and allows you to logically organize your data.
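For illustration, a @DatabaseView sketch in the spirit of that example; the table and column names are hypothetical, and the entities they refer to are assumed to exist elsewhere:

```kotlin
// Room 2.1 database view: the album/song join written once, then queried like a table.
import androidx.lifecycle.LiveData
import androidx.room.Dao
import androidx.room.DatabaseView
import androidx.room.Query

@DatabaseView(
    "SELECT album.name AS albumName, song.title AS songTitle " +
    "FROM album_song_junction " +
    "JOIN album ON album.id = album_song_junction.albumId " +
    "JOIN song ON song.id = album_song_junction.songId",
    viewName = "album_song"
)
data class AlbumSong(val albumName: String, val songTitle: String)

@Dao
interface AlbumSongDao {
    // For querying purposes the view behaves like a table, and can be observed.
    @Query("SELECT * FROM album_song WHERE albumName = :album")
    fun songsIn(album: String): LiveData<List<AlbumSong>>
}

// Views are registered on the database alongside entities, e.g.
// @Database(entities = [...], views = [AlbumSong::class], version = 1)
```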
Another important feature we have added is support for multiple instances. So, let’s say we are writing a music application: we have a playlist with all the songs, and we have a sync service that goes and pulls name updates for my playlist. When you’re using Room, if the sync service updates the database, it automatically updates the UI, and this is a super cool feature because you barely write any code for it. This works perfectly, but then your application gets bloated, so you decide to move the sync into a background process. It pulls the songs and writes them into the database, but the UI has no idea; it doesn’t know the database has changed, because it only knows about changes made through the same Room instance, and we don’t get that information from SQLite. Now, with Room 2.1, you can enable multi-instance invalidation, which will look for other instances of Room. Once you do that, your background process service can update the database and everything observing Room will update automatically. Now, this is off by default, because we need to create a service to coordinate the instances, and that’s a cost most people don’t need. You enable this flag to take advantage of the feature.
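A sketch of opting in when building the database (MusicDatabase reuses the hypothetical Song/SongDao types from the earlier FTS sketch):

```kotlin
// Opt in to multi-instance invalidation so a Room instance in another process
// (for example, a sync service) invalidates observers in this one.
import android.content.Context
import androidx.room.Database
import androidx.room.Room
import androidx.room.RoomDatabase

@Database(entities = [Song::class, SongFts::class], version = 1)
abstract class MusicDatabase : RoomDatabase() {
    abstract fun songDao(): SongDao
}

fun buildDatabase(context: Context): MusicDatabase =
    Room.databaseBuilder(context, MusicDatabase::class.java, "music.db")
        .enableMultiInstanceInvalidation()   // off by default; has a small cost
        .build()
```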
Another feature we added, which was actually requested by the community, is AutoValue support. If you’re using Kotlin, you don’t have to worry about this because you have data classes. If not, you might be using AutoValue, and Room can now understand these AutoValue-annotated objects. If you know a little bit about AutoValue, you have an abstract class and you annotate it with @AutoValue. Now you can annotate that same abstract class as an entity, and Room will be able to discover that you want to make a table for it. Its abstract methods can now be annotated with Room annotations, like the primary key and column information and things like that. The only caveat is that you have to add the AutoValue.CopyAnnotations annotation; this is the annotation that makes these two tools work together. To support this, annotations that were normally only allowed on fields needed to be extended so you can put them on those abstract methods, but that only works if you’re using AutoValue. Similarly, where a normal class would have fields, you still need the static factory method, and Room will be able to discover it; you use the abstract class that you declare.
Another highly-requested feature that has been asked about for a while is asynchronous return types for insert, update and delete. We listen: when you request it, we’ll listen. [Laughter] This is actually only available with RxJava at the moment, which is interesting. It might be available…
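To illustrate what those RxJava return types look like in a DAO (reusing the hypothetical Song entity from the earlier sketch):

```kotlin
// Room 2.1 async return types with RxJava 2: insert/update/delete can return
// Single, Maybe or Completable instead of blocking.
import androidx.room.Dao
import androidx.room.Delete
import androidx.room.Insert
import androidx.room.Update
import io.reactivex.Completable
import io.reactivex.Single

@Dao
interface PlaylistDao {
    @Insert
    fun insert(song: Song): Single<Long>    // emits the new rowId

    @Update
    fun update(song: Song): Completable

    @Delete
    fun delete(song: Song): Single<Int>     // emits the number of rows deleted
}
```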
So Room 2.1 is a really big release, the full-text search,
the database views. When we decided which features to work
on, we were basically relying on your feedback. People really wanted
it. This is our philosophy, we look at what the community is
doing, how are they using it, what do they want and implement
it. So, please, like, try to use 2.1. It’s a very big
release and we want to ship it as stable as soon as possible
and we need your feedback. We look at the number of apps shipping with these features and see how they are being used. We look for the incoming bugs (well, we don’t really have bugs, but sometimes…). Look for incoming user errors. [Laughter] And fix them. So, please work with us, and we’ll try to wrap it up and ship it. Also, please let us know what other features you want in Room. All right, thanks a lot
for coming to this talk. I hope it was useful.
[Applause] Thank you.
We will be in the sandbox area, after the talk.
Thank you. Everyone, our next session, in
this room, will begin in 10 minutes. Thanks. Welcome, everyone. Welcome
back. Our next session will get underway in about two minutes.
As a courtesy to our presenters, we ask you to mute your devices.
We’ll get under way in about two minutes. Hey, everyone, welcome to our
lightning round, and this is where we try to smash an incredible amount of content into 40 minutes. This is going to be really, really fun. We have an incredibly distinguished set of
speakers and I’m going to talk really fast. If you hear a
gong, that means somebody has gone over. We are going to try
to move this really, really quickly.
Our first talk is going to be about JNI. Please welcome
Elliot Hughes. [Applause]
My name’s Elliot Hughes. I’ve been working on Android for a while.
My first job on Android was working on the JNI libraries and
cleaning up some of the bugs. First off, I’ll show you what
you’re expecting to see when you see JNI, which is code that
looks like this. I’m guessing no one can tell that the code
that does anything useful isn’t on that screen yet. And I’m
guessing no one can tell me where the useful line is in
that, either. This talk is how to not do that.
How do we get away from that? The one-line answer is, use C++
better. If you’re using the raw C APIs, it is tricky. You end up with a nesting style, and there are a lot of special cases, like: I’m trying to throw an exception, but there’s already an exception pending. So, don’t write that in every single JNI method; write it once. And in particular, have classes that let you use a string as a string: you know, use a jstring as if it were a std::string. Similarly for arrays: you don’t want to deal with a jarray when you could use operator square brackets. Local references, too. The
strings and rimative arrays are most of them. Exceptions,
harder than they look and the — the sort of raw primitives you
get in JNI are not super useful. They expect you to find the
class yourself and create an instance. If you want to
actually include a proper detail message or a cause, you end up
doing weird things like, I need to find the constructor for
this. Blah, blah, blah. It’s a lot of code, especially if you
deal with special cases. Having a function that takes a format
string is a huge relief and — right.
I’ve been talking about this in the abstract saying, you should
use these things. There are many choices and I think a
problem a lot of people have is they get hung up on what’s the
best way to do this? Any of these are better than writing
the code that we saw. Android uses a native helper. It has
things to do the stuff I’ve been talking about. If you don’t
like any of the others on the internet, you can write your
own. So, what does it look like if
you switch to using something like this? This is the same
code. This is the same two slides we had before, now
condensed into one. I think five seconds is enough time to
see what does the work here. You don’t need to have the style
where we have the constructors and we check, did that actually
work? If we’re prepared to use C++ exceptions, that is more
advanced. This gets you 90% of the benefit with 20% of the
effort. This is what the code looks like in Android for that
call. So, that was a really simple
thing, where there really was just one line of active
ingredient in there, but this scales really well. Our
recommendation is you try to keep your code like that. Don’t
mix all the JNI boilerplate stuff. On the other end, you
wouldn't mix your business logic and UI rendering stuff. It's similar advice, don't do that. If you want a good example of
this, the Android system Os class is implemented in exactly this same way. It's super repetitive, really boring and
that’s the way we like it. If you need to worry about old
Android releases and you have multiple SO files, that can get
tricky. We recommend you go to GitHub. Keeping the files uncompressed was mentioned earlier in the
keynote. One big library is generally smaller than lots of small libraries.
Thank you. If you have questions, please come find me.
I’ll be doing open house all afternoon. Thank you.
[Applause] I wanted to share with you a
short story about my experience with the Kotlin multiplatform
project. When we come and talk about Android and Kotlin, what
we really mean is Kotlin/JVM, that's the Kotlin that we know that gets compiled to Java bytecode and
we can transform it and run it on Android. It can run on cloud servers and our
desktops and so on. There are two more flavors of
Kotlin, JVM. And then there’s Kotlin/Native
that can run or target various platforms, even web assembly and
even Android. How would we actually get
started with this. Kotlin 1.3 has a new structure. If you
apply that, you can then select from a set of presets to target
any of these platforms. Here, I’m targeting an Android library
and a JS target. When you add these to your module, it
automatically creates source sets for these Kotlin files. If
you put your Kotlin files in the JS main folder, they will get
compiled to JavaScript files. What do I mean by
platform-specific Kotlin, the reference pages for each of the
Kotlin packages, you mouse over any of them, in the top-right,
you see these multi-colored chips that tell you which
compilation target this is available on. The Kotlin
browser package, that lets you access interfaces from the web
browser environment only makes sense on the JS target.
Fortunately, others are available across all the compilation targets; you can see them marked Kotlin
Common. This is a pure Kotlin library that can run independent of any platform it's targeting.
In fact, if you add the
multi-platform plugin to your project, along with the
platform-specific set, you get a common source set where you can
put platform-independent code. The thing about
platform-independent code, it cannot call any of the platform
APIs. It cannot call any of the JS-specific or Android-specific APIs. The other way around works:
you can have your platform-specific code depend on a shared common library or source set.
So, knowing all that, I set out to write an example app just to
learn about Kotlin multiplatform and I wrote a game. I still
need to create an Android app with Android-specific code just
like I normally would and then a web page with JavaScript code to
initialize things in my app. In my case, I take the sudoku engine out and put it in a shared library
using Kotlin Common. And in fact, the only source set I have is commonMain
so I put all my code there and that means it’s available across
all the platforms I choose to target.
But then I thought, okay, I have this core engine for solving my puzzle but I would also like to draw
the board on my screen and why code it separately if it should look the same on each of
them? Wouldn't it be nice if I had an API for drawing on the
screen? But then what I want to do is I want to actually have it
delegate to each of the platform’s implementation. I
want to use the Android canvas and the HTML to draw on the
website. The thing is, I just told you that Kotlin Common code
cannot call any platform interfaces so I can't depend on those. How does it work in Kotlin?
Well, there's this expect and actual mechanism that lets you
declare expected classes in your common code, which is like
almost defining an interface in Java. In my platform-specific
source sets, I provide the actual implementation that can
use APIs, such as the Android canvas.
Now, when I add that dependency from my common source set to the
other one, it looks something like this. But, actually, when
compiling for a specific platform, such as JS, this
dependency will actually use HTML canvas.
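A minimal sketch of such an expect/actual pair, with invented names; the two declarations would live in the commonMain and androidMain source sets respectively, and a jsMain actual would wrap the HTML canvas instead:

```kotlin
// commonMain: the common game code only sees this expected declaration.
expect class BoardRenderer() {
    fun drawLine(x1: Float, y1: Float, x2: Float, y2: Float)
}

// androidMain: the actual implementation is free to use Android APIs.
actual class BoardRenderer actual constructor() {
    private val paint = android.graphics.Paint()
    var canvas: android.graphics.Canvas? = null   // supplied by the Android UI layer

    actual fun drawLine(x1: Float, y1: Float, x2: Float, y2: Float) {
        // Delegates to the Android canvas; the JS actual would call into the HTML canvas.
        canvas?.drawLine(x1, y1, x2, y2, paint)
    }
}
```

When the common code is compiled for the Android target, calls to BoardRenderer resolve to this actual; for a JS target they would resolve to the HTML-canvas-backed one.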
[Laughter] Okay. If I could just show the
link to the project so that everyone can look at it, that
would be great. [Laughter]
[Applause] Yeah, that’s it. [Applause and cheers]
I guess not. [Laughter]
I’m Nick Butcher. I’m a designer and engineer at Google and I love
vectors. Most assets in your applications should be vectors
these days. Vectors are awesome. They’re sharp on every
single density display. They’re also extremely flexible and I
want to talk about this so you can get the most out of vectors.
So, most vectors in your app probably look something like
this. They have paths hard-coding a color, something like this, a fill or stroke here. Maybe you're using a color
resource. There’s a lot more you can do here. The first
thing is using theme colors. So, you can use theme colors in
two ways. You can apply a theme color as a tint. It will tint
the entire drawable. Here is for the icons and you can have
one, single asset that displays in different themes. You no
longer have to worry what color asset you got from your
designers, that they got exactly the right shade of gray you
need. It will be tinted at runtime so it’s always correct.
In this example, I'm going to use colorPrimary. Say you have a sports app which uses a theme for a
given team. You can reference that theme color so you have a single drawable.
Vectors support color state lists, so you can do some fun stuff. Here, we're changing color. Or
perhaps you have a list app where, when a row item is selected, you can change the rendering. You
could do this with a drawable and flip between them, but if the rendering is 99% the same and you
only want to change the stroke here, this saves duplication. You define it like this in your color
resources and refer to it as you would a color resource.
My favorite feature is gradient. Vectors support linear, radial
and sweep. A linear gradient has start and end X/Y coordinates.
You can actually get much more fine-grained and embed these
tags inside it to define individual color stops. Here,
I’m going for a color at 72% of the way through.
You define gradients in a resource directory or with the in-line resource syntax, and at build time
it will be extracted to a color resource, which is handy. Gradients have been super handy.
Here's an illustration from the I/O app. It would have been one-fifth of the size of what we
otherwise had to ship. It's useful for adaptive icons. Vectors don't support drop shadows, so if you
need to build a customized one, gradients are necessary to achieve it.
Gradients have certain shapes, but they can be transformed,
like rotated and so on. I wanted to create this
oval-shaped shadow. So I did this by drawing a circle with a
radial gradient and using the scale y feature to transform it
to produce the effect I was after. If the gradient doesn't fill the entire shape, you can use a tile
mode: clamp continues the edge color outwards, repeat mode will repeat the gradient, and mirror mode
will go back and forth through the gradient.
You can also use gradients which don’t go through different
colors. You can have the solid color block. Why would you want
a gradient that doesn’t do that? You can have some fun. This
example is one single shape using a radial gradient. You
can do a loading spinner or combine it. This is the
gradient over this area and you can have some fun animating it.
So, hopefully I’ve shown you that vectors are sharp, small
and effective. I want to show you what you can build. This is a single vector drawable. I had the pleasure of animating it.
This is one vector drawable, extremely small, extremely
sharp. That's vector drawables. Thank you very much. [Applause]
We have a short pause while we
have the next session queuing up. I hope you enjoyed that vector-oriented presentation.
We’re going to get into a little bit of Data Binding, hopefully.
So, how’s everyone enjoying the summit so far?
[Applause and cheers] I want a little bit more
enthusiasm, but I’ll take it. [Laughter]
And, we are so excited to be doing this again. It’s the
first time we’ve done this in three years and it is so great
to get out in front of you for Android’s 10th anniversary.
And, again, thank you, all. I know many of you — how many of
you traveled for more than 1,000 miles to be here. Wow! All
right. That’s amazing. We’re almost there, I think.
All right. Hold on, we’re having technical difficulties.
Are we ready? All right. Excellent. Let’s do this.
Level Up with Data Binding. When Data Binding came out back in 2015, my reaction was, what have we
done? Expressions. Data Binding is pretty cool. You can actually choose how much you want to use. At
the beginner level, there are immediate benefits. At the intermediate level, you get custom binding
adapters. You can bind data to UI and UI to data. Let's get rid of findViewById.
[Applause and cheers]
We need to enable Data Binding. All you have to do is set dataBinding to true in your Gradle file and
put in the layout wrappers. You can do that in Android Studio automatically now by pulling down and
saying convert to Data Binding layout.
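That first step is just a one-line build change; here is a minimal sketch using the Gradle Kotlin DSL (the exact property has moved around between Android Gradle plugin versions, so treat this as illustrative):

```kotlin
// build.gradle.kts (app module) -- turn on Data Binding so layout wrappers
// are compiled into generated binding classes.
android {
    buildFeatures {
        dataBinding = true
    }
}
```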
You can inflate it and set your attributes like this. You’re
going to want to use real Data Binding so let’s talk about
binding expressions. We declare variables in this data section
of our layout and then we can use expressions in layout XML attributes. They are in curly braces.
Here are examples. We're binding a text property to a ViewModel property. In the second one, a height
of zero, and in the third one, we use a lambda, which gets passed the TextView. The fourth one
references another view. Now, to give Data Binding
access, we set the binding object like this after inflating
the layout. So, pretty straightforward and then our
ViewModel is now available to that layout. But the real
question is, how does this all work? And the answer is, there
is no magic in Data Binding. But it does seem like magic and that's because we have adapters. Every call is made via a
binding adapter. You can see the code and use a debugger.
That last line is actually the set text we’re looking for.
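Putting the setup steps from earlier together in Kotlin might look roughly like this; the generated ActivityMainBinding class, the viewmodel variable and MyViewModel are hypothetical names that depend on your own layout and code:

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.databinding.DataBindingUtil

class MainActivity : AppCompatActivity() {

    private val viewModel = MyViewModel()   // illustrative; usually obtained from a ViewModelProvider

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Inflate through Data Binding instead of setContentView + findViewById.
        val binding: ActivityMainBinding =
            DataBindingUtil.setContentView(this, R.layout.activity_main)
        binding.viewmodel = viewModel    // matches the <variable> declared in the layout
        binding.lifecycleOwner = this    // needed later so LiveData bindings can observe
    }
}
```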
They make it behave intelligently across all these
views. Looking at the source files will
help you build your own custom binding adapters. So, let’s
talk about it. The adapters take one or more attribute names. The method takes a view as the first
parameter. They can just — adapters can differ just by data types. So you can also use
adapters to override the behavior. This one makes the image load with Glide, but you have to be
careful with this. We can also do a bunch of stuff with advanced binding adapters. Sometimes the old
value is useful, like with a listener. The binding compiler will pass the old value in as the first one.
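A hedged Kotlin sketch of such adapters — the attribute names and the loading logic are invented for illustration, and a real app would delegate to an image loader:

```kotlin
import android.graphics.drawable.Drawable
import android.widget.ImageView
import androidx.databinding.BindingAdapter

// Single-attribute adapter: called whenever app:imageUrl changes in the layout.
@BindingAdapter("imageUrl")
fun ImageView.bindImageUrl(url: String?) {
    // In a real app this would hand the URL to something like Glide.
    contentDescription = url
}

// Multi-attribute adapter: invoked when both attributes are set on the view.
@BindingAdapter("imageUrl", "placeholder")
fun ImageView.bindImageWithPlaceholder(url: String?, placeholder: Drawable?) {
    setImageDrawable(placeholder)   // show the placeholder; the loaded image would replace it
}
```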
And, also, you can use multiple attributes, which is pretty cool — as in the second adapter sketched
above. So you can define these multiple attributes here when you declare the binding adapter, and
those are both available to your code. Observability is cool. We can use LiveData to automatically do
observation. This is pretty cool. We're only exposing an immutable class here. And then you just
expose a LiveData using Kotlin. And then you need to do one additional change. You need to set the
lifecycle owner so you
can observe it. Two-way Data Binding. This is really trivial when you're actually using LiveData. You
can turn one-way Data Binding into two-way Data Binding with the @= notation. We can observe
LiveData, so, in this case, it's fine to expose it, and then we set the lifecycle owner and we use the
@= notation and that's it. Two-way Data Binding. Maybe that's not so
expert anymore. Check out the Data Binding
codelab. There you go. [Applause]
Hey, everyone. My name is Carmen and I’m on the Android
performance team and today I’m going to show you examples of
analyzing performance using Systrace. Before I do, I want
to remind you that your app is not an island. It’s running on
top of several layers, the phone hardware, the Android framework,
libraries, A/B tests. The reality might actually surprise you. And this is where Systrace comes in.
So Systrace is a tool that lets you collect precise timing information about what's going on on your
device, and visualize it. It records down to the individual CPU time slice. It's the most important tool we
have for debugging issues. We have given talks about how to
use Systrace in the past. Google for the I/O talk.
Today, I want to talk about the issues you can find. I used Systrace on three apps I don't work on. Let's jump in. With the
first app, when I look at the trace, three different activity
starts jumped out at me right away. There’s a lot of reasons
to use trampoline activities. I often see them when developers are trying to show a splash screen. They definitely impact your launch
time. If you’re trying to make a splash screen, you could set
up a launch theme or refactor your code so you only open the
separate activity when you need to. I don’t know why this app
has these activities, maybe they are critical.
In the same app, I also browsed through the names of the views being inflated; it looks like a drawer
view. Those often have a lot of child views. Sometimes we need them immediately for UX reasons, but
deferring this inflation could save 42 more milliseconds.
The second app is following what I would expect. There’s no
extra activities or services being started. I dug in more
and clicked on the views being inflated and the names of the
widgets I could see matched up with what was visible with the
app. Then I saw this gap in activity inside bindApplication that takes up 30 milliseconds. I can see
it's monitor contention. Monitor contention is basically lock contention, where another thread owns
the lock. And so I scrolled down and I did see activity during this time.
And then it’s giving me a pointer to the stack. I wasn’t
familiar with Realm. It's a mobile database library, like SQLite. This may or may not be something
you can fix because you might need to coordinate with the Realm library.
Either way, this is another potential 30 milliseconds. There are two activities being
started, but there’s another potential improvement here. I
included the thread name, this is the UI thread. If we scroll
down, we can see these background threads running. CPU
0, CPU 1, CPU 2. It's awesome they're using background threads but
there’s a potential performance issue. These background threads
are doing a lot of blocking I/O. So that’s the orange sections.
They’re kind of hard to see. So you can see there’s some I/O
happening. Now it turns out that on a lot of devices, we
have to be concerned about I/O contention. There’s not
necessarily more than one channel to use. They may be
slowing down the I/O requests from the UI thread. That is highlighted down below. We see it busy for
4 milliseconds, and we can see that in this section we spend 107 milliseconds. We could shorten this
if we moved the background work to overlap with something else. All I needed to do was clone.
You can open the output HTML file in your browser and see
everything that I showed you today. And this barely
scratches the surface of what you can do with Systrace. I was
able to identify these opportunities in apps I don’t
work on. When you look at a trace of your
own app, it’s going to make 100 times more sense to you. And,
you can even add your own trace points inside your app code so
you can see the context of what’s running in your app from
within the trace. Thank you.
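As a rough illustration of such custom trace points, here is a minimal sketch using the platform Trace API (the section name and workload are invented; the androidx.tracing library offers an equivalent compat API):

```kotlin
import android.os.Trace

// Wrap an interesting piece of work in a named section so it shows up
// in the captured trace next to the framework's own sections.
fun loadStartupData(): List<String> {
    Trace.beginSection("loadStartupData")
    try {
        return listOf("config", "user", "feed")   // placeholder for real work
    } finally {
        Trace.endSection()
    }
}
```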
[Applause]
Hi, everyone. Hope you've been enjoying day one so far. I'm Parul. We want to talk to
you about certain practices that you, as app developers, can adopt to build products that continue to
provide value to users and, at the same time, protect their right to privacy. We want Android to be a
platform where you can offer personalized experiences; at the same time, privacy is important, as is
transparency in terms of what data you're collecting and how you're using it.
So, it can serve as a platform
to build social experiences. We have ongoing efforts. So,
things we want to touch upon today are how your apps are
accessing user data, ensuring your users know what is being accessed, and transparency. So, we have a
multi-level approach. We focus on improving APIs.
We have set up systems for keeping abuse in check and ensuring that a level playing field is provided
and a safe experience for our users. We've built out systems to identify when apps may be abusing a
user's personal information. We also have human reviewers, so it's not just bots and AI, but actually
people who review the apps to make sure the user's right to privacy is secured and a safe experience
is provided. We also invest heavily in security. We have the security program and vulnerability
program. Feel free to drop by if you want to know more about
these. Finally, we have Google Play,
which helps users conserve their privacy and security — sorry.
Why are we here speaking to you today? We feel privacy and security is a partnership between the
platform and developers like you. You, as the developers who build these apps, play a very big role
in the ecosystem. You can advocate for better privacy policies and ensure users' right to privacy is
not compromised. For example, if you have a request to collect certain information about a user from
their device, a question always worth asking is, do we really need it? How do we plan to use it? And
are we still using it? The reason is that it is actually not uncommon for certain information to be
collected which is never used and is sometimes abandoned.
Some data we see being collected includes the IMEI, or a list of installed apps in order to target ads
to them. Or collecting information about a network, such as the name or signal strength. You probably
don't always need such information.
So, for a few cases, we have privacy-friendly options. Some examples: we encourage you to use
instance IDs. If you're trying to confirm the user's phone number, we recommend you use those. You
can also consider coarse location instead of fine location, and if you want to see if a user is in a
call, you could check for audio focus rather than requesting the read phone state permission, which
gives out a lot more data.
Lastly, you want to ensure that your users are aware of what data is being collected and how it is
being used. This is not just a best practice, but a requirement. If a user is not aware that some data
about them is about to be collected and for what purpose it is being used, you are required to
disclose it to them and get permission to do so.
Say data is being collected from the device for fraud prevention purposes: this needs to be disclosed
to the user, and only after the user consents to it should the data be transferred. We've worked hard and we're
going to walk you through a few of those changes. In Android 9,
we split up the call log permissions from the phone
permission group into their own permission group, call log. If your app is reading phone numbers from
the phone state broadcast, we are requesting you to ask for both the call log and phone state permissions.
In Android 8, we replaced it with a Build.getSerial function that requires the read phone state
permission. Please make sure you use it for valid reasons. Speaking of limited access,
apps running in the background will have the following restrictions: you will no longer have access
to the mic, camera or sensors in the background. If your app needs access to sensor events on
devices, you will need to use a foreground service, and to further inform and protect the user, the
system will add a visual aid when apps are accessing the camera or mic. Our contacts provider API used
to allow apps to access data and glean information. As of
January next year, a limited set of contact fields and methods
will be made obsolete. So if your app is accessing or
updating these fields, we ask you to use alternative methods.
You could fulfill certain use cases by using private content
providers or storing data in your back-end systems.
So this was a really brief talk. We hope we've been able to offer you insight to build apps that are privacy-conscious. As
we’ve mentioned, we definitely believe that security and
privacy is a partnership between you, the developer, and our
platform. If you have any questions about it, please do
come find us at the office hours and I hope you have a great
summit. Thank you so much. Thank you. [Applause]
Hello, everyone.
It’s great to be here. My name is Phil Adams. I’m a
researcher. And I’m Pierre Lecesne.
We’re here to talk about how we’re rethinking app
distribution on Google Play. We’ll talk about the format and
share features we've been working on. To start with, let's
talk about app size and the impact it’s having on your app.
Why does app size even matter? We shared this chart at Google
I/O and you saw it earlier today. It shows when the app
gets bigger, install success rate goes down. Many users
don’t have enough space left on their device. Data can be
expensive and connection speeds, slow. I want you to think about
your own experience, too. How many of you have seen a warning from Play? We've started looking into this
area more closely and found that freeing up space is a major
driver of uninstalls. This is a problem for people with
low-storage devices. It's also a problem for users of high-end devices who fill up their devices with
HD content. One in five devices in the U.K. have low storage. A key request we hear from developers
is also for help understanding and reducing uninstalls. We ran a user research study to see why
users in the U.S. uninstall apps. The leading reason was quality. However, the leading reason apps
and games were uninstalled after a month was to free up space. Apps and games keep getting
bigger. They have grown over five times on average. Newer
devices have more storage, but the app, games, photos and HD
videos keep getting bigger, too. Making your app big puts it at
risk to suffer from all these downsides. They lose
acquisitions and get uninstalled to free up space. I’m sure you
already know that and you’ve probably just considered it a
trade-off. Do you add new features? Lose installs and
drive more uninstalls? We don’t want you to have to worry about
these trade-offs. For a few years, there's been a way to optimize: you can use multiple APKs, but
it's incredibly inefficient. The number of APKs grows quickly: 64-bit, 32-bit, and so on.
It also doesn’t help with some of the dimensions. The
languages are in every APK. We can do better. Let us show you
the solution we have built for this and see how the new app
model helps make your life easier.
So, the new app model is focused on improving the whole user acquisition journey. It helps by making your apps smaller,
directly improving install and uninstall rates. It also makes your releases more manageable. In that context, for the rest of
today’s session, we’ll talk about steps we want to help you
with. First, we want to help you convert more installs and
minimize uninstalls by building smaller apps. Then, we want to
make it possible for you to deliver different features to
different audiences, on-demand. And, finally, we want to help
you keep your users up-to-date on the latest and greatest
versions of your app. Let’s start with how to make
your app smaller. This is where we began, with the App Bundle. It is the new app publishing format.
Apps that have adopted it have seen a saving of 35%. That's compared
to a universal APK and that’s quite a bit. How does it lead
to such savings? Here’s the big idea, Google Play can assist and
take care of delivering just what’s needed on your behalf.
There’s no need to send a bunch of languages and device
resources. We support three slicing dimensions out of the box: language, screen densities and architecture. All of this
is made possible by split APKs, which we added in Android Lollipop. Split APKs allow one app to be
made up of multiple APKs. They can be installed in different combinations, on different devices, and
can be installed all at once or piece by piece.
Given a bundle, Google Play starts by putting everything that is common in the base APK. This is the
manifest and dex files. We then generate a different split APK for each screen density; it has all
the drawables that a device of that density would have needed.
We then also generate different
split APKs for each native architecture and generate a
separate split for each language supported by your app, putting each language's strings in a
different APK. Together, we call these
configuration splits or config splits.
When we go to serve an app to a device, we only need to serve
a portion. We will install the base APKs, as well as the
density split, the architecture split and the English language
split. It can get a bit trickier than that. I speak
both French and English and have specified both languages so my
pixel will not only receive the correct density and architecture
split, but the French and English language splits. If I
move to Brazil and learn Portuguese, it will attempt to
download the language split for all the apps on my phone.
For devices which don't support split APKs, we have stand-alone APKs per API level and screen
density. Each of these contains all the necessary files. My old Galaxy will run it. All the languages
are included in those APKs. Putting it all together, the
picture looks like this. You actually don’t need to worry
about all the details of how these APKs are generated. All
you have to do is upload a single app bundle and select the
right things to serve for each device.
To summarize, the App Bundle contains everything, and Play generates and signs each APK to deliver to
devices. Because Play is now signing the
APKs, this means you need to upload your signing key to
Google Play. This is part of the program called App signing by Play.
Is this secure? The answer is, absolutely. As you can imagine,
Google takes this very seriously. We protect your key
in the same storage we protect Google’s own keys. You’ll
benefit from our ongoing investments.
We’ve been chatting to developers who are already using
the App Bundle about what they like. Recently, we’ve conducted
a workshop with developers from India; they have millions of active installs and they're very
sophisticated about keeping their app sizes small. It improves their conversion rates.
Red Bus says it is more streamlined: switching was a simple process and they were testing with the
bundle within an hour. It's not just developers in India — developers everywhere have switched and
have seen fantastic size savings, with a 56% size saving compared to a universal APK. It's hard to
get that from incremental optimizations.
Google apps are adopting the bundle in production and seeing strong savings, as well — Google Maps
and Google News among them. They report streamlining of their release process, and note this isn't
experimental, this is ready. There are thousands of app bundles in
production. So, when you adopt the App Bundle, you're not only gaining the size savings you get
today, you'll also be benefiting from future automatic optimizations. Here is another one: we've
added support for a new Android platform feature to the App Bundle called uncompressed native
libraries. Native libraries normally have to be extracted from the APK before the platform can use
them, so the end user ends up with two copies of the library. The platform can read them directly
from the APK if they're left uncompressed.
Normally you would need two versions of your app. If you're using the App Bundle, you give us your
libraries and we create the right APKs for each device.
The size savings we’re seeing are around 16% reduction on size
and 8% reduction in download size. As I explained, the app
is smaller on disk because it does not need to make a copy.
It is also smaller to download because our compression algorithms perform
better on data that is not already compressed. Our partner
saw savings of 22% and 16% on their download size. And these
savings are in addition to the size savings they’re already
seeing. With this optimization, the download is smaller, the app is faster to install and it takes up
less room on disk. Now, we still want you to remain
in control of when these optimizations should be pushed to your users. Play will only apply an
optimization to bundles that have been built with the version of Gradle that introduces that
optimization. The uncompressed native libraries optimization will only be applied to your app if you
build it with Gradle 3.3.
Now, let’s take a look at how you can build, test and publish
Android App Bundles. You can build an App Bundle in the 3.2 stable release of Android Studio. It is very similar to
building an APK for most developers so it’s easy to
switch. For those who prefer the command line or wish to integrate with automated build systems, the
Android Gradle plugin provides tasks to build bundles. To build an APK, you would use the assemble
task; with the Android App Bundle, you use the new bundle task. Similar to assemble tasks, you
can build specific flavors. The bundle task will generate an
Android app bundle with flavor and build type chosen. It is
called bundle.aab. We do want developers to retain control over their splits. If you need to disable
splitting, you can do so using the new bundle block, as shown
here. Android Studio and Gradle are not the only ways you can build bundles today. They are open
source, others are already adopting them. Games using
Unity can build Android App Bundles, too. They added
support in the beta release and you can join the beta program
now. So, now, let’s see how you can
adapt your testing. During the development phase, when you need
to iterate quickly, you don’t need to go through the App
Bundle. You can build through Studio, much faster. Before a
release, you may want to test the APKs that would be generated
from the App Bundle. This is as easy as creating a new run
configuration and selecting APK from App Bundle. Studio, under
the hood, uses the same tool Play does so we'll get high fidelity. When you want to share the APKs
generated from the app bundle, you can use bundletool. This is what Play and Gradle use under the
hood to generate APKs. It is transparent about how we generate APKs and you can download it from the
GitHub repo you see here.
It generates what we call an APK-set archive which will
contain all the APKs for all the devices that your app supports.
You can share this archive and still use bundle tool, which
will simulate what Play does. As you can imagine, an APK set
can become quite big, so if you want to build it for a given device, you can do it with a device
specification. You can share it around and it can be installed on those devices.
This is what the command line looks like to build the APK set archive. In this case, we instruct
bundletool to build APKs only for the connected device. If you don't have a device at hand, for
example if you're generating it from a CI system, you can provide a device specification in this JSON format.
If you want a unique APK, you can build a universal APK. It
can be installed on any device and is very convenient for
sharing. The best way is to go through the internal test track on the console. You can get, byte for
byte, what your users will get. It is similar to the alpha and beta tracks.
There is no delay between the upload and it being available on the device.
You can create a list of emails. The testers can follow the link
and they'll automatically receive the latest version. We know that for some of you,
these testing options are not ideal and we see a gap in testing between Android Studio and the test
tracks. We're thinking really
hard about how to close this gap.
So now you've built and tested bundles, let's talk about a new view. In the Play console, we're starting to show an
estimate when we think an app could benefit from the App
Bundle. We’ll calculate what you could save if you switch.
Once you choose to switch, you manage your release just like
you did with APKs. Simply create a new release and drop
the App Bundle in the same location. In order to aid your
migration, you can upload APKs on your production track. When
you do this, Play is not going to reassign the APK.
We did this so you can feel confident trying out the App Bundle with a small number of users first.
Once you’ve uploaded it, you review your release, you roll it
out. And that’s it. I can’t stress this enough, there’s no
multi-APK to deal with. Play console has created all the APKs
for the devices supported for you.
Now that you’ve uploaded your App Bundle and Play’s done this
heavy lifting, it would be nice to have an overview of what Play
has done for you. We’ve built a new tool called bundle explorer,
which lets you navigate your uploaded bundles.
On the first screen of bundle explorer, you’ll see the size
savings. This is going to be different, device by device, so
we calculated this using a popular configuration. If you
click on view devices, you can see which devices are in each
bucket. Alternatively, you can search
for supported device by name to download the set of generated
APKs that get served to that particular device. This is
helpful when you get a bug report so you can get the exact
APKs that Play has served to it. We haven't forgotten about everyone who uses our publishing API: the
bundle is available through the API, and automation and CI tools are adopting it. You will find all
the documentation at these URLs. [No audio].
You can reduce the size of your app. Some features may be used by only 10% of your users; to avoid
having them pay the disk space for a feature they don't use, you can choose to extract it.
Dynamic features can be installed on demand, or you can defer installing them to a later time, when
the app goes [no audio]. Developers evaluate the app size impact to ensure that the benefit of a
feature is worth the size increase; dynamic features mean they can add features without increasing it.
Dynamic features help Facebook with their high-end strategy.
They are able to deliver advanced features to just
supported devices and remove large features that are not used often to avoid taking up
Facebook has told us that dynamic features work well when
they are working on a new feature. They can have a
separate team of engineers working on it, and they can build it and add it to the app without increasing the base app
size at install time. Here are some examples of dynamic features that are in production. For example,
card scanning is a feature that only a small percentage of users are using; making it dynamic avoids
it taking up 2 megabytes for everyone. Another one is
realtime communication. Only users with devices who can
support them and actually want to use them need to download it.
What might that experience look like for a user? Let’s take a
simple example. Imagine that you have a recipe app and you
want to keep it small. While all of your users like to browse
for recipes, only a small fraction of them like to add
recipes and you notice that this takes up significant size in
your app. You can choose to break this feature out into its
own module and serve it only when needed. We can see what it
looks like for the user here. The app opens, and then the user
goes to add a recipe. The app then requests that the module be
installed. It’s downloaded with progress visible to the user and
it’s ready to be used after a few seconds.
Which parts of your apps might make good candidates to be
broken out? If only a small fraction of your users use this
feature, it could be a good candidate. And finally,
consider if users can wait a few seconds before downloading and
using that feature. If you’re interested in
modularizing your app, I encourage you to look at these articles. Now that we have covered how
dynamic features work, let’s see how to create them. To create a
dynamic module in Android Studio, use the New Module wizard. Choose dynamic feature module. Type in
your module's name and Android Studio will create the new feature module for you. Under the
hood, this is what Studio does. A split identifier is added; in this case, we'll call it addrecipe.
This is how the Android platform recognizes that although this APK has the same package name, it's a
different module name.
A new tag is added for distribution aspects. It reads
properties of the modules of your app.
Next, you declare that this is an on-demand module, meaning
that it will only be delivered to users' devices when you request it. Note that on-demand modules are
only supported since Android L, so you need to specify what Play should do when it generates the
pre-L APKs. This is configured using the fusing tag. Here's an example with our
recipe app. We have two dynamic features: one has fusing equal to true and the other module has
fusing equal to false. You can see that Play will only add the first one in the pre-L APKs. In the
dynamic module, you can
see a new Gradle plugin being used. You also have to add a dependency on the base module to access
functionality from the base module. Looking at the base module's build.gradle, the only change is to
declare all the dynamic modules. Gradle will make the resources stored there available to them. Now that we've created our
on-demand modules, let’s write the code to download them. In
order to interact with the Play store, we have to use the split
install API, which is part of the Play core library. This is
a library that communicates with the Play store. The Play store
communicates with Play servers. The API is structured using the
same task framework as Google Play services. Installations are handled by the split install manager.
You construct a request with all the modules you wish to download, and it installs the splits
required for the requested modules. For large modules, you'll need to obtain the user's confirmation.
You'll need to do this whenever an app requests more than 10 megabytes. You can listen for state
updates and display the progress to users. Here, we show the download progress bar. An alternative
option is to use
the deferred installation API. These will be installed at a
convenient time for the user, generally when they aren’t using
the device and are on WiFi and charging.
Because of this, we allow you to install up to 100 megabytes without requiring user confirmation. The
API also allows you to manage your on-demand modules: you can uninstall modules that are no
longer required by the app. So, when installing an on-demand module, the app does not need to be
restarted. Code is available immediately, and your resources and assets are available once you
refresh the context object. On Android L and M, the app would need to fully restart, so we include a
SplitCompat library which emulates splits on L and M until the app goes into the background and we
can properly install them. If you are familiar with it, you will set up SplitCompat in a very similar way.
You can use SplitCompatApplication as your default application, or you can simply extend it. And if
none of these options suit you, you can choose to override attachBaseContext in your application and
invoke SplitCompat.install.
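A hedged Kotlin sketch of the pieces just described, using the Play Core split install API; the module names are the hypothetical ones from the recipe example and error handling is omitted:

```kotlin
import android.app.Application
import android.content.Context
import android.util.Log
import com.google.android.play.core.splitcompat.SplitCompat
import com.google.android.play.core.splitinstall.SplitInstallManagerFactory
import com.google.android.play.core.splitinstall.SplitInstallRequest
import com.google.android.play.core.splitinstall.model.SplitInstallSessionStatus

fun requestAddRecipeModule(context: Context) {
    val manager = SplitInstallManagerFactory.create(context)

    // Ask Play to download and install the on-demand module right away.
    val request = SplitInstallRequest.newBuilder()
        .addModule("addrecipe")
        .build()

    // Listen for state updates so a download progress bar can be shown.
    manager.registerListener { state ->
        if (state.status() == SplitInstallSessionStatus.DOWNLOADING) {
            val progress = state.bytesDownloaded().toFloat() / state.totalBytesToDownload()
            Log.d("Splits", "download progress: $progress")
        }
    }

    manager.startInstall(request)

    // Or, for non-urgent modules, defer the install to a convenient time.
    manager.deferredInstall(listOf("extrafilters"))
}

// On Android L and M, make freshly installed splits visible without a full restart.
class RecipeApplication : Application() {
    override fun attachBaseContext(base: Context) {
        super.attachBaseContext(base)
        SplitCompat.install(this)
    }
}
```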
Now, let's talk about versioning. When you release an update to your app, Play will update the base
module and any on-demand modules so the versions of your modules are always in sync. Partners tell us
this is something they really like about
this model. Let’s now talk about the final
step here, helping users update to the latest and greatest
version of your app. You know that Play offers auto update
functionality and many users do have automatic updates turned
on. In some markets, it's not uncommon for users to have it turned on but for their devices to never
meet the requirements; for example, they might not connect to WiFi. I'm happy to share we're
launching a new in-app updates feature. You can call this API to determine first whether there is an
update available and then, if so, you can show a prompt to your users. The immediate flow is designed
for critical use cases, such as user privacy or revenue-affecting bugs. It's a full-screen experience
where the user is expected to wait for the update to be applied. We take care of restarting the app
for you. Some of you have built similar
flows for yourselves. But this is a standardized method that
you can use with very little effort.
Instead of that immediate update, you can also put
together a flexible update which does not have to be applied
straight away. The really cool thing about this API is that you can completely customize the update
flow so that it feels like part of your app. You may choose to nudge users, like Google Chrome is
doing in this example. The download happens in the background so the user can keep using the app.
Once the update is complete, it's up to you and your app to decide how to prompt the user to restart.
Or you can simply wait until the
app goes into the background. Google Chrome is testing this
now and we’re inviting early access partners to start testing
this with us, as well. Talk to your manager if you’re
interested. Let’s take a look at the code
that allows that flexible in-app update to work. First, you can
request an instance of app update manager. And then
request the app update info. This result is going to contain
the update availability status. If an update is available and the update type is allowed, the
returned app update info contains an intent to start the flow. If the update is allowed to start, you
would extract this pending intent and start it. This will
start the download and installation.
You can monitor the state of an update by registering a listener
for status updates. When the download is complete, you can
choose to install it directly or defer installing it. The restart happens when completeUpdate is called.
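A rough Kotlin sketch of that flexible flow with the Play Core in-app updates API (the request code and the decision to apply the update immediately after download are illustrative):

```kotlin
import android.app.Activity
import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.InstallStatus
import com.google.android.play.core.install.model.UpdateAvailability

const val UPDATE_REQUEST_CODE = 1234   // arbitrary request code for the result callback

fun checkForFlexibleUpdate(activity: Activity) {
    val updateManager = AppUpdateManagerFactory.create(activity)

    // Step 1: ask Play whether an update is available and allowed.
    updateManager.appUpdateInfo.addOnSuccessListener { info ->
        if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
            info.isUpdateTypeAllowed(AppUpdateType.FLEXIBLE)
        ) {
            // Step 2: start the Play-provided flow; the download runs in the background.
            updateManager.startUpdateFlowForResult(
                info, AppUpdateType.FLEXIBLE, activity, UPDATE_REQUEST_CODE
            )
        }
    }

    // Step 3: once the download finishes, you would normally prompt the user; here we apply it directly.
    updateManager.registerListener { state ->
        if (state.installStatus() == InstallStatus.DOWNLOADED) {
            updateManager.completeUpdate()   // restarts the app on the new version
        }
    }
}
```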
So to recap this new API, ensuring your users get the
latest update is important. And you can make that happen by
following some of these best-practices up here on the
screen and also by integrating with our in-app updates API. The
API is available for any app and so you can get started with it,
in parallel to switching with the App Bundle.
And, that’s it. We’ve now covered how to make your apps
smaller and create dynamic features and how you can ensure
that your users stay on the latest version of your app.
If you want to chat, you can find us in the office hours. If you want to read more about what we've
talked about, the post at this link is a great place to start. Enjoy the rest of your
day. Thank you. [Applause]
Everyone, the next session, in this room, begins promptly, at
2:50. Thank you.
Good afternoon, everyone. Our program will resume in three
minutes. We remind you, as a courtesy to the presenters, to
please mute all mobile devices. Thank you.
Hi, my name is Kodlee.
And I’m Rasekh. Here to talk to you about what’s new about
app development and Android Auto. We're excited about the automotive space right now: connectivity,
electrification. Cars are turning into full-blown computers on wheels. They have
cameras, screens of all shapes and sizes everywhere.
Android Auto is an effort from Google and our automotive
partners to bring these together and bring a safe experience for
drivers everywhere. Of course, that’s easier said than done.
Many different input types, from touchscreens to touch pads, many
different screen shapes, sizes and resolutions. Today, you can
see that vision at work in any Android Auto-compatible car. Drivers have access to their favorite
apps right from their car's display and developers build their app once without worrying about
different makes and models. Today, we'll talk about two of the most important app categories,
messaging and media.
First is messaging. That’s where
CarExtender came into play. CarExtender allowed a way for
messaging apps to provide details and a way to reply to
conversations to Android Auto. But since Android N, apps could
stylize their notifications with MessagingStyle. It is a huge
step up for CarExtender, as it allows messaging apps to bring
conversation into the notification. Not only does it
provide a nicer UI, but it provides affordances like
applying and liking. Android Auto now fully-supports
the use of MessagingStyle without the need for
CarExtender. This means Android Auto and the assistant allow
group messaging. With MessagingStyle, apps not only gain a richer user experience but also the
benefit of automotive support. So, let's see how Android Auto
interfaces with this, starting on the messaging app side. From
Android Auto’s point of view, messaging apps have three core
functions. Notifying users of messages, marking those messages
as read and replying. Apps can implement reading and replying with services. These services can be
triggered internally with intents or externally with pending intents. Notifying is done via an
Android notification and the messaging information is provided with the MessagingStyle. The
mark-as-read and reply actions are wrapped in pending intents. Note here that the reply action has a
remote input added, which acts as an input field for the reply.
And that’s the messaging app’s architecture. Moving on to the
other side of the notification, we can see how Android Auto
leverages these objects. Android Auto will post an in-car notification and, once it is tapped on,
will read the messages aloud and fire the mark-as-read pending intent. The user's given the choice to
respond and, if taken, a transcription of that response is set in that remote input. The reply
pending intent is then fired.
And that’s the entire Android Auto flow so let’s see how we
can put that into code. First, the app needs to declare support
for Android Auto. To do that, it needs to create a new XML
file linked in the Android manifest. This file says that
it has notifications that Android Auto should take a look
at. Note that for messaging apps that support SMS, this
needs to be added. So, now Android Auto’s taking a look at
our messages, we can build up the messaging style. We can’t
really have a conversation without people so we have to add
the user of the device. We create a new person object.
Person is used to set things like the user’s name, their icon
and unique key. So, we create this device user and we create
the MessagingStyle with it. We can add our conversation
information. So, I’m from Seattle and I love
skiing so I’m setting it to ski group. Because I’m taking
multiple friends, this is a group conversation so the
messaging app needs to set it as such. Note here that
conversation title and whether or not the conversation is a
group can be set independently. This is new in Android P and has
been back-ported in the compat library.
Finally, we can add all the messages in this conversation in
the order they were received. In this case, my friend wants to
coordinate breakfast, so there’s the text, the timestamp and the
sender. With this conversation set up, it’s time to add the
actions. For the reply action, we instantiate an action builder and set the semantic action to semantic action reply.
We must also tell the OS that firing the reply pending intent
won’t show any extra UI. This is especially important in
Android Auto because we don’t want to be distracting users.
The reply action is supplied with that remote input I talked
about earlier. On the mark as read side, it is done the same
way. It is set to the mark-as-read semantic action and we tell the OS that firing that pending intent
won't show extra UI. The mark
as read action does not need a remote input.
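Pulling the three pieces together, a hedged Kotlin sketch of the notification — the people, channel id, icons and pending intents are placeholders you would supply yourself:

```kotlin
import android.app.PendingIntent
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.core.app.Person
import androidx.core.app.RemoteInput

fun postSkiGroupNotification(
    context: Context,
    replyIntent: PendingIntent,      // fires the app's reply service
    markAsReadIntent: PendingIntent  // fires the app's mark-as-read service
) {
    val me = Person.Builder().setName("Me").build()
    val friend = Person.Builder().setName("Kodlee").build()

    val style = NotificationCompat.MessagingStyle(me)
        .setConversationTitle("Ski group")
        .setGroupConversation(true)
        .addMessage("Breakfast before we hit the slopes?", System.currentTimeMillis(), friend)

    val remoteInput = RemoteInput.Builder("key_text_reply").setLabel("Reply").build()

    val replyAction = NotificationCompat.Action.Builder(0 /* no icon */, "Reply", replyIntent)
        .setSemanticAction(NotificationCompat.Action.SEMANTIC_ACTION_REPLY)
        .setShowsUserInterface(false)
        .addRemoteInput(remoteInput)
        .build()

    val markAsReadAction = NotificationCompat.Action.Builder(0, "Mark as read", markAsReadIntent)
        .setSemanticAction(NotificationCompat.Action.SEMANTIC_ACTION_MARK_AS_READ)
        .setShowsUserInterface(false)
        .build()

    val notification = NotificationCompat.Builder(context, "messages_channel")
        .setSmallIcon(android.R.drawable.sym_action_chat)
        .setStyle(style)
        .addAction(replyAction)                 // visible action
        .addInvisibleAction(markAsReadAction)   // hidden on the phone, still read by Android Auto
        .build()

    NotificationManagerCompat.from(context).notify(1, notification)
}
```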
So that’s all three pieces, the notification can now be built.
For reference, here are the three elements we created.
MessagingStyle, which holds our conversation, reply and mark as
read action. To build a notification, some boilerplate
is provided and we set the messaging style. We can add our
actions. Here is where the messaging app has some options.
The reply is a regular, visible action and the mark as read is
added as invisible. One can add both as visible or invisible.
This changes how it shows up in the mobile UI. On Android Auto,
actions are never shown but it can read visible and invisible
actions. The messaging app can post the
notification and now we have planned breakfast on the road
and our ski trip is under way. And now that we’ve coordinated
with everybody, let’s find something to listen to. Getting
drivers access to their content should be front and center. I'm
going to talk about new features to enhance the usability of
media apps. We want to make it more visually pleasing and
enabling search results. Let’s go over the architecture that an
app has when communicating with the Android Auto. The first
thing a media app is a media browser service. It provides a
tree of playable and browsable items. Browsable items are
basically things to organize app content instead of returning a
giant list of playable items. They implement the onLoadChildren method, which builds the tree. Here in our first call,
it would return home, recently played, recommended and play
lists. Since this is running in a car, we recommend that media
apps only provide two levels in the tree to avoid distracting
drivers. Now, once a user has picked
something playable from the browse tree, the media session
service is used to play music and provide metadata and
controls. For example, our media app supports play/pause,
skip forward and skip back. There's also the ability to
provide their own custom actions, maybe something like
30-second skip. Obviously, we want to keep the user away from touching the screen, so we bring in the
Assistant. The user might say something like, hey Google, play my ski jams. The Assistant performs
speech recognition and requests that the app play the query, and music starts playing. We're going to
take it one step
further today. We're giving media apps the ability to implement an additional function, onSearch, and
once the music has started playing from a Google Assistant query, they can provide additional
results. Here it provided the ski trip from this year, as well as last year. This should look pretty
familiar. This is the onSearch method. It takes the query string, an extras bundle and a result
object which the app fills in and sends back to Android Auto. Apps should return an empty list if
they get a query they don't support.
Second, for queries that can't be answered synchronously, detaching the result lets the app avoid
sending anything back right away; apps can do extra work before sending the results to Android Auto.
Finally, when the results are ready, they can call sendResult on the result object and Android Auto
will be notified and show the results on-screen.
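A hedged sketch of that onSearch shape in a MediaBrowserServiceCompat subclass — the service name and catalog lookup are invented:

```kotlin
import android.os.Bundle
import android.support.v4.media.MediaBrowserCompat.MediaItem
import androidx.media.MediaBrowserServiceCompat

class SkiJamsService : MediaBrowserServiceCompat() {

    override fun onGetRoot(
        clientPackageName: String,
        clientUid: Int,
        rootHints: Bundle?
    ): MediaBrowserServiceCompat.BrowserRoot? =
        MediaBrowserServiceCompat.BrowserRoot("root", null)

    override fun onLoadChildren(
        parentId: String,
        result: MediaBrowserServiceCompat.Result<List<MediaItem>>
    ) {
        result.sendResult(emptyList())   // the real browse tree is omitted in this sketch
    }

    override fun onSearch(
        query: String,
        extras: Bundle?,
        result: MediaBrowserServiceCompat.Result<List<MediaItem>>
    ) {
        // Tell the system the answer will arrive later, then search off the main thread.
        result.detach()
        searchCatalogAsync(query) { matches -> result.sendResult(matches) }
    }

    // Invented async lookup; a real app would query its own catalog or backend.
    private fun searchCatalogAsync(query: String, callback: (List<MediaItem>) -> Unit) {
        callback(emptyList())
    }
}
```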
All the code snippets come from the universal music player, an
open source media app published on GitHub. It can be cloned,
compiled and used as a great reference for building your own media app. So, our media app returns a list
of items in the ski jams query. It returns two play lists and an
album. It would be nice if Android Auto could group those items. Fortunately, we're introducing a way to do that.
Here's an example function your media app might use to convert from an internal representation of a
media item into the MediaItem compat object. We can annotate items with a category extra and
Android Auto will group any adjacent items with the same
category. For the two ski trip playlists, we can annotate them as playlists and Android Auto will
group them for you.
We’re also adding some additional annotations on media
items that would be really useful on our trip. I might be
heading out to the mountains with my family, I might worry
about a song coming up with explicit content. We are able
to say this has explicit content and Android Auto can show that
in the UI. I might not have great bandwidth, I’d love to
know if they’ve been downloaded or maybe I don’t want to burn my
data on music that I’m playing. We can also annotate with
whether or not media items have been downloaded and are already
on the device. Great. Looks like the ski trip
2018 is downloaded, doesn’t have any explicit content, great
choice for my trip out to the mountains.
There's one more function that needs updating: onGetRoot in the media browser service, which is
called when a media app is first connected to by Android Auto. In order to support search, you'll
need to add a couple of extras to let Android Auto know you support those features. As I mentioned,
we're introducing content styling and
Android Auto will be interpreting it in a much more
visually pleasing way. Folders will be interpreted as lists. But for playable items, things like
songs or albums or playlists, we're going to be showing them now as grids. Most of these items have
richer content that users can identify by seeing much more easily than by reading, which is much
safer when you're in the car. There are times when a list is
better than a grid. For example, in a podcast app, each
of the individual podcasts would have individual art that is much
more visually representative while the episodes, instead,
they’d have all the same art but different episode titles and
lengths and status and it would be much better to show them as
lists. In the onGetRoot function, apps can say, I prefer my browsable items to be grids and my
playable items to be lists, or vice versa. They have full control over how we're showing the items. I
already mentioned the
universal media player. I just want to reiterate, it’s a great,
comprehensive media app. It gives you an implementation of a media app that actually plays music and it works with Android Auto
as well as Wear and Android TV. I encourage you to check out the
Android media controller, another open source app on GitHub. It will connect to your app's session
and show you information in a clear, semantic format. If you're using whitelisting, it would probably
be a good idea to add it to the whitelist. So to sum up, we've shown code
samples for MessagingStyle, actions, attaching new extras
for media items metadata and declaring support for content
browse and search. So, great. We look forward to
seeing all of your messaging and media apps in the car. Rasekh
and I will be available tomorrow morning at office hours to
answer any questions you have about Android Auto.
Thank you so much for watching. [Applause]
Hi, everyone. Thank you for coming and watching my session,
which is going to be Android on large screens. Just waiting for
it to pop up. Cool. So as we all know,
Android has evolved from just being a phone platform. It’s
available on watches and cars, as we just heard. Desktop,
phones and specifically the mobile space has already changed
drastically. Let’s see what your mobile is
running on today. Starting on the phone, it’s what
we all develop for mainly. A couple keys about the platform,
portrait-first, touch-first and full-screen first. A lot of
apps lock rotation. Some users use multi-window or styles and
things like that. Majority of your users are using it with
regular touchscreens and in full screen. Moving on to tablets.
Both orientations are first-class citizens. If you lock to portrait, users can still use your app,
just in portrait on a landscape screen. Larger screens do bring challenges, design-wise,
and the ability to do different things and take more advantage
of the real estate and allow your users to do things faster.
The different medium brings a different focus on what apps are
going to be used for, content apps or media consumption apps,
productivity apps, things that can really take advantage of the
larger screen. And then moving on to kind of
the desktop platforms. You know, we have Chromeless, we
have OEMs. Also, Android has now
brought the ability to take advantage of having external
displays. Even if it’s running on a phone, it could be
displayed to an external monitor. This is where the
biggest difference comes in where it ‘s landscape first, all
these environments have some sort of window resizes and you
have new first-class input methods such as keyboard and
mouse and track pads that shift with the device or will be
connected. So, what’s a focus on when
you’re thinking about how to bring your apps to all these
platforms and have a good user experience? Number one is
design. Again, if you’ve really been focusing on phones, most of
your designs are very portrait-based and
smaller-screen based. Window management is probably the
biggest place where we see issues with partners apps,
dealing with resizing, multi-window consideration or
problems we’ve never had to focus on before.
We’re going to talk about the tooling available from Google to
make sure you’re able to develop for these platforms and bring it
into your actual cycle. So, talking about design, raise
your hands if you have layouts for large screens or lablets at
all? A lot more people than I expected. Cool. So, you know,
biggest thing is just thinking about largest screens again.
For the last couple of years, we’ve seen a lot more apps come
out that might lock to portrait, which makes sense. With a more
— more and more growing number of platforms that are running
your APK, but showing a different form factor or
platform, it’s time to start thinking again about how to
bring the best experience to those areas.
A really bad example of design is Google Play Music, which is
always great. A couple things about this that are not great,
super-stretched layout, tons of space that could be used for
descriptions. The biggest key here, though, is there’s no line
dividers, which on a phone is fine because see what options
menu you’re clicking on. When you take it to a larger
landscape layout, it’s hard to follow the lines and see what
item you’re pressing the options menu for.
An external partner is One Passward. It’s your standard
list of items you’ll drill down into. When you move to a larger
screen, they really take advantage of the real estate
going to almost like a three-panel layout, allowing the
user to get whatever content they need in a lot less clicks
and a lot faster. Also, being able to showcase more
information at the same time. So, kind of going back to the
same point is, again, building layouts for both orientations
and there’s a couple big keys for this. Again, not all
platforms are portrait-first. A lot of the desktop environments,
if you’re building for portrait, you’re going to have a pretty
bad user experience when I want to use your app in full screen
or the top-half of the screen. On top of that, resizing
capabilities allow the user to make your app whatever size
orientation or screen ratio that they really want to. So, you
really want to let the user really decide how they want to
use your app. Going back to kind of mainly the
desktop platforms is designing for mediums other than touch.
Take into consideration how your app works with a mouse or
stylus. The UX patterns are different for touch than
non-touch. Things like right-clicking are different
than what we’re used to with long-pressing. Whereas when you
right-click on a desktop, you’re usually expecting a pop-up
context menu. So, taking the time to really think about how
your app works for the different UX considerations.
The other big one we noticed is hover actions. On the web or
desktop environments with the mouse, you expect some type of
feedback when you move your mouse over an action item to let
you know there is something you can do there, whether it’s
clickable or dragbable. There’s some feedback. Myself, even if
I use the app on a phone, extensively, Google Drive is a
good example of this, Android on Chrome OS, I’ll miss actions
that I can actually do because just navtly on this platform,
I’m expecting hover actions. I’m going to go over two simple
APIs. There are many APIs that help handle a lot of these input
methods and things such as mouse scrolling and things and there
will be resources when the slides get posted.
Right-clicking, which is one of the feature blocks, is super
easy. You make sure you’re exes posing whatever behavior you
have on long-press. Again, also hopefully considering any type
of UX changes you need to make. And setting a hover listening to
watch for the user’s pointer hovering over the item and out.
Again, this is really the biggest one we see that will
lead to misfunctionality or really give you the desktop
native feel or even kind of the large-screen native feel.
In terms of hovering, most of our native components do handle
this. The contrast and change may not be enough for you or
your users or especially accessibility things like that.
At the end of the day, though, you know how your users use the
app and you know the product so, you know, take some time and use
your app on different platforms and think about how you’d expect
the app to behave. It might not need to focus on mouse input or
keyboard input. Where a productivity app is going to
take advantage of the screen real estate and find a way for
users to do what they want to. Talking about window management.
This is the biggest area we see challenges. I love rotation
because I see so many apps that lock to portrait, which is
completely understandbable but it really shows where the
experience falls. If you do lock portrait on Chrome OS, this
is what your app will look like. You’ll get these super black
bars on the side and it’s wasted screen real estate. It gives a
non-native [no audio]. Now, why do we — to run away
from handleing configuration changes because they’ve always
been rather difficult to deal with. Configuration changes are
more important than ever. Resizing and multi-window brings
a lot of new configuration change paradigms and challenges.
Chrome OS is probomy one of the most complex of the resizing
strategies because it does trigger configuration changes
pretty frequently. I know some of the other desktop platforms
don’t cause as many configuration changes. On top
of that, with bigger screens, multi-window is going to be used much more frequently.
This is a quick thing about how resizing on Chrome OS works.
Anytime one of those labels changes, it’s going through a
complete destroy and rebuild process. So you can imagine
that this can get triggered substantially faster and substantially more often than on
a phone. Jetpack helps with this. The
things like ViewModel and being able to build your own
components allows you to take your business logic in your
activity, anything you’re doing to save state so your activity
destroy and rebuild process is really quick.
Your app will not always be in focus. Again, this isn’t new.
With Android N, we brought multi-window. I know most of us
that work with a large monitor have multiple things up at once
and you need to make sure that your content is visible and
playing. So make sure that you’re still displaying messaging or your content
continues to play. Take advantage of the features.
Larger screens and external monitors bring new
possibilities. One of the things we’ve seen some of the
apps do, that really kind of bring a better experience, is
allowing — email compose windows or new documents being
shown in different tasks so the user can see their email, plus
this new email they’re creating in a different window.
Something that we released quite a bit ago, but is now being a
lot more is drag and drop. Again, these desktop platforms
specifically, users expect drag and drop to be a thing. So,
think about if drag and drop kind of makes sense in your app
and how you can kind of bring it to these platforms.
And then not really window management, but different
features that are kind of new to these ecosystems is different
info capabilities and specifically, like, increased
stylus usage. The biggest thing that we’ve seen that users have
loved is the ability to have keyboard shortcuts and keyboard
navigation so if there are things they use often, they are
able to do things faster, get in and out of the app and get it
done as quick as possible. So, this is great and all but
how do we actually build for this? Moving on to tooling, you
know, it would be awful if I came up here and talked about,
you should do all these things yet there are no tools to make
this possible. Most of the tooling devices that we do —
tooling examples we do have are around Chrome OS. A, it’s our
platform and, B, it’s the platform we have. Most devices
ship with a keyboard intrack. Some don’t have a touchscreen.
So, it’s kind of the best all in one platform to test on. We’re
working on bringing some integration around larger
screens in Chrome OS. Lint additions are coming soon. It’s
my job, once this event’s over, to finish this. We’re look ing
at more things we can bring to the ide. It’s great to have
devices you can test on, but we want to be able to give quicker
feedback and things to look for while you’re actually developing
in the ide. So if you have ideas, please come let me know.
There’s a Chrome OS emulator that’s currently in preview. If
you don’t have devices, you can download the emulator and start
to see where your app falls apart. Things that don’t work.
Things that crash. You know, et cetera.
One of the biggest complaints for developer for Chrome OS and
testing has been having to use ADB over WiFi and development
cycle’s been crappy. On the HP Chromebook and Pixelbook, you
can bring the devices into the same development cycle you do
with phones. We hope to bring more and more — more and more
devices to this feature. But the easiest way, really, is
just to run Android Studio on Chrome OS. This is available on
the Pixelbook on preview and we’re hoping to bring it to more
devices thin future. This is on the Android Chrome OS table. To
be able to build the app and deploy directly to the device
makes the whole development cycle substantially
easier and faster. There’s more and more platforms
that are running APKs, whether they’re different flavors or
mobile APK. So make sure your app is, too.
Thank you. Again, please ask any questions, any suggestions
that you have for tooling, any things you’ve ran into when
trying to develop for larger screen platforms, I would love
to hear about so that we can tackle this. Thank you. [Applause]
All right. So, I just want to let everyone know. It is
currently snack time. So, we have snacks [no audio]
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. . . . . . . .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . All you have to do is use the
right plugin. You have the base function and then you apply the
application plugin, which you have used for years. You can
apply the dynamic feature. Now, we have to declare a special
dependency from the base module to all of the different feature
and I will explain to you later why this is necessary.
Obviously, you have a dependency from the module to the base, but
you need to make dependency from the base to the futures. So, as
an example here, we’ve got the flow. You have a base module
and three modules. Nothing too specific. It depends on the
base. Now, by adding the DSL
declaration, we’re declaring that the module A, B And c are
future models. We have this dependency going both ways.
So, when we go through the build flow, the first thing that we do
is that we build normally all the files into classes. That’s
normal. That’s the normal process that we go through when
we build each of these different modules. The thing that really
starts to differ is we are publishing back all of this job
files back to the main module and that’s absolutely module
because you need to have a global view of the application.
You can’t do a shrinking on the future module because you will not know how and what the class
is. So, once you have those
published to the basis, you can feed it to the shrinker and it
will create equivalent text files. There is a one-to-one
method. But, it’s not exactly a 1:1. Let’s say you had classes
that were in your base module that you thought would be shared
by different features but used used by module B, but the
splitter can move that instead of keeping them inside the main
dexter jar of the main module. Otherwise, you can more or less
imagine there’s a 1:1. You have all these text files still
residing in the base module, now they’re going to flow back into
the originated module. We do that
so we can have — as you can see, at the beginning, we had compilation, it was
paralyzed. If you have a powerful machine, they run in
parallel and it becomes a bottleneck because it has to
wait for all the modules to be ready to do the shrinking and
then we can move back the processing to each of the same
modules. So, you can see that adding
modularity to your application might be a good self-practice.
We’re going to get much faster of a time. We move as much as
we can and we can run all of those in parallel as much as
possible. Once this is done in parallel —
one this is pushed back to each feature module, we can
eventually create all the necessarily APKs or artifacts.
All of those, again, are in parallel. The shrinker is
usually not a problem because — not a problem because usually
people do not use shrinker during the builds. But we
really try to limit this bottlenecks as much as we can
and how we have enhanced the processing with this type of
improvements. Another thing that we did was
D8, that’s a new JVM bind codes translator. We will essentially
remove the old one, which is DX. If you are using DX, you need to
start panicking because we are going to remove it. So if you
are using DX because you issues with D8, you must follow through
to figure what’s wrong. Otherwise you’re going to get
stuck in the past. R8 is going to follow the path. It’s
available now to try. We are very happy with the results we
are getting so far, so it’s very stable. You should definitely
try it. Eventually it’ll become stable and you can guess what’s
going to happen to the old code shrinker, eventually we will
remove it and replace it with R8. There is a session tomorrow
that will give you more technical details about how
these two libraries are implemented.
Okay. Let’s talk a little bit about what’s next in 3.3. So,
the first thing I want to talk about is tasks. The concept of
task — lazy task is — you should really understand it as a
task that will only get initialized if it’s on thee
excuse. You’ve got two variant. If you debug, there’s no need to
initialize the task. So, what we used to do, unfortunately —
so this was done before in this particular example where we used
to create all the tasks. We still have to do that. But add
the creation time, we’re also configuring them. So we’re configuring all of them.
With task, we have the tool to delay of the initialization
until it knows that those tangs with executed. What it means is
that it’s going to be up-to-date checked. Meaning, it’s going to
look if the task needs to run or not by running its up-to-date
checks. Maybe executed if it’s out of date.
So, how to you do lazy tasks? It’s basically similar to the
old style, but you can see that now the configuration code,
which is in blue here, will only be called if the task is on the
execution task graph. Now we used to pay a lot of attention
to all of our configuration tasks, to make this
configuration block as lean as possible because they were
always executed. So we tried to keep it as lean as possible with
no access to disk, no access to network, for instance.
Now, it’s probably a little bit more okay to do more work in
those configuration if you really have to, but you have to
remember two things. First, if you do real work, it will still
impact your build time because the configuration time is still
a thread event. All of the configuration will happen the
one after the other. The more you do, the longer it will take.
Configuration time happens all the time. It’s very anowing.
The second thing is to not look up tasks anymore. So when you
have customization — a lot of people are doing project, get
task by name. Stuff like that. This will actually look up the
task and and initialize it. It will do that and all the
dependancies as well, all the tasks and output so you
basically have a good chance of initializing it. Instead of
doing that, you should get a provider and get a lazy object
of the task itself and use that to register your dependency.
Now, what you can also do, if you want to have access to the
output of a task, is use that provider and map the output
using the start of API to get provider so it’s basically a
promise on the folder or a promise of a regular file that
the task that executes will give you it later. You can get an
object, that object does not initial ize the task itself.
Getting this provider does not initialize the task, does not
force it to run and it’s really lazy. It contains dependency
and information which means you don’t have to register yourself
as a dependence of the task. Holding the object will allow
you to not only get the object, but also register your
dependency. Eventually, you can do a get and that will get you
the object you can use. So, here when it is getting
configured, all the dependancy will get configured and so on
and so forth. We are retrofitting all of our tasks
using providers and stuff like that. If you use customization
a lot, you need to look into these APIs in 3.3.
And now Chris will talk about other improvements we have made.
Yes. Another optimization is classes. Previously, for every
single dependency and every subject, and alongside your
actual classes. It just generates a jar containing the
class directly. And especially for those with many libraries
and lots of dependencys, this avoided a lot of compilation.
For large multi-modal builds, we saw a large percentage of
speed-ups. So the class system has been
rewritten. Rather than relying on the ones on disk. It
actually speeds up indexing in Android Studio, as well. Even
for older Gradle plugins. This does, however, break some Gradle
plugins, including Butter Knife. It is in library device in 3.3.
Gradle now has supports for annotation processing.
We’re working to support the most popular annotation
processes, including dagger, Room, Gride.
And to allow to be an isolated annotation processor, speeding
it up even more. We also want to help you understand that
build time impact of annotation processes. We want to report to
you, like, which — time spent in and how much did they cost?
On that theme, there are several other areas we want to give you
better insight into your build, easily and simply. If you’re
using annotation processes, it’s really critical you have that
insight because it’s often a bottleneck for a lot of builds
we see. When a task causes you trouble, it’s great to know the
Gradle plugin or the script and what triggered it to run. And
we’re working to make finding that out more easily.
We also want to help you find these types of issues, even if
you’re not actively looking for them. Longer-term, we want Android Studio to flag
if there is an issue and point you toward the greater build
scan. Okay. So, on from these better
insights to complete rewrites. Android resource namespacing.
Resource namespacing is a completely new pipeline for
compiling and linking Android resources. We’re doing this for
two reasons. Firstly, speed up the build and make it easier to
understand and we want better support dynamic features.
Looking at how thingessess work now, there are two namespaces.
That means if you have two libraries, the same name and the
same type, they have to pick one and it’s not always clear what
the right thing to do is. It makes splitting your APK much
more difficult. Where those resources come from and where
they go is really important. Each library is compiled and
linked separately and then linked together in the final
APK. When you’re using resources from both XML and
Java, you need to be explicit about where they came from. If
they’re in the library that defined it, the next XML, you
need to use the name space. This also means they no longer
override each other just because they have the same name. When
you need overrides, we’re working on a new way to do that
explicitly. ARs will be backwards compatible
so you can have all the benefits of namespacing.
Okay. Passing on to Izabela to tell you a little bit more about
the details. Thank you. Now that we know
what namespacing is, you might find yourself asking your
questions like, where is that resource coming from or what is
the proper syntax? Or, how do I even namespace my dependancies?
The answer to all these questions is, we’ll fix it for
you. [Laughter]
So, yes, so the solution is automatic namespacing or auto
namespacing for sure. First is the oughtmotic rewriting tool in
the ideal. And the second one are the transforms and tasks in
the Gradle plugin that will rewrite your remote
dependancies, under the hood, no action required.
Here is an example of a dependency graph. The blue
nodes are local modules that can be rewritten using the ide tool.
The three orange nodes are not namespaced, classic remote
libraries that will be automatically rewritten. All
the resources, classes, the manifests will be rewritten to
use the full resource namespace. And finally, the green nodes are
— they represent dependancies that are already namespaced so
they will not be modified at all.
Let’s see what types of changes we can see after this migration
takes place. In the bytecode, you can see that now there will
be different classes present. If a resource was defined in a
different module or remote library, you’ll see the package
of the class change to match that package. In the On-Device — XML resources,
you’ll see it at the @ symbol. And finally, another way
resources can be referenced is in attributes, for example, in
layouts. Here, it will be modified to point and the
attribute will use this new namespace, as well. Since we’re
on the topic of resources, let’s talk about visibility. Probably
many of you created an Android library with a lot of effort
declaring which resources are public and published this with a
.txt only for consumers to ignore it. This currently is
only a Lint warning. This code compiles and runs fine at
runtime, completely ignoring the intended visibility of the
resource. I’m sure many of you actually ignored these warnings,
as well. We want to introduce visibility.
So, these violations will now become build errors instead so
we’ll catch them early and three levels of visibility. One,
public. This means these resources will be present both
in the public classes and the private classes for that local
module. Private resources, only present
in private classes and last, private XML-only resources. They
will not be present any classes at all. Instead, you can only
reference them from other XML files within that module.
This will result in smaller classes, both compile and
runtime. And, also, resource similar to the class or method in Kotlin.
Thank you. [Applause]
All right. So, as you can see, we’re working on a lot of
things and many of which we hope would help build speed. Going
back to my point around awareness and tooling, I wanted
to share some things you can do today to understand your build
better and improve its performances. So, the first
things is to upgrade. My first graph, we do improve with every
release and so if you’re really — care about your build speed,
the best thing to do is upgrade to the latest beta, stable,
Canary, whatever you feel comfortable with.
There are some tools you can start leverages to
better-understand your builds. One that I really like is a free
tool from Gradle. It implodes some of your data into Gradle
servers and provides dashboards. If you’re trying to understand
what’s going on with your build, why is it slow, this is a very,
very useful resource to use. If sharing some of your build data with Gradle is something
you don’t feel comfortable with, there’s–profile. Definitely
not as rich, but it provides some information and it remains
local. You can combine it with–info which gives you
information on a given task. Another tip is file bug. We
try to test all of our releases on any environment and use cases
we can, but there’s aways different configurations out
there. Please file bugs when you encounter issues. Please
include a scan with it. It really helps us go deeper into
the issue and understand what’s going on.
Last, but not least, if you’re writing plugins, whether it’s
for you to publish them or customize a little bit, your
build file, here’s a set of tips to follow. So, first, as Jerome
eluded to, it is to set up tasks, not really do anything
else. Remember, if you need to compute things for up-to-date
checks, you can always use provider and suppliers to always
run those checks if your task is part of the active graph.
For example, in configuration, you should not do things like
query get, read a file, search for a connected device or
compute anything. Configuration is really just a place to set up
tasks. And it’s a place to set up all tasks. Because build
doesn’t really know what pass is going to take in events so try
to set up all of your tasks in the configuration step.
Regarding task s, make sure that each task’s declared all input
and outputs, even if it’s non-file one and make sure
they’re incremental and cacheable.
If your were working with a complex step, try to split it
into multiple tasks. This helps with the incrementtality because
some tasks could be up-to-date. If you have multiple tasks, they
could run in parallel. So it helps with incrementality and
parallelism. This third best-practice sounds
obviously, but I still want to put it out there. Make sure
they don’t write into or delete any other task output.
When you write tasks, use Java and Kotlin. And put them in a
plugin/folder. And last, but not least, as
you’ve heard from Jerome, leverage a new worker API — no,
we didn’t talk about this. I did talk about stuff though.
That really helps with problems. If you didn’t get
clear pictures of all of what I just said, don’t worry. We’re
working on a full write-up covering everything that we
talked about around speed, around the findings that I
shared, the tooling and the best practices and more, so, stay
tuned. To recap, here’s the takeaway
from this session. First, we shared some finding on speed and
that basically a little bit outpaced by some of the
features, the plugins and all the other things but we’re
taking this very seriously so we’re doubling-down or efforts
on better tooling and attribution and continuing to
improve performances. We share the new features in 3.2
and I definitely encourage you to upgrade to 3.2, if you
haven’t already. And we mentioned some of the
things that were working for 3.3 and beyond. 3.3 beta is
available. So, I encourage you to try it. It has some of the
things we mentioned, like lazy tasks and others.
And last, I talked about some of the tools that you can use — —
profile. That’s it for today. I want to
make a bad joke, but unlike build, we finished earlier than
expected. We’re going to be out there with — in the speaker Q&A
or at the studio booths today and tomorrow, if you have any
questions. So, thank you. [Applause]
[Applause] [Applause]
[Applause] [Applause]
[Applause] [Applause]
[Applause] [Applause]
[Applause] [Applause]
Everyone, the next session, in this room, will begin at 4:50.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Thank you, everyone. Thank you, everyone.
Welcome back, everyone. As a point of information, this
session will get under way in two minutes. And we ask you, as
a courtesy to the presenters, to take a moment to silent your
phones and digital devices. We thank everyone. Come on in and
be seated. Our program will get under way in two minutes. Hello. Good afternoon. I’m
super excited to be here and today, we’re going to be talking
about ConstraintLayout and how to use the visual editor to
effectively make constraints in Android Studio.
I’m Chris, UX designer and I’ll be talking about some of
the new features we’ve added. And I’m Sean McQuillan,
developer advocate for Android. After we talk about the basics in 1.0-1.1, we’ll talk about
some of the constraints. When I add a view to a
ConstraintLayout in the editor, I’ll have one in the top, left,
right and bottom. If I go to the view inspector, I’m going to
add a constraints to view. In ConstraintLayout, before I added
this constraints, it will lay it out somewhere in the screen.
It’s 30DP and I have fully-constrained this view and
now ConstraintLayout knows how to solve where this goes. We’ll
start with a simple example how to build up more complex examples of constraints.
I would change this to be 50DP or add a constraint to the
side. If I add another to the bottom,
I’m going to center this view on the entire ConstraintLayout and
this trick is going to work everywhere in ConstraintLayout.
This is how you center a view inside or on top of another
view. So, let’s take a look at one
more thing I can do. So, if I look at the slider that’s over
on the left. It starts at 50 when I’ve constrained the top
and bottom and I can change that. I can change it up to 25.
Instead of centering, it will introduce the bias to the
layout. It’s going to lay that out 25% along the way, 75% along
the way and a horizontal slider, as well.
So, let’s dive in further into this view inspector and take a
look at what’s available in the special editor. There’s this
triple chev going on. John said it’s because it’s wrapped
content and it’s trying to pull in as hard as it can from both
sides. I can change it to fixed width. That’s 100DP. So match constraints is
a new feature, a new way to lay out views. Take out all of the
views available. I’m constrained off the right
and the left so it’s the same thing as full parent. This is
how you would take up the whole screen. You wouldn’t want to
use full parent in ConstraintLayout. I get this
really interesting icon here. I thought this was a heartbeat,
for the longest time. I asked John about this. That’s
actually a spring and on some versions of Android Studio, you
get two springs and on others, you get one.
So, let’s switch this back over to wrap content and add another
view so we can start building more complex layouts. I’m going
to add an image view and constrain it so it’s 20DP. It’s going to move the
image view so it’s 20DP. I’m going to add another constraint
and the image view is going to center itself.
We can do this on another view, as well.
Now, I want to change the width of this view from wrap content
to match constraints. And this time, instead of match
constraints, it’s going to take the width of this text view,
whatever size this text view, it will try to match that
constraint and this new control shows up. This little line, it
creates a little triangle. I enable an aspect ratio. This is
a really nice feature, if you want to display an image with an
aspect ratio. Images, when we get them from designers, they
want 1-by-1. You’re laughing because you’re a designer.
It’s my fault. Sorry. [Laughter]
We can set up exactly what our designers want for and resize
this view as the text changes while maintaining this aspect
ratio. If I set it to 3:1, I set one aspect ratio that says,
I’d like to this to be one constraint, 3:1. And another
saying this can be no wider than this text box. ConstraintLayout
has to solve this and it will use the contraints from the text
box. I can free up another dimension.
So if I change the height to be match constraints, it’s capable
of resizing both dimensions so now it can set the 3:1 aspect
ratio. So, that’s all we can do with just a single, you know
element or two elements. Let’s add more of a complex view and
talk about how to lay things out with more features with
ConstraintLayout. My designer sent me this lovely
email form. This is talk about ConstraintLayout and not Login
forms. Don’t copy this, there’s many problems. There’s a couple
things going on. The labels are right-aligned to some sort of
invisible line. The edit text is left-aligned and the Login
and new account button are hanging off and there appears to
be a line. And then at the same time, email
and password are vertically centered on the screen. How are
we going to lay out those, you know, text views? We have the
email text and the email edit text. We could align the top of
the text view to the top. That would be incorrect. If we look
at font metrics, we have this baseline at the bottom. In
English and most languages, almost everything sits on and
the ascender line and that dashed line is the descends descender line.
This creates a single line of text for our eyes and allows us
to read it as a cohairants unit so we want to do that in
ConstraintLayout. I’m going to get this control. It looks like
this, I’ve enlarged it substantially. I get my
favorite control. It blinks in the editor, I call it the
green-glowing orb of baseline. We can go to the edit text and
drag from one baseline to the other and create a constraint
saying these text views should have the same baseline. And
we’ll do that for all the other text views on the screen here in
order to set up all the baseline alignments.
When you line up text next to text, you almost always want to
use the baseline. That’s the correct way to do that in
ConstraintLayout. Let’s put that Login button on the screen.
It has to be constrained on the left edit text. How to do this
centering, the email and password is vertically centered
on the screen. How am I going to do that? We put a constraint
on both side of the screen and centers. I’ll put it from email
to the top of the screen. From password from the bottom of the
screen and so far, this makes sense.
Now, I guess I’m going to have to add a constraint from
password to email. So, I’m going to do that and this is
going to center password between email and the bottom of the
screen. Let’s pull email back down with another constraint and
I’ll solve this problem. This introduces a chain.
Now of course, setting up all the constraints is tedious. You
can go into right-click on center and choose vertically.
When I have email and password selected, it’s going to set up
all of the constraints that I just talked about. So, inside
of a chain, there’s actually three different ways it can get
laid out — four, technically. We have spread, which means
evenly distribute everything. Spread inside, which is
basically the same thing except the first and last elements get
pushed to the side. Then we have pack. We’re going to use a
packed chain to center these views together. There’s one
more thing we need to do. We need to put this invisible line
in the middle of the screen. So to do that, I’m going to go to
helpers and add a vertical guideline. You can think of it
as a new edge of the screen. I have one on the left and now I
put an edge of the screen in the middle of the screen I can use
as an anchor for constraints. I take the text views and create
constraints from those. To kind of visualize what this is doing
underneath, if I move the guideline, it’s actually going
to move the entire layout now. So, let’s move that back and
then let’s get another design because it turned out that
design was not performing very well. After many user studies,
we’ve discovered the solution is left-aligning the labels. So,
let’s try to do that. Well, I did it and I translated it to
German and this is what happened. This is not great.
So, what happened here? So, it turns out, if I lay this out
similar to the way I just did, so, password’s the longest field
in these labels so if I set up a constraint and another
constraint from edit text down to the password edit text, this
is going to work great in English. When I translate it to
German, it is no longer correct. So what I’m going to need is
something that’s dynamic, that’s based on all these things. Kind
of like a view group. Basically, I might want a linear
layout. I’m in ConstraintLayout, so how do I do
that in ConstraintLayout? There’s another helper and we’re
going to use that now. If we use add vertical barrier, this
allows you to add a barrier to the screen. It is like a view
group. It’s a grouping code. We can open up the componentry
and add it. It’s a view that’s added to the screen. It’s
positioned on one side or the other of all of the views that
are inside of it. By default, it’s on the left. If I open the attributes pane, I can set it
up to the end. I will set up the constraints
and translate my English into German. So, that’s really it.
That’s all of the features in ConstraintLayout 1.0 and 1.1.
So Chris is going to talk about more tricks that can be used to
use the visual editor to build constraints.
Cool. Thanks, Sean. So, with constraint layout, we’ve
introduced many consents. We started with constraintss,
margins and chains. We’ve introduced guidelines and
barriers and groups and many more helpers to come and there’s
motion layout. One thing that we’ve heard consistently is that
as we’ve added more concepts, it’s becoming increasingly
harder to manage all of these with ConstraintLayout.
And so, what I’m here to tell you, today, is that we’ve
actually been improving this in the visual editor in Android
Studio. The four areas are creating constraints, and new
view and tricks on zooming and panning and then of course,
using sample data, which we introduced back in 3.2.
So creating constraints, in this case, we have two components.
An image view and a text view. If we want to center the image
view, we put one constraint on the top and one of there bottom.
Let’s take that lovely Login form from below. We have
labels, we have inputs, some buttons. But from the
constraint point of view, we have this guideline in the
middle, the Login button constrained to the bottom and
right of the input. We have the inputs constrained to the
guideline. Because they are all pretty close to each other, when
you’re dragging these around, it can be pretty challening to get
it right. Even when I made that slide, I
hid half the constraints because it was too busy.
Yeah, it was very simplified, actually.
We’ve added the ability to add constraints directly with the
context menu. If you have components that are really close
to each other, this makes it a lot more precise and direct to
set those constraints. So, in this case — and this is
available in 3.3 beta, as well, so you can try it out today.
And so, in this case, we have this lovely cat picture. You
can just simply constrain it to the parent. So, what does it
look like if you have multiple components? So, in this case,
we have these two text views that are really close to each
other and so I’m not sure if people have tried to create it,
it becomes painful when you’re going from the bottom of one to
the top of the other. You can keep the two selected, and then
when you open up the context menu, there’s this constraint
menu and you can see the two elements you want to use are
there and then you can easily cascade to the right constraint
that you want. In this case, we’re only showing
the start and end and that’s because, in this case, the top
and bottom constraints have already been set so we don’t
show them. So, here, we want to constrain
the location icon to this vertical guideline on the left.
So, if you use the drag and drop method, you get all these — Sean’s favorite green flashing
stuff. If you’re trying to target some of these smaller
things like the following text or numbers, it gets hard when
you’re trying to do drag and drop. Again, this makes a lot
more direct so you can select the guideline and use the
context menu. If you really don’t want to
select these things, you can use the component tree. This
becomes useful. It does the exact same thing.
But if you do like drag and drop, you know, you can still do
it and one thing that we’ve tried to make easier is actually
when you drag it and so in this case, we have this new gesture
which, pending, is called drag to center. As you drag, you now
see all these little targets and so instead of trying to actually
target those specific green dots, you can simply drag to the
middle of the thing you want to constrain to.
In this case, if I drag from the Mountain View text view, I can
drag to the cat picture and get a pop-up menu that shows me the
two constraints I can set. Because we’re going from the
left of the mountain view text view to the cat picture, the two
constraints are the left and right.
And we actually have this, as well, which comes in handy when
you have overlapping views and so this one’s pretty simple but
sometimes you have views you want to hide and show at runtime
and so all you have to do is drag to wherever — to the
target and what we’ll do is actually figure out which fews
are under the pixel and show you a context menu accordingly.
So, if we move on to view options, so, the design surface
has always had view options to take advantage of when you’re
working with your layouts. The two are show all constraints and
live rendering. So, if we go back to our Login form, we’re
going to reuse this a lot. [Laughter]
You know, the constraints are set here. But the thing is,
when you’re trying to — let’s say you’re new to this layout
and you’re trying to edit constraints on one of these
controls, there is a lot going on. And this is simplified
compared to the normal design surface. And so what we’ve 3.3 is added this option to show
all constraints but it’s turned off by default. What we’ll do
is we only show the constraints on the actively-selected
component. It makes it easier to work with the component
you’re working with. Of course, you can easily turn
this back on if you do want to see all the constraints at the
same time. And so this is kind of showing
you a side-by-side. On the left, we have it turned off. On
the right, we have it turned on. Especially in the design surface
or design mode, it cleans it up a lot because you don’t have
arrows and margin, especially for the 322 and following and
the 20 followers. Blueprint mode is the same thing. Even
though blueprint mode is heavily simplified, it still gets a
little hairy to look at. And so, we think this is a good
option, as well, here The other view option we have is live
rendering, we’ve done live rendering for quite some time.
Let me go back. Can I go back? Oh. And so, it’s on by default
but depending on the specs of your machine, it can be slow.
You might make a mistake. Often times when I’ve tried to use it,
I’ve tried to create a constraint and the button will
move after it and that causes me to make more mistakes. And so
if you turn it off, it’s much faster as you drag and move
things around. You can still see the bounding boxes and so
you’ll know where things end up. The only downside is it doesn’t
render as you drag Alternatively, you can use
blueprint mode. Here, we don’t do any live renderings. This is
the best way to work with ConstraintLayout because you can
focus on the constraints So to set these options, they’re in
the top-left corner and if you want to switch, that is using
the blue layers icon, as well. Zooming and panning. So, you’ve
actually been able to zoom and pan in the lab editor and it
comes really in handy when you’re dealing with
ConstraintLayout when things are really small or close to each
other or overlapping. What we’ve done, in 3.3, is changed
the keyboard shortcuts to match more from Photo Shop and Sketch.
You can use command or control and then the equal [no audio]. So, it’s command and control and
then command and control with the mouse and pinch in the
opposite direction. And then zoom to fit, so if you’re zoomed
in and you want to get back to that layout, you can use command
and control, plus zero. And so then, if you’re zoomed in
and you don’t actually want to zoom out, but you want to pan
around, you can actually do so by holding space and using the
mouse to click and drag. This is kind of a familiar gesture if
you’ve used Photo Shop or other design tools.
And so the last tip is using sample data and so with sample
data, in ConstraintLayout, it’s easier to preview how your
layouts will respond to different content types at
runtime and so we introduced sample data helpers to make it
easier to work within the design surface. Specifically for image
views, text views and recycler views.
And so with the image view, we have two sample sets. We have
avatars and scenic backgrounds. And so if you — and if you want
to add your own images to the sample data, you can do so. You
just create a sample data directly at the root of your
project. Sample data, with your image views constrained, you can
quickly switch between different types of images and set
different ratios so you can see how your layout responds without
having to run your app With text view, we have sample data. We
have cities. We have dates, full names. If you want your
own sample data, you can create it at the root of your project
and I think we support flat text files and JSON. With text
views, this is more important because you have text views for
open-ended content. We have domestic shorthair is a
very short description. On the right, this is a bunch of text
plopped in there. And so, you know, without having to run our
app, you can see, just with sample data, how your layout
responds So, I don’t need to copy lorm ipson off the internet
anymore? [Laughter]
I think as you mentioned, this is great for testing out across
different languages. And so, with that, I’ll hand it
back to Sean to talk about some new features.
Thanks, Chris. So, that covers everything in 1.1 and
1.2. Now I want to move on to new features coming out in
ConstraintLayout 2.0. Who has tried playing with motion editor
already? So, I see five people. So, hopefully we can give, like,
a nice introduction here to the basic concepts and Chris is
going to talk about the design surface.
So motion layout allows you to build dynamic layouts using all
the features of ConstraintLayout we talked about earlier and
changing the constraints over time. We see building a
collapsible header that Chris Banes put together. So you can
see that title image actually hides itself behind the view as
it scrolls up. It’s a pretty dramatic animation. Before we
get to something like that, let’s talk about what we can
build with MotionLayout, it can be used to build collapsible
headers, state feedback or transitions, maybe the open and
closed state of a draw and you can make most of the animations
in this presentation, as well. To understand motion and
animation, it’s really important to take a step back and think
about what defines an animation, not just on Android, but a
Disney movie. They are defined by a start and an end. I start
over here. I’m here. And then I’m ending over here. And in
betweenbetween, over time, I created an animation so that’s a
very complex motion. Let’s talk about a simple one. I’m going
to put a blue dot on the screen and I’d like to build an
animation. In order to do that, I have to define a start. I
have to define an end. I’m going to put that in the bottom
right-corner with constraints and in order to build a
animation, all I do is transition from one to the end.
That’s what MotionLayout will do for you. It’ll figure out how
to transition that blue dot from the start, down to the end.
To build a MotionLayout, you have to start with a
MotionLayout in your XML. We did that so it would
have all the features of ConstraintLayout. It points to
a motion scene, which is a separate XML file and you encode
the start and end information that defines your animation.
The start and end are defined in terms of constraint sets. What
are constraint sets? You may be familiar with this already.
What we’ve been talking about, so far, is this. The views, the
actual labels, plus all of the constraints and all of the
sizing information. A constraint set is just this
part. Just the constraints and just the sizing information. It
points to IDs of actual views but doesn’t contain the views.
If I animate a constraint, it would look like that. If I
applied it to an animation, it would look like that.
Let’s build a fairly easy-to-follow-along easy
layout. Here, we have a pretty dramatic reveal animation, the
title comes up top, the subtitle expands down below. At the same
time, the image in the background is resizing itself.
So there’s a lot of things going on. This might actually be hard
to write in code, but it’s fairly easy to write using
MotionLayout. So, let’s take a look at how we’re going to do
that. To make a MotionLayout, I’m just going to add — I’m
going to define the start and the end. The start, I’m going
to move the title off the screen and do that by making a
constraint to the end of the title to the start of the view
— to the ConstraintLayout. ConstraintLayout’s very happy to
lay your views out of it, if you ask it to.
We’re also going to do the same thing on the bottom, where we’re
going to put a constraint to push the description text off
the screen. Then to actually build that, we’re going to go
ahead and make a MotionLayout. This is a subclass of
ConstraintLayout. It has a layout tag. Here, I’m going to
call it spacing. Then I have to define my layout, which is just
the views. I don’t give it widths and heights, I don’t
constrain anything. I’m literally just going to make a
list of three text views and an image view.
Now I’m going to go to the file I was talking about earlier.
This is the motion scene file. Inside that, it defines a
transition and a transition has a start and an end. Again,
that’s the thing that defines an animation. An animation always
has a beginning and it always has an end. To define start,
I’m going to make a constraint set and a constraint set is just
a tag. It has an ide — ID.
But, we’ll say which ID, I’m going to set its height, width
and padding. I’ll constrain — to push it off the screen, I’m
going to constrain to the end of parent and do the same thing for
the constrain set, end. I’m going to go ahead and make the
title view constrain the start to the start of parent. And
this brings that title on to the screen. I’m going to do the
same thing for all the other views in this layout, as well.
It’s a clarity of XML and I built this animation.
So now I’m going to pass it back to Chris, who’s going to talk a
little bit more about — The motion editor. So, at I/O
this year, we gave you a sneak peek of the motion editor.
We’ve been working pretty hard on it but it’s not quit ready
yet. We wanted to make sure that we focus on getting some
foundational pieces in place before we release it. We don’t
want to be too impatient and get it right, unlike things like
instant run. [Laughter]
The motion library has been out for awhile. We wanted to nail
the right animation concepts and roles required. The library
also needs to be performant.
John and Nicolas has been working hard on the library.
They love your feedback and all the cool demos that have been
coming out. So, please, keep it coming and thank you.
The other thing is the quality of Android Studio. It’s been
the primary focus for us in 3.3 and the upcoming 3.4 release.
We’ve made performance and interactions improvements
because it has to be able to render animations at 60 frames
per second and making it easier to work with constraints because
you have to know how to use ConstraintLayout and so we think
that if we invest in the quality now in the tool, it will
actually make the MotionLayout editor better With that, I’m
here to show you very early exploration of the motion
editor. This is mockups, not the build. I’m the designer.
This is all made in Photo Shop.
Feel free to find me or Sean and I think John’s here, too.
If we take the example from before we have the space picture
and text views animating in, let’s use that as the context of
what we’ll see in the motion editor. So, what does that
actually look like? So, here, we have a new perspective on the
component tree, for now we’re calling the transitions view.
For the purpose of the talk, I’m going to talk about this new
view because we think this is the most significant KNT part.
You’ll have the property panel and pallet. We haven’t quite
figured out the details about how that integrates with the
timeline or view. So, stay tuned for that.
So, in this case, we have the start of the transition and so
you can see the text views are off the view port but you can
see there’s a motion path that goes from outside and in. We
don’t actually render the text views outside the view port
today, but that’s definitely something we will need to have
for animation because we know that’s a very typical animation
example to have things fly in. And so if we kind of fast
forward halfway through the transition, you can see the text
views have moved halfway in and we have the space image zoomed
back out and that’s kind of what we intended here. So, if we
rewind, let’s go deeper on what this transition view actually
does. So, we’re only showing one transition now and it’s
uniquely named by its start and end constraint set. You can
have multiple transitions per MotionLayout and so with this
drop-down, you’ll be able to switch between the different
transitions and we’ll load the corresponding constraint sets
and change the timelines so you can see how the components
change. Each transition has its own properties: the start and end constraint sets, and a duration expressed in milliseconds.
So, if we move down, we have the timeline. Starting from the left, you have the playback bar: you can loop the animation as many times as you want, and you can quickly jump to the start or the end. If you want to speed up or slow down the animation, we allow that as well, just to tune the animation perfectly. And we have this time control here so you can actually step through millisecond by millisecond. Then, for the timeline itself, we show from zero to 100, 100 being the end. You can use this slider here to make the timeline bigger or smaller, depending on which part of the transition you want to focus on.
And as we move down, we see all the components that you can animate in the MotionLayout. Each component shows its start and end constraint set, which are required to animate anything. And if we look specifically at this space flash image, it has a key attribute, or keyframe, which we will change halfway through the animation. If we zoom out, these correspond to the same things: here are the components I'm animating, where they're starting and ending, and what the actual motion paths are.
And so, that’s kind of where we are with the motion editor. We
hope to get it out soon, but I can’t promise anything.
Next year. [Laughter]
Some time in 2019. And that’s it. [Applause]
Everyone, the next session, in this theater, will begin at
5:40. Thank you.
Welcome back, everyone. The program is going to get under way in about two minutes. Our program will resume in two and a half minutes.
[No audio] In the end, I will talk about what we're working on. Two years ago, Google's CEO announced that we are moving from a mobile-first world to an AI-first world. For mobile, that means machine learning. Year after year, more and more mobile apps are using machine learning to produce fascinating user experiences. More and more machine learning logic is shifting from servers running in the cloud to the mobile device in your pocket. This has many advantages. It's fast, and it does not incur server costs. It also runs anytime, anywhere, with and without network connectivity. And it provides better protection for user privacy, since the data does not have to leave the device. Let
me take a quick poll here. How many of you traveled from outside of California to attend this dev summit? Wow! Welcome. Welcome to Silicon Valley. I hope you get a chance to visit the Golden Gate Bridge and enjoy some good food.
Talking about food, my biological clock tells me it's time for dinner. So, where should I eat? Well, I'll pull out my phone and ask the Google Assistant: Hey Google, I'm hungry, any good restaurants around here? It recommends where to go for dinner around the area and location I'm currently at. Whenever I travel to a new place, I also like to take a lot of photos. Mobile devices can now run increasingly sophisticated machine learning tasks. Last spring, I
took my family to Tuscany in Italy. We drove around those beautiful hilltop towns. There are many signs, but I don't speak Italian, unfortunately. I wish I did. How do I know what a sign means? Here's where Google Translate came to the rescue. First, it recognizes the text in the image. Then it uses natural language processing to translate the text from one language into another. Finally, it uses text-to-speech to convert the text into voice and tell me what the sign says, in my own language. All of these steps involve machine learning, and the result is a seamless and powerful user experience.
This all looks great. But as a developer, how do I build something like this? Machine learning requires specialized knowledge and years of experience. It requires a large amount of good, high-quality data. The mobile device has very limited computing power, and models that run on a server in the cloud are often too large or too complex, so you need to spend a lot of effort optimizing a model for mobile usage. And finally, after the app is built, you need to worry about how to deploy the model, which becomes another headache.
To address all these problems, we launched ML Kit, which helps mobile developers build Android and iOS apps using machine learning technologies. ML Kit is aimed at making machine learning easy for mobile developers. Just because you want to use machine learning on mobile does not mean you need to worry about collecting data, building models, optimizing, hosting, deployment and downloading; ML Kit will take care of all of this for you. We provide common models that work out of the box. They are optimized for speed, accuracy and efficiency on mobile devices, and we provide one consistent API across both Android and iOS.
For commonly-needed machine learning tasks, we have base APIs that come with pretrained Google models that work out of the box. There are five APIs we're supporting. The text recognition API runs both on-device and in the cloud; it can recognize Latin characters as well as a wide range of languages and special characters. The face detection API supports detecting faces in images as well as in live video streams. We also have contour detection, which can help you identify different parts of the face and then apply a face mask. The barcode scanning API can be used to detect one-dimensional and two-dimensional barcodes. The image labeling API can detect objects inside a photo, both on-device and in the cloud: the on-device model covers most of the common things you see in photos, while the cloud API can support 10,000 labels across many categories. Finally, our landmark API can recognize well-known places in a photo, like the White House or the Eiffel Tower. If this does not fit your needs and you're an
experienced developer with the knowledge of how to build and train models, you're more than welcome to bring your own custom model. We run it on TensorFlow Lite: TensorFlow is an open source framework for machine learning, and TensorFlow Lite is optimized for mobile platforms. For models trained with TensorFlow, we provide tools to convert and compress them into the TensorFlow Lite format. When you're using a custom model, you can bundle it inside your app or host it in the cloud. If you choose the latter option, hosting it in the cloud, it does not mean you need to build your own cloud server. ML Kit will handle that for you: we will manage the model hosting, deployment, downloading, upgrades and the ongoing experimentation. Since ML Kit was launched six
months ago at Google I/O, we have made several enhancements. First, we greatly enhanced our face detection model, which is now 18 times faster and 13% to 24% more accurate. We also polished our text recognition APIs, making them more streamlined and consistent across both on-device and the cloud. In addition, we launched face contour detection. You can now use the API to identify the contours of the face in a photo, including the entire face, both eyebrows, the eyes, the nose and the lips. This is how realtime apps can put a face mask, like goggles or a funny nose, on the face and make the mask move with the face in a live video stream.
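As a minimal sketch of how the contour API described here might be called, using the Firebase ML Kit Vision classes of that era; the bitmap source and the overlay rendering are left as assumptions:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.face.FirebaseVisionFaceContour
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions

// Minimal sketch: detect face contours in a single bitmap with the
// Firebase ML Kit vision APIs. How you obtain the bitmap is up to you.
fun detectFaceContours(bitmap: Bitmap) {
    val options = FirebaseVisionFaceDetectorOptions.Builder()
        .setPerformanceMode(FirebaseVisionFaceDetectorOptions.FAST)      // fast mode, good for realtime
        .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)  // face oval, eyebrows, eyes, nose, lips
        .build()

    val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
    val image = FirebaseVisionImage.fromBitmap(bitmap)

    detector.detectInImage(image)
        .addOnSuccessListener { faces ->
            for (face in faces) {
                // Each contour is a list of points you can draw an overlay with.
                val faceOval = face.getContour(FirebaseVisionFaceContour.FACE).points
                // ... render the mask or overlay here
            }
        }
        .addOnFailureListener { e ->
            // Handle the error (for example, log it)
        }
}
```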
Next, I'm going to share some tips and best practices for how to use ML Kit so you can build impressive mobile apps using machine learning.
First, make sure the input image is in focus: poor image focus can hurt the accuracy. Second, you should ensure the image has sufficient size. For example, for face detection, you should have at least 100-by-100 pixels for each face. If you want detection in selfie mode, it should be 200-by-200. For the text recognition API, each character should be at least 16-by-16 pixels. If you use our cloud API to recognize Chinese, Japanese and Korean, each character should be at least 24-by-24. Similarly, barcodes have size requirements. Please check out the online documentation for more details.
Machine learning models and libraries can be large, which can slow down the app download. There are two ways to reduce the APK size. First, you can build your app as an Android App Bundle. By doing that, you enable Google Play to automatically generate APKs for specific screen densities, architectures, as well as languages. Your users only have to download the APKs that match their device configuration. Another way you can reduce APK size is, if machine learning is not the primary purpose of your app, you can move the machine learning features that require ML Kit into a dynamic feature module. That way, you prevent users from downloading the model, which sometimes can be
large. We all know machine learning involves a lot of computation, so speed becomes really important. Here are some tips on how to improve it. You can reduce the image resolution and the video frame rate to limit the amount of computation involved. You should also throttle incoming video frames while the current frame is still being processed; otherwise frames pile up, which increases memory usage as well as slowing down performance.
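One common way to do that throttling, sketched here with a hypothetical camera callback and detection function (not code from the talk):

```kotlin
import java.util.concurrent.atomic.AtomicBoolean

// Minimal sketch of frame throttling: skip new camera frames while the
// previous detection is still in flight, instead of queueing them all up.
// `runDetection` stands in for whatever ML Kit call you are making; the
// callback shapes here are hypothetical.
class ThrottledFrameProcessor(
    private val runDetection: (frame: ByteArray, onDone: () -> Unit) -> Unit
) {
    private val isProcessing = AtomicBoolean(false)

    fun onFrameAvailable(frame: ByteArray) {
        // If a frame is already being processed, drop this one.
        if (!isProcessing.compareAndSet(false, true)) return

        runDetection(frame) {
            // Detection (and any rendering) finished; accept the next frame.
            isProcessing.set(false)
        }
    }
}
```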
For realtime face detection, you should use the fast mode, which luckily is the default mode. Oftentimes, a lower resolution is sufficient for face detection. For realtime processing, you should also choose between contour detection and classification or landmark detection, but not both, because doing both could be expensive and may not fit realtime processing on a slow device. Another tip is that you should wait for detection to finish before rendering the face and the contour together. You can check out our online quickstart app on GitHub
for more details. To illustrate what I mean, I
will do a live demo. Let's switch to the demo mode. So, for the purpose of the demo, I'm using a slower, 3-year-old Nexus 5X phone. In the first video, I'm going to show you what happens without any of these tips: we do not throttle frames or wait before drawing to keep the contour and the face together. So if you just call the API without any performance improvements, you can see the contours are not aligned with the face, and there's a big gap between the two. All right. So, now I switch to another version, after applying the performance tips. This version is using the exact same Nexus 5X phone, but we also wait for the detection to finish before rendering both the face and the contour. As you can see now, the contours stay aligned with the face all the time. There's no more gap. Cool.
So, let’s switch back to the slides.
[Applause] If you're using our Custom Model API, how to include the model is something you should consider. There are two ways: you can bundle it inside your app, or host it in the cloud. If you bundle the model in the app, it's available immediately and doesn't need any downloading, but you get a bigger app because the app contains the model, which may slow down the app download. Also, you cannot change the model without a new app release. On the other hand, if you host the model in the cloud, we provide all the hosting support for you. You get a smaller app size, because the app does not contain the model, which translates into a faster installation. You can also choose to download the model only when it's needed, and model updates can come over the air into the app without any new app release.
You can also use Remote Config and A/B testing provided by Firebase. The drawback of hosting the model in the cloud is, obviously, that it requires connectivity. When there's no connectivity, you cannot download the model, and the model will not be available until it has been downloaded. So, a third option is a hybrid approach: you can bundle the model in the app, so it's usable right away, and then receive model updates over the air from the cloud. If you're using our base APIs, they are provided in two different forms. The first type is thin SDKs. The model is actually provided by Google Play services, so it's shared across all apps and the app itself does not have to contain the model, which will make your app smaller. Text recognition is provided through a thin SDK. The second type is thick SDKs. The models are bundled inside the SDK, so each app will have its own copy of the model, which will increase the app size. Face detection and image labeling are supported through thick SDKs.
To use the SDKs provided by ML Kit, you need to include the appropriate ML Kit dependencies in your app's build file. Inside the dependencies section, if you want to use the APIs supported through the thin SDK, you should add the firebase-ml-vision dependency. In addition, if you want to use a thick SDK, you should still keep that line, because all the API entry points come from the thin SDK dependency, but you also need to add additional dependencies. If you want to use face detection with contours, for example, you need to add its model dependency, because it's a thick SDK. Similarly, for image labeling, you need to add the image labeling model dependency.
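As a sketch, assuming the Firebase ML Kit artifact names of this era and purely illustrative version numbers, that dependencies block might look like this in a Kotlin DSL build file:

```kotlin
// Sketch of an app-level build.gradle.kts dependencies block for ML Kit
// (Firebase) circa 2018. Version numbers are illustrative only.
dependencies {
    // Thin SDK: API entry points; the text recognition model comes from Google Play services.
    implementation("com.google.firebase:firebase-ml-vision:18.0.1")

    // Thick SDKs: models bundled with the app.
    implementation("com.google.firebase:firebase-ml-vision-face-model:17.0.2")        // face contour detection
    implementation("com.google.firebase:firebase-ml-vision-image-label-model:17.0.2") // on-device image labeling
}
```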
Next, I will talk about a few new areas we're currently working on. A few new ML Kit features are either under development or in an early testing phase. We started with natural language processing; we have smart reply. We are also planning to go into other areas, like speech. At the same time, we will continue to enhance the performance and accuracy of the base APIs.
We launched a model compression and conversion service into our alpha program, which helps developers convert and compress large models into smaller and faster versions for mobile usage. The conversion service, currently in alpha, uses pruning, quantization and transfer learning to retrain large models and make them smaller and faster without sacrificing too much accuracy.
Fishbrain is an app that allows you to share photos of your catch. It can identify any fish from a photo. Their model is more than 80 megabytes. By using our conversion and compression service, they were able to reduce it, and not only did they maintain the same level of accuracy, it is actually slightly better. If you are interested in trying out our model compression service, please join our alpha program by signing up today at g.co/firebase/sign-up.
I hope you enjoyed the talk today and can take home some
tips and I can’t wait to see what you will build with ML Kit.
If you have questions, I’ll be outside in the lounge and we’ll
also be in office hours. Thanks so much for listening. [Applause]
[Applause] Hi, everyone. Welcome to Modern Android Notifications.
My name is Jingyu. And I'm Paul Matthews, a developer advocate in London. So, three years ago, on this stage, Chris Ren gave this quote, and it's a brilliant one: don't annoy the user; respect them, empower them, delight them, connect them to the people they care about. And this is still very much true today. So, we'll look at channels and how you can use them in your app, what's new in notifications, and
finally, digital well-being. First, how to respect your users. So, respect your user's attention. Don't annoy the user; respect them. Some useful tips: do respect the user's settings. If they've communicated to you, in your app, that they want a certain setting for your notifications, then you should respect that. Don't try to override it, and don't try to ignore it. You should check that the notifications you're sending are not blocked, and that the user does still want to hear these notifications. And finally, if you're able to in your app, you should back up any notification settings they have told you about and make sure they're synced across installs and devices.
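A minimal sketch of that check, assuming a hypothetical channel ID:

```kotlin
import android.app.NotificationManager
import android.content.Context
import android.os.Build
import androidx.core.app.NotificationManagerCompat

// Minimal sketch: check whether the user has blocked your notifications
// entirely, or just a specific channel. "updates" is a hypothetical channel ID.
fun canPostToChannel(context: Context, channelId: String = "updates"): Boolean {
    // App-level switch: has the user blocked all notifications from this app?
    if (!NotificationManagerCompat.from(context).areNotificationsEnabled()) return false

    // Channel-level switch (channels exist from API 26 / Android O).
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        val manager = context.getSystemService(NotificationManager::class.java)
        val channel = manager.getNotificationChannel(channelId)
        if (channel != null && channel.importance == NotificationManager.IMPORTANCE_NONE) {
            return false // the user turned this channel off
        }
    }
    return true
}
```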
You should use well-structured notifications, making use of the styles, such as messaging style, inbox style and big-picture style. You should make sure your notifications are timely, using high-priority messages to ensure users get your notifications when you intend them to. And prioritize your notifications and make them look better.
So, some don'ts: don't send notifications and forget about them. We want you to use the platform features that are there to help you. For instance, auto-cancel, making sure they disappear. Timeouts: is it still relevant after four hours? And synchronizing across devices: if you know they use your app on multiple devices, you should try to synchronize the notifications they've read on one of them. So don't send notifications that
are not actionable. The point of notifications is that they’re
there to be used. By definition, the user wants to
know something, which means they generally need to do something.
So don’t send them a notification that says, hey, we
synced some things in the background.
Don't annoy the user: use only-alert-once, and don't let them get buzzed like crazy while they're standing up on stage presenting about notifications. Make sure the alerting is representative of what you want; if you're a chat app, maybe tune the group notification alert behavior. So, respect the user. Otherwise, they might just turn off your notifications, or they might choose to uninstall your app, which would be far worse. There are platform features that
deliberately enable notifications to be turned off. For instance, if notifications are being posted and the user keeps swiping them away, then in P we prompt the user: do you really care about this notification? Do you really want to see this content? This acts on channels, so if you're not describing your channels correctly, this can lead to some confusion and perhaps some lost notifications.
So, let's look more at notification channels. They provide granular control; channels are the way to empower your users. So let's look at how to use them. First of all, they're now required for all apps targeting API 26 and above, and that should be everywhere. They help you categorize your notifications, which helps the user interact with them. And finally, they allow the user to customize their settings, so the user has the final say. If you think something's important and they don't think it is, they can tell you this.
So, let's look at the best practices. Again, you should allow the users to manage their notifications through channel creation. You should allow them to deep link into settings to change these things; if they're expressing an interest in working with your notification channels, perhaps they want to be able to change the importance of something. Setting the right importance level for a notification channel seems like an obvious one, but it's so easy to overlook.
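A minimal sketch of creating such a channel with a deliberate importance level and a clear, user-facing description; the ID, name and importance here are hypothetical:

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import android.os.Build

// Minimal sketch: create a channel with a deliberate importance level and a
// description the user can understand. The ID and strings are hypothetical.
fun createFamilyChatChannel(context: Context) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.O) return // channels exist from API 26

    val channel = NotificationChannel(
        "family_chat",                      // stable ID you will post with
        "Family chat",                      // user-visible name
        NotificationManager.IMPORTANCE_HIGH // pick this deliberately, not by default
    ).apply {
        description = "Messages from your family group chat"
    }

    context.getSystemService(NotificationManager::class.java)
        .createNotificationChannel(channel)
}
```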
You should respect the user's settings, back them up when you can, and don't try to abuse them by deleting and re-creating channels. Other don'ts: only using one channel. This is a clear notification smell, if you like. If you've only got one channel in your application, there's probably something else you need to be looking at. If you provide poor descriptions, users won't understand what a channel is for and won't be able to use it properly. And don't post to the wrong channels or to blocked channels. The user is trying to communicate to you that they don't like this content, and you should respect that. Spamming the user with notification channels is not the best way to proceed, either.
Choosing your channels carefully can help. Think of your users when you choose your channels; think of how they might want to interact with your app. For instance, it's a bad idea to create channels around importance levels. That isn't what notification channels are for. You should group them around categories, like being tagged in a photo or tagged posts. You should also think about creating channels when more control is needed. For instance, if I'm on a chat app and have a general channel that all chat notifications come in on, but then I express an interest in controlling a family chat group, you should create a channel for it and allow the user to dive deeper and have more control. And create channels lazily: if the user never receives a particular kind of message through your app, you don't need to create the channel for it.
And then the user can provide feedback to you, to say, look, this is useful or this isn't useful, and you should listen to that. So, in Android P, we added broadcasts for listening to the blocking or changing state of your notification channels. You should understand those and react to them. You should maybe back them up, so that the next time you create a channel on a different device, it makes sense. You can also query these APIs at runtime to find out how the user interacts with your channels.
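A minimal sketch of reacting to those Android 9 block-state broadcasts (manifest registration is omitted, and the logging stands in for whatever backup you do):

```kotlin
import android.app.NotificationManager
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.util.Log

// Minimal sketch: listen for the Android 9 channel block-state broadcast so
// you can record (and perhaps back up) what the user turned off. Register this
// receiver in the manifest for
// NotificationManager.ACTION_NOTIFICATION_CHANNEL_BLOCK_STATE_CHANGED.
class ChannelBlockStateReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action == NotificationManager.ACTION_NOTIFICATION_CHANNEL_BLOCK_STATE_CHANGED) {
            val channelId = intent.getStringExtra(NotificationManager.EXTRA_NOTIFICATION_CHANNEL_ID)
            val blocked = intent.getBooleanExtra(NotificationManager.EXTRA_BLOCKED_STATE, false)
            // Persist this so the preference can survive reinstalls or other devices.
            Log.d("Notifications", "Channel $channelId blocked=$blocked")
        }
    }
}
```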
So, now let's look at what's new in notifications. Thank you, Paul. Okay. Now
let’s look at what else is new in notifications in Android 9.
We added updates to make notifications easier to read and scan through. We added more padding to notifications, and we went back to using rounded corners at the top and bottom. Users love the smooth app-opening animation you are seeing here on the slide: instead of closing the notification and then opening the app, the notification now transitions smoothly into the app. You need to make sure you're starting your activity directly and that your activity starts quickly. Since, for most users, the
notifications they care about the most are the ones connecting them with the people they care about, we enhanced the messaging experience by adding a new Person class. Once you target API 28, if you're using MessagingStyle in a notification, we now move people's avatars to the left of the notification, and you can set that avatar by using the setIcon method. We also added support for images and stickers in messaging notifications: by using setData, you can add an image to your messaging notification directly.
The other feature that I love on Android is direct reply. But sometimes when I'm replying to a notification, I accidentally tap on the notification itself, which opens the app, and my response is lost. In Android 9, you can help the user with this: by retrieving the draft, you can populate the response in your app. So, make the user experience better; delight them.
If you already support smart reply in your app, we highly recommend you use this API to also display the replies in your notification. Instead of typing a reply to the notification, the user can now just tap one of them to reply.
Okay, here's an example of using the new APIs. First, we're going to create a Person instance. We're setting the name, the URI and the icon for this person, and this is going to represent the sender of the message. Then we're going to pass that to the message that we're creating here. As you can see, we're passing the instance of the Person, not, like before, the name of that person. We also want to include the image, so we're using the setData method to include that image. And then, after that, we're adding this message along with another message, so we're adding two messages into this MessagingStyle notification, and we're setting the style on our notification.
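A minimal sketch of that walkthrough in Kotlin, where the channel ID, names, icon and image URI are hypothetical stand-ins:

```kotlin
import android.app.Notification
import android.content.Context
import android.net.Uri
import androidx.core.app.NotificationCompat
import androidx.core.app.Person
import androidx.core.graphics.drawable.IconCompat

// Minimal sketch of a MessagingStyle notification with a Person and an image.
fun buildMessagingNotification(context: Context, avatar: IconCompat, imageUri: Uri): Notification {
    // The sender of the message, with a name, URI and avatar icon.
    val sender = Person.Builder()
        .setName("Ada")                 // hypothetical contact
        .setUri("tel:+15551234567")     // hypothetical URI
        .setIcon(avatar)
        .build()

    // The user of this device, required by the MessagingStyle constructor.
    val me = Person.Builder().setName("Me").build()

    // A message carrying an image via setData (MIME type + content URI).
    val photoMessage = NotificationCompat.MessagingStyle.Message(
        "Check out this photo!", System.currentTimeMillis(), sender
    ).setData("image/", imageUri)

    val textMessage = NotificationCompat.MessagingStyle.Message(
        "See you tomorrow?", System.currentTimeMillis(), sender
    )

    val style = NotificationCompat.MessagingStyle(me)
        .addMessage(photoMessage)
        .addMessage(textMessage)

    return NotificationCompat.Builder(context, "family_chat")   // hypothetical channel ID
        .setSmallIcon(android.R.drawable.ic_dialog_email)        // placeholder icon
        .setStyle(style)
        .build()
}
```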
Okay. So, here's a quick summary of some of the dos and don'ts when you're using
MessagingStyle. First, please use MessagingStyle for messages.
And this also applies if you're using Android Auto or Android Wear: if you're sending messaging notifications, please use MessagingStyle. In the past, we've seen developers switching between styles in order to get that big-image expanded presentation, but now, with the setData method, you don't need to do that. You can use MessagingStyle, and this will create a consistent experience for the user. And it's always good to add the icon for the people in the notification, so we highly recommend you use setIcon to add the avatar. If you don't set it, we'll use the initial of the person's name. And finally, if your app supports smart reply, please add the replies into your notification so you're creating a better experience for the user.
And here are a few things you want to avoid. There are a lot of good reasons to auto-cancel a notification in order to give the user a clean and up-to-date notification drawer, but after they reply to a messaging notification is not one of those cases. You want to keep that notification there in case the user wants to come back to the conversation and reply again afterwards, so please don't cancel it; let the user swipe it away when they're finished with the conversation. The other bad behavior that we've seen in the past is some developers leaving the person's name empty in order to achieve a certain visual presentation. Please don't do that, for two reasons. One is because it will break the presentation on Android 9, and the other reason is because a person without a name is not a real person.
[Laughter] So, up until now, we've talked about how you can help your user connect with the people they care about and how you can make your notifications a better experience for the user, but I want to hit pause here and look at app usage from the other side. As much as I want to get that notification from my friends and family, I still need time away from the device. So, to help users with this, we announced digital well-being at I/O this year. If you have a device running Android 9, I highly recommend you download it from the Play Store and sign up for the beta. So, this is what digital well-being will show us. It provides an overview of our app usage: a dashboard that shows the time we spend in each app and the number of notifications we've received. I personally love to use it to learn where I'm spending my time, but sometimes I see that some apps are sending me notifications unexpectedly. One question you might have is how the notifications are counted. The goal is to track user interruptions. Any update that's visible to the user is counted as one. If you're sending a notification to a blocked channel, that is not counted here. So, in this case, I saw this app
is sending me lots of notifications so I got curious.
I went into the dashboard and opened that app to see the hourly breakdown. As you can see here, I got a notification every hour that day, and even at 4:00 a.m. I got eight notifications. So if these notifications were high importance, I would be woken up in the middle of the night. Thankfully, that's not the case. But these notifications are push notifications, and they are sent using high-priority messages, which means this app is constantly waking up a device in deep Doze. If I want to have good battery in the morning, I might uninstall this app. For now, I will turn on Do Not Disturb so I don't get disturbed. Do Not Disturb provides ways for users to disconnect and reduce interruptions. You can turn it on by flipping your device over, which is super convenient. But what if this is a super important notification that the user actually wants to receive? So for those, here's some advice
for you. First, set the right category on your notification. As we can see here, in the Do Not Disturb settings, the user can choose what to block and what to allow, and set exceptions for calls, reminders and events. If your notification belongs to one of them, please tag your notification as such. Here, I've listed a few categories which correspond to the exceptions on the other side. As I said, if your notification belongs to one of these categories, please let us know by tagging it. The other piece of advice we have is that if a notification is coming from another person, please tag your notification with that person. As you can see here, the user can choose who they want to get notified by. So please add that person to your notification, and add the associated URI if possible. These tags let important notifications bypass Do Not Disturb, but when users turn it on, they really don't want to be disturbed. So if you're sending a notification that's not expected, that will really annoy them. So please don't abuse these APIs.
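A minimal sketch of that tagging, with a hypothetical channel ID, name and URI:

```kotlin
import android.app.Notification
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.Person

// Minimal sketch: tag a notification with a category and the person it comes
// from, so the user's Do Not Disturb exceptions (calls, reminders, events,
// people) can apply. Channel ID, name and URI are hypothetical.
fun buildMessageFromPerson(context: Context): Notification {
    val sender = Person.Builder()
        .setName("Ada")
        .setUri("tel:+15551234567") // lets "allow from contacts/starred" rules match
        .build()

    return NotificationCompat.Builder(context, "family_chat")
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setCategory(NotificationCompat.CATEGORY_MESSAGE) // or CATEGORY_CALL / CATEGORY_REMINDER / CATEGORY_EVENT
        .addPerson(sender)
        .setContentTitle("Ada")
        .setContentText("Dinner at 7?")
        .build()
}
```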
This brings us back to the quote we had at the beginning. Whenever you're sending a
notification, don’t annoy the user, respect them, empower
them, delight them and connect them to the people they care
about. Thank you. [Applause]

Thanks for tuning into #AndroidDevSummit livestream for Day 1. Be sure to come back tomorrow for Day 2! Both Theater 1 and Theater 2 live streams start at 9:30 AM (PST)

    All recorded sessions will be available in this playlist → http://bit.ly/ADS18-Sessions

    Livestreams from Day 1 are still available:
    Day 1, Theater 1: https://www.youtube.com/watch?v=Wkl9GmluS7E
    Day 1, Theater 2: https://www.youtube.com/watch?v=OraUYXz1yFM

    Livestreams for Day 2:
    Day 2, Theater 1 → https://www.youtube.com/watch?v=UljafaxRcEE
    Day 2, Theater 2 → https://www.youtube.com/watch?v=mtEALHDWsSs

    Take a look at the event schedules here:
    Day 1 Schedule → https://developer.android.com/dev-summit/schedule/day1
    Day 2 Schedule → https://developer.android.com/dev-summit/schedule/day2
