Continuous Linting in Android

Matt Compton recently described how to create custom Lint detectors for Android. This is a great way to enforce your team's code design patterns, but it can be a big undertaking.
For example, when we built our custom .jar file that included all of our custom Lint checks, we had to ensure that each member of our team built the .jar and added it to his or her local ~/.android/lint directory. We also had to trust that each developer actually ran the Lint checks periodically as part of their normal development process. That's a lot to ask, especially when we already have so much to think about. We found that the best way to enforce our custom Lint checks (as well as the built-in checks) is to run Lint as part of our continuous integration build process.
I should note that I’ll talk about running things on Travis CI, but because we are using Gradle, most of this should apply to Jenkins as well.
Let's start by refining our build.gradle file to describe the behavior we want when running Lint. The Android Gradle plugin lets us specify a number of Lint-related parameters. Here, we will generate an HTML report, abort (i.e., fail) the build when Lint detects an error, and treat all warnings as errors.
apply plugin: 'com.android.application'
android {
...
lintOptions {
htmlReport true
htmlOutput file("lint-report.html")
abortOnError true
warningsAsErrors true
}
}
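With this configuration in place, a single Gradle invocation runs every check, writes lint-report.html into the module's directory and fails the build on any error or warning:

./gradlew lint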
Now we can just add the "lint" task to our .travis.yml file and run Lint alongside the rest of our build process.
language: android
jdk:
- oraclejdk8
android:
components:
...
script:
- ./gradlew clean assembleDebug test lint
If Lint detects a single error (or warning, since we are treating warnings as errors), the entire Travis build will fail just as if we had a failing test. This is great if we are starting with a brand-new Android project because it will keep our Lint error and warning count at zero.
But what about an existing project? We can’t just fix every existing Lint warning and error, then turn on Lint checking. We need a way to benefit from Lint on a project with existing errors and warnings.
When we ran Lint before, it performed all 200 built-in Lint checks, but we can also ask Lint to run only a subset of checks. Then we can fix all existing instances of a specific Issue and add a Lint check just for that Issue. This will ensure that we don’t add any violations in the future.
We can use the check attribute in our lintOptions block.
apply plugin: 'com.android.application'
android {
...
lintOptions {
htmlReport true
htmlOutput file("lint-report.html")
warningsAsErrors true
abortOnError true
check [IDs of Issues to run]
}
}
This will ignore all Lint checks except the ones listed. As you might guess, this list will get pretty long and bloat your build.gradle file as you check more Issues. You can clean that up by extracting those IDs to a separate Gradle file.
So let's update our build.gradle file:
apply plugin: 'com.android.application'
apply from: 'lint-checks.gradle'
android {
...
lintOptions {
htmlReport true
htmlOutput file("lint-report.html")
warningsAsErrors true
abortOnError true
check lintchecks
}
}
We can then list our Lint checks in lint-checks.gradle:
ext.lintchecks = [
'ExportedReceiver',
'UnusedResources',
'GradleDeprecated',
'OldTargetApi',
'ShowToast',
...
] as String[]
We haven't yet added our custom Lint checks to the CI server, so let's do that now. Using your favorite version control tool, include the custom Lint .jar in the project repo. I'm going to name and locate our .jar as [PROJECT_ROOT]/lint_rules/lint.jar. To let Travis know about this .jar, we have to set the ANDROID_LINT_JARS environment variable.
We can reduce clutter in the build.gradle file by running Lint from within a shell script. The first thing we need to do is call that shell script from the .travis.yml file:
language: android
jdk:
- oraclejdk8
android:
components:
...
script:
- ./gradlew clean assembleDebug test
- ./scripts/lint_script.sh
We can then set the environment variable and call the Lint Gradle task inside lint_script.sh:
#!/bin/bash
# file name and relative path to custom lint rules
CUSTOM_LINT_FILE="lint_rules/lint.jar"
# set directory of custom lint .jar
export ANDROID_LINT_JARS=$(pwd)/$CUSTOM_LINT_FILE
# run lint
./gradlew clean lint
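One detail that's easy to miss (my note, not part of the original setup): because .travis.yml executes the script directly, its executable bit must be set before you commit it:

chmod +x scripts/lint_script.sh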
That's all there is to it! Now all members of our team will have our custom Lint rules applied when they trigger a Travis build. There's no need for each team member to add the .jar file to ~/.android/lint on a local machine and run Lint locally.
Let's be honest: This still takes a lot of time. We tried to reduce the number of existing Lint Issues on a project by running only a subset of Lint checks, but we found that we were still asking a lot from our team. Somebody had to take the time to fix all occurrences of a specific Issue and then add that Issue to the check list.
This isn’t very realistic. It would be nice if we could set the existing error (or warning) count as a threshold and bar team members from exceeding that threshold. It would be even nicer if the threshold would go down when somebody took the time to fix Lint errors or warnings. Luckily, I’ve written a script that does just that.
The script fits into an existing .travis.yml file the same way we extracted Lint checks to lint_script.sh. The script will be run at every build, or wherever you choose to trigger it.
When you first add the script to a project, it will run all Lint checks, including your custom checks, and establish a “baseline” error or warning count. You will then be unable to exceed this count in future pull requests (or master merges, or wherever you call this script). Any developer who submits code that increases the error or warning count will cause the Travis build to fail. The Travis console will output:
$ ./scripts/lint-up.sh
======= starting Lint script ========
running Lint…
Ran lint on variant defaultFlavorDebug: 8 issues found
Ran lint on variant defaultFlavorRelease: 8 issues found
Wrote HTML report to file:/home/travis/build/.../lint-report.html
Wrote XML report to /home/travis/build/.../lint-results.xml
BUILD SUCCESSFUL
Total time: 1 mins 33.394 secs
found errors: 8
found warnings: 0
previous errors: 3
previous warnings: 0
FAIL: error count increased
The command "./scripts/lint-up.sh" exited with 1.
If a developer takes time to reduce the count, then a new baseline will be established. In this way, the total error and warning count will trend toward zero.
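The full script is linked at the end of this post. As a rough sketch of the idea it implements (the report path, baseline file name and XML parsing below are my assumptions, not the actual lint-up.sh), the core logic looks something like this:

#!/bin/bash
# Hypothetical sketch of the baseline idea, not the actual lint-up.sh.

# Run Lint; with abortOnError set this may exit nonzero, so we keep going
# and count issues from the XML report instead of stopping here.
./gradlew lint || true

REPORT="app/build/outputs/lint-results.xml"
errors=$(grep -c 'severity="Error"' "$REPORT")
warnings=$(grep -c 'severity="Warning"' "$REPORT")

# Load the previous baseline, or seed it from the current counts.
if [ -f lint-baseline.txt ]; then
  read prev_errors prev_warnings < lint-baseline.txt
else
  prev_errors=$errors
  prev_warnings=$warnings
fi

echo "found errors: $errors (previous: $prev_errors)"
echo "found warnings: $warnings (previous: $prev_warnings)"

if [ "$errors" -gt "$prev_errors" ] || [ "$warnings" -gt "$prev_warnings" ]; then
  echo "FAIL: error or warning count increased"
  exit 1
fi

# Counts held steady or dropped: record the new, lower baseline.
echo "$errors $warnings" > lint-baseline.txt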
This is a great way to get Lint running on your existing project, instead of saying, "Next time we'll add Lint from the beginning." You can check out the script on GitHub. Happy Linting!
Testing Android Product Flavors with Robolectric

In a previous blog post, I discussed integrating Android Studio, Gradle and Robolectric to perform unit testing as part of Android development. Some inquisitive commenters wanted to know if this setup supported testing multiple product flavors. The answer is the same as with most Robolectric-related topics: "Yes, but…" Since Robolectric 3.0 was released this week, now is a good time to explore what's needed to set everything up.
If you’re unfamiliar with product flavors, Javier Manzano has written a great explanation of product flavors and how to use them. He cites the documentation’s definition:
A product flavor defines a customized version of the application build by the project. A single project can have different flavors which change the generated application.
If you are familiar with product flavors, you’re also probably familiar with build types. If not, I point again to the documentation:
A build type allows configuration of how an application is packaged for debugging or release purpose. This concept is not meant to be used to create different versions of the same application. This is orthogonal to Product Flavor.
Build types define how our application is packaged, meaning that they are independent of product flavor. The combination of build types and product flavors creates build variants. These build variants represent the set of different ways our app is built and how it behaves. With all these different builds, we want to run tests on each variation, and possibly even different tests for different variations.
Let's introduce a quick example to illustrate this. We'll start with our original example but add "free" and "paid" product flavors with unique application IDs. Furthermore, let's add debug and release build types. This gives us four build variants: freeDebug, freeRelease, paidDebug and paidRelease.
To accomplish this, we update our build.gradle file:
apply plugin: 'com.android.application'
android {
compileSdkVersion 22
buildToolsVersion "22.0.1"
defaultConfig {
applicationId "com.example.joshskeen.myapplication"
minSdkVersion 16
targetSdkVersion 22
versionCode 1
versionName "1.0"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
debug {
applicationIdSuffix ".debug"
}
}
productFlavors {
free {
applicationId "com.example.joshskeen.myapplication.free"
}
paid {
applicationId "com.example.joshskeen.myapplication.paid"
}
}
}
I've forked our original example and added these changes.
Let's say that the release builds are the ones we put on the Play Store, and we give the debug builds to our QA team so they can see the logs. Let's also say that our paid version has some fancy feature, and our free version has a button to sign up for the paid version. As you can imagine, we want different tests for each of the variants, so let's write some Robolectric unit tests.
In our example, we want to ensure that logging is happening on our debug builds. It doesn't make sense to run this test on our release builds; in fact, we don't want to, because it would (hopefully) fail.
@Test
public void sendingWebRequestCreatesLogStatement() {
if (BuildConfig.DEBUG) {
// assert that logs are printed
}
}
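As a sketch of what that assertion might check, using Robolectric's ShadowLog (my choice for illustration; the sendWebRequest() call is a hypothetical method, not from the original post):

@Test
public void sendingWebRequestCreatesLogStatement() {
if (BuildConfig.DEBUG) {
myActivity.sendWebRequest(); // hypothetical method under test that should log
// Robolectric captures android.util.Log output in ShadowLog
assertFalse(ShadowLog.getLogs().isEmpty());
}
}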
We may also want to test our flavors separately. Inside our tests, we check the product flavor by checking the application ID:
@Test
public void clickingUpgradeButtonLaunchesUpgradeActivity() {
// startsWith() rather than equals(), because the debug build type appends the ".debug" applicationIdSuffix
String applicationId = BuildConfig.APPLICATION_ID;
if (applicationId.startsWith("com.example.joshskeen.myapplication.free")) {
// assert that button click does something
}
}
@Test
public void fancyFeatureDoesSomething() {
String applicationId = BuildConfig.APPLICATION_ID;
if (applicationId.startsWith("com.example.joshskeen.myapplication.paid")) {
// assert that feature does something
}
}
Out of the box, Android Studio wants to help you. When we run our tests, Android Studio will build each variant of our application and run our tests against each variant. This is one of the major benefits of running Gradle tests instead of JUnit tests.
In our case, our test suite is run four times, for our four build variants.
In a perfect world, this would actually happen. In reality, you'll get a nice error when you try. Using Robolectric's RobolectricGradleTestRunner causes a problem for these product flavors:
no such label com.example.joshskeen.myapplication.free.debug:string/app_name
android.content.res.Resources$NotFoundException: no such label com.example.joshskeen.myapplication.free.debug:string/app_name
at org.robolectric.util.ActivityController.getActivityTitle(ActivityController.java:104)
I'll save you the trouble of diving through the code yourself. The issue is that Robolectric can't find where Android Studio stashed your generated R.java file. Robolectric looks for it in the com.example.joshskeen.myapplication.free.debug directory for the free/debug build variant (and in similarly named directories for the other build variants), but that isn't where Android Studio puts it.
In order to understand the problem, we need to understand the difference between package name and application ID. The Build Tools Team has written a good description. Basically, you can think of the application ID as the outward-facing name of your application for unique identification, and the package name as the internal name of your application for organization of your .java files.
When I initially drafted this post, the then-current release candidate of Robolectric 3.0 confused the two. But they are not alone. Javier Manzano’s post mixes the two. Even the Build Tools documentation I linked to above makes the same mistake:
The manifest of a test application is always the same. However due to flavors being able to customize the package name of an application, it is important that the test manifest matches this package. To do this, test manifests are always generated.
As we've seen, the flavors don't customize the package name; they customize the application ID. Understanding the difference is the key to fixing this problem.
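To make the distinction concrete, here is roughly what the generated BuildConfig looks like for the free debug variant of our example (trimmed to the relevant fields; the values follow from the build.gradle above):

package com.example.joshskeen.myapplication; // the package name never changes
public final class BuildConfig {
public static final boolean DEBUG = true;
// flavor applicationId plus the build type's applicationIdSuffix
public static final String APPLICATION_ID = "com.example.joshskeen.myapplication.free.debug";
public static final String BUILD_TYPE = "debug";
public static final String FLAVOR = "free";
}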
Luckily for us, Robolectric has caught the problem and issued a fix as part of version 3.0. To use this new feature, you'll need to annotate each of your test classes to include the packageName configuration value:
@RunWith(RobolectricGradleTestRunner.class)
@Config(constants = BuildConfig.class, sdk = 21, packageName = "com.example.joshskeen.myapplication")
public class MyActivityTest {
...
}
This allows RobolectricGradleTestRunner to distinguish the application ID from the package name.
If you are using the previous stable version (2.4) of Robolectric and the base RobolectricTestRunner, you can continue to use that TestRunner. Support for specifying the application ID was added in this commit.
Now that we have our tests running successfully for different build variations, I want to go back and address how we've structured our tests. In the above test sample code, we had a single set of tests that was run for all build variations but executed different behavior inside the test with if statements. This isn't a very elegant solution: we'll end up with a bunch of "empty" tests this way.
In our example, the clickingUpgradeButtonLaunchesUpgradeActivity test would automatically pass for the paid version. What is the value in this? Nothing. Furthermore, this will slow down our test suite by running setup() unnecessarily. Luckily, we can separate our test code the same way we separate our product flavor code. By creating test folders that match our product flavor folders, we can run tests specific to product flavors. We can create testPaid and testFree directories, and put only the tests we want run on those flavors inside them. We can remove the if statements from our tests and they become pretty standard:
app/src/testFree/java/com.example.joshskeen.myapplication/MyActivityTest.java
@Test
public void clickingUpgradeButtonLaunchesUpgradeActivity() {
// assert that button click does something
}
app/src/testPaid/java/com.example.joshskeen.myapplication/MyActivityTest.java
@Test
public void fancyFeatureDoesSomething() {
// assert that feature does something
}
To test our debug builds, we can also add a testDebug directory and remove the if statement from these tests as well:
app/src/testDebug/java/com.example.joshskeen.myapplication/MyActivityDebugTest.java
@Test
public void sendingWebRequestCreatesLogStatement() {
// assert that logs are printed
}
It is important to note that when we run free debug or paid debug tests, two test files will be accessed: (free test OR paid test) and debug test. Therefore, we have to rename our debug test file. Here I've chosen to call it MyActivityDebugTest.java, but the name choice is up to you.
I should note that if you run the test suites for all of your build variants and one of those suites has a failing test, Android Studio will stop without proceeding to the other variants. This makes it a bit hard to figure out the source of the failing test. Is the test failing on just this one variant? Or on others as well? The only way we can know is to manually run the remaining suites, one at a time.
We can now leverage the benefits of building multiple product flavors and build types inside Android Studio without sacrificing the benefits of test-driven Android development, enabling us to continue to deliver high-quality products on this platform.
Google I/O 2015: Let's Imagine "What If"

At I/O 2015, Google's engineers announced an amazing array of new projects and technologies, in addition to updating us on the progress of existing projects. We Android Nerds were impressed with the new features coming in Android M, and the new developer tools in Android Studio.
And while those things certainly interest us in our day-to-day development, we also really geek out over the big ideas that are way down the road. One of my favorite things to do during I/O is to think about the possibilities these new technologies present, how the technologies might be combined, and what these technologies might ultimately mean for Android.
Project Jacquard is an effort to manufacture textile-based touch surfaces, much like our current phones and tablets, but flexible, stretchable and capable of withstanding all of the abuse we expect from modern textiles.
Project Jacquard introduces the possibility of new gesture detectors in Android. All of our UI Events in Android are based on interacting with a rigid surface. Our fingers touch, drag and lift. But with textile-based touch surfaces, we’ll have so many new gestures we can detect. Imagine all the interactions we already have with fabrics. We could see UIs responding to smear, pinch, palm press or crumple gestures. Imagine a drawing app working in conjunction with Project Jacquard. At any time, the user could clear her drawing canvas by crumpling the textile!
And we can think about more than just clothes—we can consider any surface that is textile-based. Imagine a multitouch remote control for your TV, sewn right into the arm of your sofa. What about touch controls sewn into the door panel of your car? Imagine swiping an area of the door panel, indistinguishable from the rest of the panel, which unlocks the doors or rolls down the window!
The real big idea behind Project Jacquard is not the multitouch sensor; it’s conductive yarn in general. Because Google is driving this technology forward, they are solving a lot of general problems associated with mass producing electronics embedded in washable, stretchable, durable textiles.
Google's goal was, of course, a capacitive touch matrix, but the sky really is the limit. Ivan Poupyrev was right when he called Project Jacquard a raw material. Imagine sewing an induction charging loop into the pocket of your jeans, wired to an embedded battery. Every time you put your phone back in your pocket, it starts recharging!
Project Soli is an attempt to capture hand gestures independently of a physical device. We already have a large hand motion vocabulary, so why can’t we observe those gestures without a touch device? The goal is to replace all input with gestures of the hand, such as sliding the thumb over the fingers or making a twisting motion of the thumb and forefinger.
I anticipate this will give us additional UI events in Android. I can imagine setting an OnTwistListener on a dial view or a SlideListener on a slider view. Imagine: instead of bug reporting with rage shake, we can listen for fist pump gestures!
The number of hand gestures is really endless. It would be amazing if the API allowed developers to implement their own gesture recognizers, perhaps by providing a training data set. Like the example of flicking a virtual soccer ball, your game app might want to implement a two-finger flick. If this gesture wasn’t recognized by the API, you could train the algorithm by providing your own sample data.
Project Tango has integrated 3D mapping and localization hardware and software into a handheld device. The user can build 3D maps of her environment simply by navigating through the environment, capturing video. The user can then “replay” that 3D map later via the built-in positioning sensors.
The possibilities that this technology presents are endless. Google has demoed some of the capabilities of Project Tango before, and expanded on those capabilities this year, as seen in this demo video. What is amazing is that the Project Tango form factor has moved down from tablet size to smart phone size. While this is still very early and only a few development partners will get one of these phones, it’s important to see the direction they are taking. I really expect to see some of these technologies make it into consumer smartphones.
For us as Android developers, we will have to deal with the fact that users will be generating a lot of 3D content. I’m excited to think about the UI patterns that will display and handle all of that content. If you had an app that cataloged people’s 3D room captures, would it make sense to list all of those videos as flat 2D Material cards? Maybe not.
I anticipate more augmented reality interactions with connected devices. Google owns Nest, and Nest works with dozens of connected devices, so I imagine greater cross-pollination between Project Tango and these devices.
Imagine that you’re sitting in the living room and you want to check the status of your Nest thermostat or Whirlpool refrigerator in the kitchen. Instead of pulling up the Nest or Whirlpool apps, you point your phone’s camera into the kitchen and the screen will show an augmented reality window displaying your entire kitchen with status bubbles hovering over each device!
Any company that has built an app to help users interact with the physical world can benefit from this technology. There are probably hundreds of apps by companies that already have brick-and-mortar stores. These apps are designed to capture more customers who might not come into the physical store. Eventually, this distinction between in-store and online customers will blur, and these apps will serve more people. Imagine an app that relies on augmented reality to navigate a customer to the correct aisle and shelf to find the product he is looking for.
Project Abacus hopes to remove the need for single-mode authentication (e.g., passwords, pattern recognition, facial recognition, fingerprint recognition) and fuse each of these modes into a multi-modal recognition which could provide an “authentication score.”
I can definitely see this being included in the Auth Play Services API. Just like FusedLocationProvider handles all of the sensor data available on a device to locate a user in physical space, we could see a FusedAuthenticationProvider that will handle all sensor data available on a device to help authenticate a user. A single call into Google Play Services could provide real-time authentication of the user of your application.
As developers, we would benefit from the most secure combination of sensor data available to each user. By relying on this Play Services API, our apps would get instant benefit as Google tweaked and improved the authentication algorithm. I could even imagine a parallel to setPriority() where we decide how secure an authentication we need from the service.
I also think it would be crazy to incorporate Project Jacquard into the multi-modal authentication. What if a Project Jacquard jacket had additional sensors around the jacket to interpret the body geometry of the wearer? This body geometry data could be fed into the authentication algorithm and “unlock” the touch surface only once the wearer had been verified.
If you are at all interested in these projects, I encourage you to read up on them. Each project has its own page on ATAP's site. Most of what I presented here are out-there ideas that I don't see coming any time soon, but one thing I love about Google I/O is that it lets me put on my Jules Verne hat and ask "what if?"
Triumph! Android Studio 1.2 Sneaks In Full Testing Support

I'm calling it. We've been monitoring the state of testing in the Android world for some time now, waiting for the day when testing would be fully baked into the Android development cycle. Well, that day has finally arrived.
I wrote about setting up unit testing in Android Studio back in January. If you were committed enough to get through the whole post, you’ll remember how tedious the setup was. We knew that better testing tools were in the works—the measures I described in my previous post were just a stopgap.
Today, all of those workarounds are completely unnecessary, thanks to the Android Tools team. Android Studio 1.2, which is currently in beta, removes any need for third-party workarounds or custom hacks to Robolectric. The groundwork was introduced in Android Studio 1.1, which was released back in February and included an experimental setting to enable unit testing. With last week's beta release of 1.2, that setting is no longer experimental; it's baked right into Android Studio. While the Nerds here at the Ranch are quite surprised that nary a word was mentioned in the release notice for v1.2, unit testing support is here to stay.
Today I’d like to walk through the greatly simplified steps to set up unit testing in Android Studio. I’ll then go back and explain how to transition away from the more complicated approach I described in January.
The unit testing how-to post that accompanied the Android Studio 1.1 release is no longer valid for Android Studio 1.2. The only step we need to take is to select the correct Test Artifact ("Unit Tests") in the Build Variants tool window.
That's it! We can now add our test dependencies and get to writing tests. Our build.gradle file looks like this:
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
...
testCompile 'junit:junit:4.12'
testCompile('org.robolectric:robolectric:3.0-rc2') {
exclude group: 'commons-logging', module: 'commons-logging'
exclude group: 'org.apache.httpcomponents', module: 'httpclient'
}
}
The keen reader will notice that we're relying on Robolectric's current release candidate for v3.0. We've tested this new feature in Android Studio against Robolectric 2.4, and it works as well. However, you will need to use a custom RobolectricTestRunner (see my prior post). Robolectric provides a nice guide on upgrading from 2.4 to 3.0.
A convenient addition to Robolectric 3.0 is an extension of RobolectricTestRunner specifically designed for the Gradle command line or Android Studio, called RobolectricGradleTestRunner. We can now annotate our test classes:
@RunWith(RobolectricGradleTestRunner.class)
@Config(constants = BuildConfig.class, emulateSdk = 21)
public class MyActivityTest {
@Before
public void setUp() throws Exception {
// setup
}
@Test
public void testSomething() throws Exception {
// test
}
}
Finally, we can run our tests as Gradle Tests from within Android Studio! [Editor’s note: see below for changes in Android Studio.]
That’s all there is to it. Now you have immediate feedback on all passing and failing tests right inside Android Studio.
If you followed my previous post, most of the transition requires removing existing code:
1. Remove the android-unit-test plugin classpath from your root build.gradle file:
classpath 'com.github.jcandksolutions.gradle:android-unit-test:2.1.1'
2. Remove the plugin from your app's build.gradle file:
apply plugin: 'android-unit-test'
3. Remove the task dependency from your app's build.gradle file:
afterEvaluate {
tasks.findByName("assembleDebug").dependsOn("testDebugClasses")
}
4. Remove your custom RobolectricTestRunner and use RobolectricGradleTestRunner instead:
@RunWith(RobolectricGradleTestRunner.class)
@Config(constants = BuildConfig.class, emulateSdk = 21)
public class MyActivityTest { ... }
Can it be all so simple? Yes! Check out Josh Skeen’s demo for a working implementation. It’s time to write some great tests and elevate the quality of your Android apps. Show us how you’re using Robolectric and the new testing support to write great tests.
Updated May 15: Several commenters noted that they had some issues when repeatedly running their tests. Here’s how to avoid the “Test events were not received” error.
We need to have Gradle re-run all of our individual tasks each time we run our tests, because Gradle tries to optimize the build by identifying tasks that don't need to be re-run.
Initial run:
...
:app:compileDebugUnitTestJava
:app:compileDebugUnitTestSources
:app:mockableAndroidJar UP-TO-DATE
:app:assembleDebugUnitTest
...
Re-run:
....
:app:compileDebugUnitTestJava UP-TO-DATE
:app:compileDebugUnitTestSources UP-TO-DATE
:app:mockableAndroidJar UP-TO-DATE
:app:assembleDebugUnitTest UP-TO-DATE
...
Unfortunately, Android Studio needs all of these tasks to be run. Otherwise, it will not know to respond to the running of our tests, and we'll get a "Test events were not received" warning. We can force Gradle to re-run all tasks by updating our run configurations.
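The original post showed this step as a screenshot. The underlying mechanism (my assumption, since the screenshot is gone) is Gradle's --rerun-tasks flag, which disables the UP-TO-DATE check; the command-line equivalent is:

./gradlew test --rerun-tasks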
Updated December 2015: Several commenters noted that they had some issues finding the “Run -> Gradle” test option in newer versions of Android Studio. Here’s what’s changed and how to fix it.
Android Studio has merged the JUnit and Gradle test options into just JUnit, so you just need to select Run 'MyActivityTest'.
Once you run a test, you’ll probably hit the error:
java.lang.RuntimeException: build/intermediates/bundles/debug/AndroidManifest.xml not found or not a file; it should point to your project's AndroidManifest.xml
All you need to do is update JUnit's working directory. You can do this per-test or per-project, but I prefer to just do it for all projects as an Android Studio default. Open up Run/Debug Configurations and navigate to Defaults -> JUnit. Then select the Configuration tab and, next to Working directory, select MODULE_DIR.
Nothing else from the initial post needs to change. You'll still rely on RobolectricGradleTestRunner.
How to Write Great Demo Apps

We love to teach here at Big Nerd Ranch, but we don't just teach classes. We write blog posts and speak at conferences, too. For all of these activities, we inevitably have code we want to share. Sometimes the best way for us to do that is via a demo app.
This post describes some best practices we’ve come up with (rather informally) about crafting the most helpful demo app we can.
Empathy is at the root of all of these tips. When we teach, we make an effort to put ourselves in our students’ shoes. We remember what it’s like to learn something new, especially when the material is presented as a working example. We remember which demos have helped us. We also remember the ways that some demos actually made the material harder to learn.
In our demos, we focus on a single topic. We cover a lot of ground when we teach, so breaking a big idea into smaller parts is necessary.
For example, if we are demonstrating a new feature in Android or iOS, we present just that one topic. We don’t write demo apps that showcase the newest UI elements in Material design and introduce RecyclerView and discuss some new feature of RxJava. If we want to share knowledge about these disparate topics, we create a demo app for each.
Not all students are at the same skill level. While one student may have 10 years of development experience, another may be only a year in.
When it comes to creating a demo app, we try to err on the side of underestimating our audience's experience and choose to present as clearly as possible. Our goal is to help developers across a range of skill levels benefit from our demos.
Our brains do a good job of picking out new things we don’t understand and glossing over things that look familiar. This is especially true when reading code.
If an instructor is demonstrating some new topic in a bootcamp, we want the code bits that relate to that topic to stand out. One way to do this is to make all of the non-essential parts “disappear” by relying on those coding solutions that are most familiar.
Let’s say we’re demonstrating some new feature in RxJava, and we build a dummy fragment and view to visualize the results. If we throw in an additional advanced approach, such as applying Butterknife annotations to the widgets, it’s only going to clutter the relevant RxJava details. Anyone who isn’t familiar with Butterknife is going to see those annotations and get distracted by wondering, “What is this? What does it have to do with RxJava?”
Instead of throwing in a bunch of extraneous information, we stick to good old-fashioned findViewById(). This way, students will see findViewById(), recognize it and then ignore it. They can then continue scanning for code new to them.
Your entire demo app can’t be perfect. You will inevitably have to cut corners to set up some framework that is ancillary to the topic you’re demonstrating. A demo app is different from a production app; cutting corners is OK. Whenever you save time by cutting those corners or doing something you really shouldn’t, make sure you point it out. If you fail to do so, you risk two things:
More experienced developers will see your nasty code, and you will lose credibility with them. They will wonder about the value of the important parts of your demo. If you did a hacky job of setting up a model store to provide some dummy backing data, how good could your solution to the real problem actually be? Let those experienced audience members know that you recognize that you are cutting corners.
You don’t want your hacky solution in a demo to become a real solution in a novice’s production code. Novices might inspect your code, see that it works technically and then implement it as part of a project. We try to be good teachers and make sure that students aren’t picking up any bad habits.
The documentation techniques we use for production apps and libraries don't have the same audience as a demo app. GitHub is great at formatting a README.md file for easy viewing. This acts as a great table of contents for our demos. It gives us a chance to provide the audience with a high-level overview of what we are trying to showcase. Including screenshots, animated gifs and step-by-step setup instructions can really add value. Use them liberally.
Javadoc is great when you know what you're looking for. It is terrible, though, for someone learning something new. You aren't building documentation for an API that developers will use every day. Tailoring your documentation to less-experienced developers is incredibly helpful.
In production projects, there is a fine line between too much and too little commenting in code. In production, you run the risk of your comments becoming out of sync with the code as your team refactors. However, a demo project is usually updated so infrequently that this isn't really a problem. Remember that you are explaining something new to somebody. Walking the student through code, line by line, can be really helpful.
In production situations, developers have so much more context when it comes to reading and understanding code. They may already have a high-level overview of what the code is supposed to do. They may have heard other team members talking about the problems the code is trying to solve. The team may have already agreed on some design patterns that this code relies on. In-production comments serve a different purpose. They provide simple, high-level guidance.
For a developer seeing a demo project for the first time, none of these bits help them build a context about the workings of the code. They are both new to the project and new to the technique or library being showcased, so it can be really helpful to use lots of comments to make the logic and flow more obvious.
If your demo is best understood by actually interacting with it, rather than just looking at code, do your audience a favor and put your app out there. Build the project and put it on the Google Play Store or Apple's App Store, then link to it in your README.
Because students may not take the time to clone your repo, import it into an IDE, then build and deploy it to a local device, you should do whatever you can to help your audience start interacting with your app as quickly as possible.
These are just some techniques we use to write great demo apps. Let us know what you like to see in a good demo or share a link to your favorite demo app.
All in Together: Android Studio, Gradle and Robolectric

UPDATE: This post has been superseded by a newer version here.
In May of 2014, we made the switch to Android Studio as our preferred Android IDE, migrating away from Eclipse in our Android bootcamps and book.
As we started new app development projects for clients, we transitioned away from IntelliJ, our previous IDE of choice. This shift has allowed us to build a uniform skillset, enhancing both our teaching and consulting. However, we still had trouble integrating some of the tools and techniques that made our past projects so successful. Android Studio recently moved out of beta, so now is a good time to share our successes so far.
In our consulting projects, we've benefited from building a solid set of unit tests with Robolectric.
We’ve spent a lot of brainpower thinking about different design architectures and patterns. All of the many patterns we discuss rely on implementing some separation of concerns. While we haven’t come to a decision about which architecture is best for us—and this is always subject to change—we realized that Android Studio modules would help us achieve this separation.
We’ve put together a set of best practices for setting up a project that uses Android Studio, Gradle and Robolectric.
Our consulting projects are usually architected in multiple layers, with the model layer coded in pure Java. This means that there's no dependency on the Android framework. Breaking this layer out into its own Android Studio module provides us with several benefits.
Setting up a Java module in Android Studio is straightforward. Under "Project Structure," click "+" and then select "Java Library." Android Studio will generate the folder structure for this module and provide us with a separate build.gradle file. For this "core" module, the build.gradle file is relatively simple:
apply plugin: 'java'
dependencies {
compile fileTree(dir: 'libs', include: ['*.jar'])
testCompile 'junit:junit:4.+'
}
To include the core module in our app module, we add it as a local dependency:
apply plugin: 'com.android.application'
android {...}
dependencies {
compile project(':core')
}
One caveat worth mentioning: any test classes of the core module in the /core/src/test directory will not be compiled into the .jar that satisfies the dependency in the app module. If we need the core module to provide any test helpers or test doubles, we will have to add them as a separate dependency, as sketched below.
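One way to do that, sketched here under names I've chosen myself (testJar and testArtifacts are not from the original post), is to bundle the core module's test classes into a second artifact:

// in core/build.gradle: package the test helper classes into their own jar
task testJar(type: Jar) {
classifier = 'tests'
from sourceSets.test.output
}
configurations {
testArtifacts
}
artifacts {
testArtifacts testJar
}
// in app/build.gradle: depend on that jar from the app's tests
dependencies {
testCompile project(path: ':core', configuration: 'testArtifacts')
}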
Robolectric and Android Studio/Gradle do not work together out of the box. We found two different solutions to integrate them. Both solutions work, but we found one to be superior.
The first solution is to use Robolectric's provided Gradle plugin. However, there is a drawback: Robolectric and Android Studio each utilize a different version of JUnit. This causes a mismatch when compiling:
!!! JUnit version 3.8 or later expected:
java.lang.RuntimeException: Stub!
at junit.runner.BaseTestRunner.<init>(BaseTestRunner.java:5)
at junit.textui.TestRunner.<init>(TestRunner.java:54)
at junit.textui.TestRunner.<init>(TestRunner.java:48)
at junit.textui.TestRunner.<init>(TestRunner.java:41)
Android Studio builds the dependency .iml file for us, so there is no way to prioritize these dependencies by manually setting the order of the entries. The Robolectric documentation says:
Android Studio aggressively re-writes your dependencies list (your .iml file) and subverts the technique used above to get the Android SDK to the bottom of the classpath. You will get the dreaded Stub! exception every time you re-open the project (and possibly more often). For this reason we currently recommend IntelliJ; we hope this can be solved in the future.
This hangup doesn’t entirely prevent us from using Robolectric with Gradle. We just lose the ability to rely on the IDE to speed up running tests and debugging. We are forced to use Gradle from the command line. We get very poor information on failing tests. Furthermore, we lose the ability to debug tests and set breakpoints.
Luckily, another solution exists. JC&K Solutions has written a Gradle plugin to integrate with Robolectric, and Evan Tatarka has written an Android Studio plugin to integrate the JC&K Gradle plugin with Android Studio. This solution allows us to run Robolectric unit tests within Android Studio.
To start, we need to include the JC&K Gradle plugin in our root build.gradle file:
buildscript {
repositories {
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:1.0.0'
classpath 'com.github.jcandksolutions.gradle:android-unit-test:2.1.1'
}
}
allprojects {
repositories {
jcenter()
}
}
Next, we need to apply the Gradle plugin in our app's build.gradle file:
apply plugin: 'com.android.application'
android {...}
apply plugin: 'android-unit-test'
dependencies {
// core android studio module
compile project(':core')
// testing
testCompile 'org.robolectric:robolectric:2.4'
testCompile 'junit:junit:4.+'
}
Then we need to install the Android Studio plugin. From within Android Studio, navigate to Settings -> Plugins -> Browse Repositories… and search for “Android Studio Unit Test.”
Now is a good time to talk about some Gradle parameters that have been mismatched or left unexplained in other tutorials I've seen. The directory where our tests live is called a source set. When we create a new project in Android Studio, the IDE defaults to naming this directory androidTest (see the user guide for more details). However, the Android Studio Unit Test plugin will be looking for our tests in the test source set. To address this, we could create an alias to remap this source set by adding to our app's build.gradle file:
apply plugin: 'com.android.application'
android {...}
apply plugin: 'android-unit-test'
sourceSets {
androidTest.setRoot('src/test')
}
dependencies {...}
However, it is easier, and clearer, to instead rename the androidTest source set (i.e., rename the app/src/androidTest directory to app/src/test).
I have also seen several mismatches when naming the Gradle dependency configurations inside the dependencies block. The Java Gradle plugin is looking for a testCompile configuration. All test-related dependencies should be applied to that configuration. There is no need for additional configuration dependencies for androidTestCompile or instrumentTestCompile.
apply plugin: 'com.android.application'
android {...}
apply plugin: 'android-unit-test'
dependencies {
...
// testing
testCompile 'org.robolectric:robolectric:2.4'
testCompile 'junit:junit:4.+'
// these aren’t getting used
androidTestCompile 'some.other.library'
instrumentTestCompile 'additional.library'
}
As of Android Studio 0.8.9, the test classes are no longer compiled as part of the assembleDebug task. We have to manually compile them by setting a task dependency, which we accomplish by adding to our app's build.gradle file:
apply plugin: 'com.android.application'
android {...}
apply plugin: 'android-unit-test'
afterEvaluate {
tasks.findByName("assembleDebug").dependsOn("testDebugClasses")
}
dependencies {...}
For more information, check out this GitHub issue.
The last thing to do is configure Robolectric to properly locate our app's AndroidManifest.xml file. This linking doesn't happen by default. There are two ways to set this up. The first way is to annotate each test class, hard-linking the test to the manifest:
@RunWith(RobolectricTestRunner.class)
@Config(manifest="./src/main/AndroidManifest.xml")
public class MyActivityTest {
...
}
The second option is to subclass RobolectricTestRunner to point to the manifest:
public class CustomTestRunner extends RobolectricTestRunner {
public CustomTestRunner(Class<?> testClass) throws InitializationError {
super(testClass);
}
@Override
protected AndroidManifest getAppManifest(Config config) {
String appRoot = "../path/to/app/src/main/";
String manifestPath = appRoot + "AndroidManifest.xml";
String resDir = appRoot + "res";
String assetsDir = appRoot + "assets";
AndroidManifest manifest = createAppManifest(Fs.fileFromPath(manifestPath),
Fs.fileFromPath(resDir),
Fs.fileFromPath(assetsDir));
manifest.setPackageName("com.my.package.name");
// Robolectric is already going to look in the 'app' dir ...
// so no need to add to package name
return manifest;
}
}
And then use this CustomTestRunner for each test class. You'll note that because we use a custom test runner, we have to manually tell Robolectric which API level to emulate. We are currently limited to API levels 16, 17 or 18.
@Config(emulateSdk = 18)
@RunWith(CustomTestRunner.class)
public class MyActivityTest {
...
}
I will leave it up to you to decide which of these methods is preferable.
Things are getting good, and looking better now. We have multiple Android Studio modules compiled into a single project. We have Gradle building our entire project for us. We have our Robolectric unit tests running smoothly, all within the IDE. Now it’s time to get down to business and build another sweet Android application!
I’d like to thank Josh Skeen and Ross Hambrick for their help with this post. Check out Josh’s sample repository for setting up Gradle + Android Studio + Robolectric.