The post TalkBack Crash Course appeared first on Big Nerd Ranch.
TalkBack is Google's screen reader for Android devices. It's hard to understand Android's accessibility issues without experiencing them yourself. Take 5 minutes to read this article, download this cheatsheet, and then go explore your app with TalkBack. You might be surprised by what you find.
What it sounds like: It reads out what’s on the screen.
When a screen reader is active, touches to the screen activate its responses. It acts like a go-between to explain what you’re pointing at. It also provides a gesture language to tell it how to interact with the thing you last pointed at. There are also TalkBack gestures for controlling the device in general, like triggering the Back button.
You can touch anywhere on the screen, listen to what the screen reader says, and if you’ve touched a button or something else you can interact with, ask the screen reader to click it for you by double-tapping.
Imagine you have finished typing an email. Now you need to click the Send button. It could take a long time to find the button just by probing the screen and listening to what is at each touch point.
So there’s an alternative. The screen reader keeps an item in focus. Touching the screen places the focus on the touched item. But from there, you can “look around” that point by swiping left and right. This works like using Tab and Shift-Tab to navigate a form in your browser.
This notion of “focus” also lets you act on the current focus: click a button, start editing in a text field, or nudge a slider. Unlike the normal touch gestures used to do these things, TalkBack’s gestures are addressed to the screen as a whole. You can double-tap anywhere on the home screen to click a focused button.
To make this easier in the future, you may want to configure a volume key shortcut.
Head back into Settings and turn TalkBack off.
To make toggling TalkBack on and off easier, you can enable the suspend and resume shortcut in the “Miscellaneous” section of TalkBack Settings.
TalkBack is controlled entirely with one finger.
Gestures with two or more fingers will not be handled by TalkBack. They’ll be sent directly to the underlying view. Two or more fingers will “pierce the veil”, so you can pinch-to-zoom or scroll the same as ever.
Touch, listen. Touch somewhere else, listen again. You can also touch-and-drag to more rapidly explore the screen.
How is this useful?
Google's keyboard supports a variant of explore-by-touch, which combines the "find" and "activate" gestures to speed up typing.
Some third-party keyboards follow Google’s example.
Others do not – sometimes, by choice; other times, seemingly, out of ignorance.
Maybe you're wondering what swiping up and down does. Vertical swipes change the navigation setting, which tweaks what left and right swipes do: instead of moving element by element, they can step through a more specific list of things, like "all headings" or "all links".
Swipes are also how you scroll:
And they provide a way to reliably jump focus around the screen:
As a bonus, you can use the local context menu (more on this below) to ask TalkBack to read all the links in a block of text, without you having to cursor through the list yourself.
For this, you’ll use angle gestures. These go one direction, then 90 degrees in another direction.
These also let you trigger some other system-level actions, like showing the notifications.
For example, you may want to only focus on headings or links.
(If you’ve used iOS VoiceOver, this is kind of like some of the Rotor options.)
Quickly swiping out and then back to where you started in a continuous motion either jumps focus or scrolls the screen.
(Though if a slider is focused, its thumb “scrolls” rather than the screen.)
If you know the item you want to focus is near the top or bottom of the screen, these gestures can help you focus that item faster.
You can also build muscle memory for the controls in an app relative to these anchor points.
You can also use two fingers to scroll like always, because two-finger touches are ignored by TalkBack.
Actions like Back, Home, and Overview once had hardware buttons.
They still occupy a privileged place in the UI.
TalkBack also gives them pride of place: they have their own dedicated gestures.
The angle gestures equivalent to the hardware buttons involve swiping to the left:
Angle gestures that involve swiping to the right are more peculiar to TalkBack:
TalkBack isn’t the only assistive tech available on Android. Here are several other unique ways people might be interacting with your app:
It navigates a virtual tree of accessibility nodes. Luckily, SDK classes take care of building these nodes in most cases. Tweaking the tree can improve the experience, though. And if you’re building a custom view, or abusing a stock one, you’ll need to work a bit to make it accessible.
TalkBack will send performClick() and performLongClick() as needed.
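For a simple custom view, much of that work is just routing your behavior through hooks TalkBack already understands. Here is a minimal Kotlin sketch; the view and its toggle behavior are invented for illustration, not taken from any particular app:

import android.content.Context
import android.util.AttributeSet
import android.view.View

class RatingDotView(context: Context, attrs: AttributeSet? = null) : View(context, attrs) {

    init {
        // Focusable + clickable means TalkBack will describe the view
        // and offer double-tap-to-activate.
        isFocusable = true
        isClickable = true
        contentDescription = "Rating dot" // normally a string resource
    }

    override fun performClick(): Boolean {
        super.performClick() // fires the accessibility event and any OnClickListener
        toggleSelected()
        return true
    }

    private fun toggleSelected() {
        isSelected = !isSelected
        invalidate()
    }
}

Because the work happens in performClick() rather than in a raw touch handler, TalkBack's double-tap gesture and a sighted user's tap go through the same code path.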
For more, dig into the android.view.accessibility documentation and follow the links from there.
For yet more, Google has published the TalkBack and Switch Access source code. Included is a test app that exercises the functionality of both. Playing with this test app would be a great way to see everything these tools can do.
The post Use Flutter to deliver the same experience across multiple platforms appeared first on Big Nerd Ranch.
Our goal for this year [2020] is that you should be able to run flutter create; flutter run and have your application run on Web browsers, macOS, Windows, Android, Fuchsia, and iOS, with support for hot reload, plugins, testing, and release mode builds. We intend to ensure that our Material Design widget library works well on all these platforms. (The Flutter Roadmap)
Flutter guarantees consistency by owning the entire user experience rather than deferring to per-platform UI toolkit components. Like a game engine, it takes control of both drawing and event handling itself. This is a marked contrast with React Native, which instead marshals native platform views into rendering and event handling on its behalf. Owning the pipeline lets Flutter reliably render content without dropping frames, with every pixel under its control. A wide array of widgets is available, and their behavior changes when you update your app, not in response to platform changes. This gives you great control at one main cost: Flutter apps do not automatically track changes in system styles and behaviors. Adopting Material Design mitigates that caveat, because Material Design apps follow that UI standard rather than any specific platform's.
This is an intentional tradeoff: Flutter’s bet is that the future is more consistently-branded experiences across platforms, where that consistency is owed first to the brand, secondly, if at all, to the platform. Its vision is “to provide a portable toolkit for building stunning experiences wherever you might want to paint pixels on the screen” (“Announcing Flutter 1.20” for one example, though restatements of this vision are many).
Flutter’s community seems small next to React Native, but large next to Multiplatform Kotlin. Its community is certainly very vocal and visible; blog posts, conferences, library updates, and other events and publications continue to stream out. Its “own the stack” approach does more to guarantee consistency across platforms than React Native can provide, and unlike Multiplatform Kotlin, it can readily share UI code across platforms. Also unlike the situation with Multiplatform Kotlin vs Kotlin/JVM, most Dart libraries also work with Flutter, so you won’t find yourself stuck using less-tested packages for common needs. Its hybrid compilation approach and framework design give developers rapid build-and-deploy with stateful hot-reload during build and test while guaranteeing end users fast apps with a consistent latency in release builds. (This consistency results from using ahead-of-time compilation without runtime JIT compilation. Using AOT compilation to native binary code speeds code loading because the code has already been processed for easy loading and running. Not using JIT avoids variation in performance and latency, because there is no JIT compiler variously optimizing and de-optimizing various codepaths based on the specific details of what code has been run when since app launch.)
I worried that custom rendering would lead to broken accessibility support. In fact, its accessibility support is solid: it builds and maintains a “Semantics tree” to represent accessibility elements as a core part of its rendering pipeline. There’s even automated test support for checking some standard accessibility guidelines, such as text contrast. Dynamic Type support is baked into the Flutter framework. I have not had a chance to investigate how well the stock UI components respect accessibility preferences like Reduce Motion or Bold Text, but those preferences are readily accessible, so it would be easy to accommodate them yourself.
I also worried about Flutter's localization support, because localization is often overlooked. But the Dart Intl package has robust i18n support, including handling plurals and gender in building localized strings. Number formatting is rather complete. Time support is weak: time zones beyond UTC and Local are not supported, and neither calendar math nor non-Gregorian calendars are provided. Overall, it's a judicious subset of ICU. It's not as automatic or comprehensive as iOS's localization system, which also resolves localized non-string assets and automatically selects and loads the appropriate locale configuration on your behalf, but all the pieces are there. And the community is filling gaps; for example, timezone delivers a zone-aware DateTime, while buddhist_datetime_dateformat handles formatting dates per the Buddhist calendar.
Code can be readily shared across platforms, including UI code. Accommodating platform differences, such as by varying the core information architecture, is not any more difficult than an if/else. You can get yourself into trouble with plugins, which are packages with platform-specific native code, but Flutter’s federated plugins approach serves to make clear which platforms are supported, and even to allow third-parties to provide support for additional platforms. This means that if you hit on a plugin that could be supported on a platform you need but isn’t yet, you could readily implement and publish the support for the plugin on that platform.
“Across platforms” primarily means “across iOS 8+ and Android 4.1 Jellybean and later (API level 16+)”: As of July 2020, Flutter Web is in beta, Flutter macOS is in alpha, and Flutter Linux and Windows are pre-alpha. That said, Flutter’s stated aim is to become “a portable UI framework for all screens”, and it is making steady progress towards that aim. The Flutter team is making visible progress in public with clear goals and rationale. And I was impressed at the team’s response time to third-party contributions: I had several documentation PRs merged in the week I spent with Flutter.
Unlike React Native, which has often had an iOS-first bias, Flutter's bias is towards Android, or rather, the Android design language. The Material Design widgets are available and work consistently across platforms; iOS is not stinted there. But documentation and examples for the Cupertino widget set, which reproduces the iOS look and feel, are harder to come by, and I had trouble getting it to play nicely with Dark Mode. If you're going full-on branded and effectively building your own widget set, you're on even ground across both platforms, and it might even prove easier than using the first-party toolkit for those platforms.
I didn't worry about the Dart language, which is what you write Flutter apps in. It's thoroughly inoffensive, it has some nice touches, and the ecosystem features a solid static analysis, testing, and packaging story. If you're coming from Swift, Kotlin, or TypeScript, Dart will feel familiar, and you'll be productive very quickly. And if you're coming from Swift, you'll be pleased to find async/await support. The biggest tripping points I ran into were:
Flutter seems like the best tool to date for delivering the “same app” across all platforms. It’s a young tool with some rough edges, but I have yet to encounter a multi-platform tool that won’t cut you at times; Flutter’s gotchas seem mostly peripheral, and its rendering approach guarantees a consistency that’s hard to come by otherwise.
The team behind Flutter is responsive and has a clear vision for where the framework is going, including a public roadmap. The actual work happens in public on GitHub, not in a private tree, so it’s easy to follow.
Flutter is both easy to work with and easy to contribute to. The community and project are well-organized, and you probably won’t spend a lot of time flailing around for a library thanks to the Flutter Favorites program.
If you need someone to build an app for multiple platforms, give Big Nerd Ranch a ring.
The post Module Exports vs. Export Default: Why Not Both? appeared first on Big Nerd Ranch.
CommonJS modules export values through module.exports. The ES6 module system adds a new flavor of export on top of this: the default export.
A great example to illustrate this is this minimal module:
export const A = 'A'
export default A
At first glance, you might think that A has been exported twice, so you might want to remove one of these exports.
But it wasn't exported twice. In the ES6 module world, this rigs it up so you can either do import A from './a' and get the default export bound to A, or do import { A } from './a' and get the named export bound to A.
This is equivalent to the CommonJS:
const A = 'A'

module.exports = {
  A,
  default: A,
}
Exposing it both ways means that if there is also export const B = 'B', the module consumer can write import { A, B } from './a' rather than needing to do import A, { B } from './a', because they can just grab the named A export directly alongside the named B export.
(It's also a fun gotcha that you can't use assignment-style destructuring syntax on the default export, so that export default { A, B, C } can only be destructured in a two-step of import Stuff from './module'; const { A, B } = Stuff. Exporting A, B, and C directly as export { A, B, C }, in addition to as part of the default export, erases this mismatch between assignment destructuring and import syntax.)
If there’s a main function and some helpers, you might export the main function as the default export, but also export all the functions so you can reuse them or test them in isolation.
For example, a module exporting an Express handler as its default might also export the parseRequestJson and buildResponseJson de/serializer functions that translate from the JSON data transport format into model objects and back. This would allow directly testing these transformations, without having to work at a remove through only the Express handler.
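A sketch of that shape might look like this; the handler body, file name, and field names are invented for illustration:

// handlers/widgets.js
export const parseRequestJson = (json) => ({ name: json.name })

export const buildResponseJson = (widget) => ({ name: widget.name })

// The Express handler itself is the default export.
// (Assumes express.json() body parsing is installed upstream.)
const createWidget = async (req, res) => {
  const widget = parseRequestJson(req.body)
  // … persist the widget somewhere …
  res.json(buildResponseJson(widget))
}

export default createWidget

Tests can import { parseRequestJson, buildResponseJson } directly, while the app wires up only the default export.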
In the case where the module groups related functions with no clear primary one, like an API module for working with a customer resource ./customer, you might either omit a default export, or basically say "it's indeed a grab bag" and export it both ways:

export const find = async (options) => { /* … */ }
export const remove = async (id) => { /* … */ }

export default {
  find,
  remove,
}
If you similarly had APIs for working with ./product, this default export approach would simplify writing code like:

import customer from './resources/customer'
import product from './resources/product'

export const productsForCustomer = async (customerId) => {
  const buyer = await customer.find(customerId)
  const products = await Promise.all(
    buyer.orders
      .flatMap(order => order.productIds)
      .map(productId => product.find(productId))
  )
  return products
}
Effectively, all the functions are named with the expectation that they’ll be used through that default export – they expect to be “anchored” to an identifier that provides context (“this function is finding a customer”) for their name. (This sort of design is very common in Elm, as captured in the package design guideline that “Module names should not reappear in function names”. Their reasoning behind this applies equally in JavaScript, so it’s worth reading the two paragraphs.)
If you hadn’t provided a default export with all the functions from both resources, you’d instead have had to alias the imports:
import { find as findCustomer } from './resources/customer'
import { find as findProduct } from './resources/product'

export const productsForCustomer = async (customerId) => {
  const buyer = await findCustomer(customerId)
  const products = await Promise.all(
    buyer.orders
      .flatMap(order => order.productIds)
      .map(productId => findProduct(productId))
  )
  return products
}
The downsides of this are:
The upside is:
This could be fixed by the module author embedding the module name in each exported identifier, at the cost of the author having to repeat the module name in every blessed export.
The post When nullability lies: A cautionary tale appeared first on Big Nerd Ranch.
How does a non-null property wind up null in Kotlin? Let's find out!
Kotlin's nullable reference handling is great. It distinguishes String?, which is either null or a String, from String, which is always some String.
If you have:
data class Invitation(val placeName: String)
then you can trust that the property getter for placeName will never return null whenever you're working with an Invitation.
That class declares, "The placeName property is a String that can't be null."
Need more background? Mark Allison will walk you through a concrete example in The Frontier screencast “Kotlin Nullability”.
Kotlin works hard to ensure this. Any code where a null could wind up in a variable with a non-null type triggers an error.
If you try passing a null directly, the compiler will flag it as an error. The code:
Invitation(null)
yields the compiler error:
error: null can not be a value of a non-null type String
Invitation(null)
^
Kotlin also checks for null in the property setter. This lets it catch less obvious cases, like ones caused by Java not distinguishing null from not-null. You can try to sneak a null in by laundering it through the Java interop:
val mostLikelyNull = System.getenv("not actually an environment variable")
Invitation(mostLikelyNull)
Your sneaky code will compile fine, but when run, it triggers an exception:
java.lang.IllegalStateException: mostLikelyNull must not be null
Kotlin’s promise: No more defensive null-checks. No more lurking null pointer exceptions. It’s beautiful.
If you try to write a null into a non-null property, Kotlin will shoot you down.
That’s the theory. But I ran into a case where, all that aside, Kotlin’s “not null” guarantee wound up being violated in practice.
I’ve got a Room entity like so:
@Entity(tableName = "invitation")
data class Invitation(
@SerializedName("name")
@ColumnInfo(name = "device_name")
val placeName: String
)
Room sees that placeName is a not-null String and not a maybe-null String?, and it generates a schema where the device_name column has a NOT NULL constraint.
But somehow, I wound up with a runtime exception where that constraint was violated:
android.database.sqlite.SQLiteConstraintException: NOT NULL constraint failed: invitation.device_name (code 1299)
My app asked Room to save an Invitation with a null placeName. Somehow, it got around all of Kotlin's defenses!
It got worse: The exception left the database locked. The UI stopped updating. Database queries started piling up. Logcat showed reams of messages like:
W/SQLiteConnectionPool: The connection pool for database '/data/user/0/some.app.id.here/databases/database' has been unable to grant a connection to thread 20598 (RxCachedThreadScheduler-27) with flags 0x1 for 120.01101 seconds.
Connections: 1 active, 0 idle, 0 available.
Requests in progress:
executeForCursorWindow started 127006ms ago - running, sql="SELECT * FROM invitation"
That request had started over two minutes ago!
In the end, Android had mercy, and put the poor app out of its misery:
--------- beginning of crash
E/AndroidRuntime: FATAL EXCEPTION: pool-2-thread-2
android.database.sqlite.SQLiteDatabaseLockedException: database is locked (code 5): retrycount exceeded
The exception that shouldn’t have been possible had left the database locked, and ultimately, the app had crashed.
This data was read in from an API call. So the bogus data probably came from there. And, indeed, the corresponding field proved to be missing from the API response.
But why did this not error out at that point? How did an Invitation ever get created with a null placeName property in the first place? Kotlin told us that would be impossible, but exception logging doesn't lie.
Where did things go wrong?
The culprit turns out to be one more actor in this drama: Retrofit2's little helper, Gson.
Gson makes slurping in JSON as objects painless.
Gson’s “Finer Points with Objects” says:
This implementation handles nulls correctly.
- While serializing, a null field is omitted from the output.
- While deserializing, a missing entry in JSON results in setting the corresponding field in the object to its default value: null for object types, zero for numeric types, and false for booleans.
If you slurp in {}, Gson will apparently poke a null value into your not-null field. How?
Well, let’s step back and ask: How did this work in the first place, when we didn’t encounter bogus data? Gson’s docs on “Writing an Instance Creator” say:
While deserializing an Object, Gson needs to create a default instance of the class. Well-behaved classes that are meant for serialization and deserialization should have a no-argument constructor.
- Doesn’t matter whether public or private
But this data class doesn’t have a no-args constructor. It’s not “well-behaved.” And yet, it was working fine up till now.
Gson expects either a no-args constructor (which our data class won’t provide) or a registered deserializer. This scenario has neither. How did this ever work in the first place? What’s handling the deserialization for us?
Nosing around in the debugger shows Gson winds up using a ReflectiveTypeAdapterFactory, which relies on its ConstructorConstructor.
The factory ultimately falls back on sneaky, evil, unsafe allocation mechanisms rather than telling the developer to fix their code:
// finally try unsafe
return newUnsafeAllocator(type, rawType);
And UnsafeAllocator is documented to "[d]o sneaky things to allocate objects without invoking their constructors." It has strategies to exploit Sun Java and the Dalvik VM pre- and post-Gingerbread. These let it build an object without providing any constructor args. On post-Gingerbread Android, it boils down to calling the (private, undocumented) method ObjectInputStream.newInstance().
There’s the answer: Gson handles classes that are poorly behaved with regard to deserialization by doing a bad, bad thing. It sneaks behind their back and creates them using what amounts to a backdoor no-args constructor. All their fields start out as null.
Then, if it's reading valid JSON, Gson makes it right: all the fields that need populating get populated. When it all works, no one's the wiser. And when it didn't work in Java, before widespread reliance on nullability annotations, it was probably still fine – null inhabits all types, and it's not too terribly surprising when another one sneaks in.
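You can watch the whole failure in miniature, with no Room or Retrofit involved. A hedged Kotlin sketch (the test harness is mine, not the original app's code):

import com.google.gson.Gson

data class Invitation(val placeName: String)

fun main() {
    // "{}" has no entry for placeName, so Gson's unsafe allocation path
    // leaves the backing field null despite the non-null Kotlin type.
    val invitation = Gson().fromJson("{}", Invitation::class.java)
    println(invitation.placeName) // prints "null"; nothing blows up until someone relies on the promise
}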
For a Kotlin programmer, this is bad news.
Kotlin doesn’t check for nulls on read, only on write. Gson sneaking around the expected ways of building your object can leave a bomb waiting to go off in your codebase: An impossible scenario – a property declared as never null winding up null – happens, and the language ergonomics push back on trying to address that.
To work around this, you write code that looks unnecessary: you null-check a property declared as never null. The compiler warns that the is-null branch will never be taken. You'll probably really want to listen to that warning, but if you do, you reintroduce a crasher. Paper that over with some comments, and maybe reduce the urge to "fix" it by tossing on a @Suppress("SENSELESS_COMPARISON").
The compiler warns, "Condition 'invitation.placeName != null' is always 'true'". But luckily, it doesn't optimize the branch away, because, as my debugger shows, it ain't always true. Thanks, Gson + under-specced backend!
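The workaround ends up looking something like this sketch (the function name is illustrative, not the original app's code):

@Suppress("SENSELESS_COMPARISON") // Gson can hand back a null despite the non-null type
fun isActuallySane(invitation: Invitation): Boolean {
    // The compiler says this comparison is always true; the debugger disagrees.
    return invitation.placeName != null
}

Callers can then refuse to hand an impossible Invitation to Room instead of letting the NOT NULL constraint blow up and lock the database.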
The fix is to make sure any classes you hand to Gson for deserialization either have a no-args constructor or have all their fields marked nullable. Don’t trust data from outside your app!
Use separate Entity classes with Room. Sanity-check your data after parsing, and handle it with grace when insanity comes knocking at the door.
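One hedged sketch of that split: a forgiving, all-nullable DTO for Gson, a strict entity for Room, and validation at the boundary (the names are illustrative):

import com.google.gson.annotations.SerializedName

// What the API may or may not send: every field nullable and defaulted,
// so Kotlin generates a no-args constructor that Gson can use normally.
data class InvitationDto(
    @SerializedName("name")
    val placeName: String? = null
)

// What Room and the rest of the app get to rely on: genuinely non-null.
data class InvitationEntity(
    val placeName: String
)

// The boundary where "don't trust data from outside your app" gets enforced.
fun InvitationDto.toEntityOrNull(): InvitationEntity? =
    placeName?.let { InvitationEntity(placeName = it) }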
Would trading out Gson for Moshi have avoided this issue?
It turns out, it wouldn’t. But Moshi’s docs both call out the issue and suggest coping strategies. You’ll find this warning and advice in the README section “Default Values & Constructors”:
If the class doesn’t have a no-arguments constructor, Moshi can’t assign the field’s default value, even if it’s specified in the field declaration. Instead, the field’s default is always 0 for numbers, false for booleans, and null for references. […]
This is surprising and is a potential source of bugs! For this reason consider defining a no-arguments constructor in classes that you use with Moshi, using @SuppressWarnings(“unused”) to prevent it from being inadvertently deleted later […]. (emphasis added)
The post React Native Is Native appeared first on Big Nerd Ranch.
React Native apps are native apps. It's a heck of a coup they've pulled off, and while I have my concerns around adopting the technology, "Is it native?" isn't one of them.
I suspect whether you agree with me hinges on what we each understand by “native”. Here’s what I have in mind:
Overall: Capable of achieving the same ends as any app developed using the platform’s preferred tooling by fundamentally the same mechanisms.
I claim React Native meets that bar.
I’ve spent most of my years as a professional programmer working on Mac & iOS apps. From my Apple-native point of view, React Native is a very elaborate way to marshal UIViews and other UIKit mechanisms towards the usual UIKit ends:
Well, about that one more language. Let’s talk about animation jank and asynchrony.
What is “jank”? It’s jargon for what happens when it’s time for something to show up on screen, but your app can’t render the needed pixels fast enough to show that something. As Shawn Maust put it back in 2015 in “What the Jank?”:
“Jank” is any stuttering or choppiness that a user experiences when there is
motion on the screen—like during scrolling, transitions, or animations.
The difference in language drives to something that may seem less than native at first glance. You see, there’s a context switch between UIKit-land and React Native JavaScript-action-handler-land, and at a high enough call rate – like, say, animation handlers that are supposed to run at the frame rate – the time taken in data marshaling and context switching can become noticeable.
Native apps aren’t immune from animation jank. It feels like there’s a WWDC session or three every year on how not to stutter when you scroll. But the overhead inherent in the technical mechanism eats some of your time budget, which means you get to sweep less inefficiency in your app code under Moore’s rug.
Native apps also aren't immune from blocking rendering entirely. Do a bulk import into Core Data on the main thread, parse a sufficiently large (or malicious) XML or JSON document on the main thread, or run a whole network request on the main thread, and the system watchdog will kill your app, leaving behind a death note of "8badf00d". React Native's context switch automatically enforces the best practice here: its developers naturally fall into the "pit of success" of aggressively pushing work off the main thread.
How do you deal with the time taken by a function call? You do less work, or you do work on the other side of the bridge.
Or you surface that gap, that asynchrony, in your programming model with:
Apple’s frameworks are rife with these mechanisms. Your standard IBAction-to-URLSession-to-spinner-to-view-update flow has a slow as a dog HTTP call in the middle. React Native’s IBAction-to-JSCore-to-view-update flow has a tiny little RPC bridge in the middle that often runs fast enough that you can pretend it’s synchronous. By the end of 2018, you may not even have to pretend – React Native will directly support synchronous cross-language calls where that’s advantageous.
React Native apps with their action handlers in JavaScript are no less native than an iOS app with its action handlers on a server on the other side of an HTTP API.
If you’ve worked on the common “all the brains are in our serverside API” flavor of iOS app, this should sound familiar. It should sound doubly familiar if that serverside API happens to be implemented in Node.js.
And, indeed, running the same language both serverside and clientside makes it a lot easier to change up which side of the pipe an operation happens on. (Such are the joys of isomorphic code, and it’s a small reason some are excited about Swift on the Server.)
React Native uses the same underlying mechanisms and benefits as much from Apple’s work on UIKit as does any other iOS app. React Native apps are native – perhaps even more native than many “iOS app as Web API frontend” apps!
The post Producing a CircleCI Test Summary with Fastlane appeared first on Big Nerd Ranch.
The heart of Continuous Integration is running tests.
Whenever a test fails, you want to know why ASAP so you can correct it.
Whenever a CI build fails, you want to see that failing test and how it failed.
CircleCI’s Test Summary feature puts this info front-and-center so you can
respond directly to the test failure without anything getting in your way.
The trick is to feed CircleCI your test info the way it expects.
The build log might be fine to start.
You expand the failing step, scroll to the end of the page, and then scroll up till you hit the test failure.
This is not too bad. At first.
But with a big enough project, the build and test logs grow too long to view in-place on the web page.
Then you find yourself downloading the log file first.
Sometimes the failing test isn’t really that near the end of the file.
Then you’re fumbling around trying to find it.
Across a lot of developers on a long project,
this time and friction adds up.
If you’re building an iOS app, and you copy-paste the
Example Configuration for Using Fastlane on CircleCI,
you should luck into something that works.
But you’ll want to better understand what the Test Summary feature
is looking for if:
CircleCI's Collecting Test Metadata doc calls out one big thing: your test output needs to be in a supported format, like JUnit XML.
The store_test_results step reference calls out another: the subdirectory name is used to identify the test suite.
There's one more requirement that I haven't seen documented anywhere, though: the report file must have an .xml extension.
The rest of the filename doesn't seem to matter for the test summary,
but if you have the wrong path extension,
you won't see any test summary.
You’ll wind up with a directory layout like:
/Users/distiller/project/
└── fastlane
└── test_output
└── xctest
└── junit.xml
3 directories, 1 file
This ticks all the boxes:
A report file with an .xml extension: junit.xml
A subdirectory naming the test suite: xctest
A single parent directory to point store_test_results at: test_output
(Fastlane only produces a single test report,
so the nesting of report-in-folder-in-folder admittedly looks a little silly.)
Scan provides a lot of config knobs.
You can view a table of the full list and their default values by running fastlane action scan.
We need to arrange three things:
Conveniently enough, Scan has three config settings, one for each of those
needs.
Scan also happens to have three different ways
to set those three options:
scan()
In your Fastfile, you can set them using keyword arguments to the scan method call:
scan(
# … other arguments …
output_types: 'junit',
output_files: 'junit.xml',
output_directory: './fastlane/test_output/xctest')
If you're invoking fastlane directly, you can set them using CLI options:

fastlane scan \
  --output_directory="./fastlane/test_output/xctest" \
  --output_types="junit" \
  --output_files="junit.xml"
Because Ruby is a true descendant of Perl, TMTOWTDI applies, so you could also configure Scan using environment variables:

env \
  SCAN_OUTPUT_DIRECTORY=./fastlane/test_output/xctest \
  SCAN_OUTPUT_TYPES=junit \
  SCAN_OUTPUT_FILES=junit.xml \
  fastlane scan
(You could also set those environment variables in the environment stanza in your CircleCI config. Six one way, half-dozen the other.)
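As a sketch, that stanza might look roughly like this; the job name and Xcode image version are placeholders for whatever your config already uses:

jobs:
  build-and-test:
    macos:
      xcode: "11.3.0"
    environment:
      SCAN_OUTPUT_DIRECTORY: ./fastlane/test_output/xctest
      SCAN_OUTPUT_TYPES: junit
      SCAN_OUTPUT_FILES: junit.xml
    steps:
      - checkout
      - run: bundle exec fastlane scan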
Now you have Fastlane Scan writing its test report in JUnit format into a *.xml file under a suggestively-named subdirectory.
To get CircleCI to actually process this carefully arranged data, you'll need to tell the store_test_results step to snarf everything at and under fastlane/test_output.
That's right: not just the xctest subdirectory that holds the test report XML, but its parent directory.
Add this step to the pipeline that runs scan:
- store_test_results:
path: "./fastlane/test_output"
At some point, you’ll probably want to be able to look at the test report
yourself, as well as the overall build logs.
You can send both of those on up to CircleCI as build artifacts using a couple of store_artifacts steps:
- store_artifacts:
path: "./fastlane/test_output"
destination: scan-test-output
- store_artifacts:
path: ~/Library/Logs/scan
destination: scan-logs
You’re not limited to just one artifact or just one test output.
In fact, handling multiple kinds of test output is precisely why there’s the
folder-in-folder nesting.
Say you wanted to have CircleCI call out SwiftLint nits. You could drop this snippet into your jobs list:
lint:
docker:
- image: dantoml/swiftlint:latest
steps:
- checkout
- run:
name: Run SwiftLint
command: |
mkdir -p ./test_output/swiftlint
swiftlint lint --strict --reporter junit | tee ./test_output/swiftlint/junit.xml
- store_test_results:
path: "./test_output"
- store_artifacts:
path: "./test_output"
The key links in the chain here are:
A parent directory for all test output: ./test_output/
A subdirectory named for the test suite: ./test_output/swiftlint/
A report file with an .xml extension: ./test_output/swiftlint/junit.xml
Pointing store_test_results at that "ALL the tests" directory: path: "./test_output/"
Any output you can massage into meeting those requirements,
you can cadge CircleCI into calling out in your Test Summary.
There you have it: have Scan write a JUnit-format report into a per-suite subdirectory, then point CircleCI's store_test_results build step at the parent directory.
The post Growing a Code Review Culture appeared first on Big Nerd Ranch.
Big Nerd Ranch esteems code review. We've seen it pay off time and again. It is core to our workflow and process. If you want to experience the benefits in your team, here's what that means in practice for everyone involved.
Leaders foster a culture of review as top priority. There are good reasons for this, as elaborated by Glen D. Sanford in light of their time at Twitter. Those reasons can be summarized as:
Authors need to create PRs that are intended to be reviewed.
In practice, this means:
It can be instructive to compare these principles to the SOLID principles. As with the structure of code, so with the structure of changes to that code.
Reviewers need to take the responsibility seriously. Review is an opportunity to have a lasting effect on both code and team.
In practice, this means:
If a team feels that reviews are rubber-stamps en route to landing changes, there will be trouble. Reviews will be reduced to unwanted busy-work.
If a team is planning work without allowing time for code review, there will be trouble. Reviews will be rushed. They might convert into rubber-stamps as a way to leave breathing room for other planned work.
“Done” includes a code review. If people feel there isn’t time to review work done, then they will be landing half-baked work. Taking on less work helps here. Kanban’s limits on work-in-progress can effectively require reviews be completed to free up space for further development.
(If PRs are piling up, you are headed for a headache of merge conflicts that everyone involved will have forgotten how to resolve, never mind review. That is a warning sign in and of itself, and it can emerge with or without a review culture.)
It’s also important for people to have realistic expectations about the time review can take. Worked three days on a PR? Expect it to take at least three days to review.
Or better, don’t work for three days before submitting something to be reviewed! The adjustment in perspective from “a PR finishes everything about something” to “a PR pushes the project to a slightly better state” can take time, but it also can unlock a lot of process improvements from planning to estimating to development and testing to, yes, reviewing. Issuing a PR each day keeps the chaos at bay.
Interested in nurturing a code review culture in your organization? Reach out to Big Nerd Ranch today to talk about how we can work with your team to raise the bar.
Thanks to my colleagues who precipitated this post and contributed content and feedback: Josh Justice, Dan Ra, and Evan McCoy.
The post Write Better Code Using Kotlin's Require, Check and Assert appeared first on Big Nerd Ranch.
Good code makes its context plain. At a glance, you can see what it needs to succeed, and what happens when it does. Mastering Kotlin's common language for codifying your functions' assumptions and promises will help you write code you can change with confidence. You will catch any bugs sooner, and you will spend less time debugging.
Kotlin has three functions for capturing execution context:
require(Boolean) throws IllegalArgumentException when its argument is false. Use it to test function arguments.
check(Boolean) throws IllegalStateException when its argument is false. Use it to test object state.
assert(Boolean) throws AssertionError when its argument is false (but only if JVM assertions are enabled with -ea). Use it to clarify outcomes and check your work.
These functions give Kotlin programmers a common language. If you do not use these functions, you will probably reinvent them.
Bare exceptions and errors are little help. So each function has another variation. That variation takes a lazy message closure as its final argument. Use that message to jumpstart debugging by reporting relevant values. You will see these variations used soon.
These three functions look very similar, but each has its specific purpose. Examples of using each alone, then all together, will make that clear.
A function makes assumptions about:
Direct Inputs: These are function arguments. Maybe you need an Int to be non-negative. Maybe you need a File to be readable. Before you begin working with your arguments, check that they are valid with require.
Indirect Inputs: These are often object state. Sometimes certain functions only make sense to call if other functions have been called already. A socket needs to connect to a host before it makes sense to read from or write to it. You check these conditions using check.
To check assumptions about function arguments, use require:
fun activate(index: Int) {
// Argument Assumption: |index| is a non-negative integer.
require(index >= 0) { "Int |index| must be non-negative. index=$index" }
…
}
fun load(from: File): String {
// Argument Assumption: |from| is a readable file.
require(from.canRead()) { "File |from| must be readable. file=$from canRead=${from.canRead()}" }
…
}
To check assumptions about things that are not function arguments, use check:
class Socket {
var isConnected: Boolean = false
var connectedHost: Host? = null
fun connect(to: Host, result: (isConnected: Boolean) -> Unit) {
// Starting State Assumption: |this| is not already connected.
check(!isConnected) {
"|Socket.connect| cannot be called after a successful call to |Socket.connect|. "+
"socket=$this to=$to connectedHost=$connectedHost"
}
…
}
fun write(blocks: Blocks): Int {
// Starting State Assumption: |this| is connected.
check(isConnected) {
"|Socket.connect| must succeed before |socket.write| can be called. "+
"socket=$this blocks=$blocks"
}
…
}
}
We write code to do something. When that something is to return a value, our promise is the return type. But return types often do not tell the whole story. And when that something is to change other state, our promise is secret.
assert verifies your function did its job:
fun activate(index: Int) {
…
// Ending State Promise: The pump at |index| is now active.
assert(pump[index].isActive) { "Failed to activate pump index=$index" }
}
Kotlin gives us tools to write clear code. Clear code says what it knows. It does not keep it secret.
You often use require, check and assert in the same places in a function:
fun anyFunction(arg: Arg): Result {
// Starting State Assumption: XXX
check(internalStateIsSane) {
"Say what you expected. Log |this| and |args| as well as the failing internal state."
}
// Argument Assumption: XXX
require(arg.isSane) {
"Say what you expected. Log |arg| and the values used in the failed check."
}
…
// Ending State Promise: XXX
assert(result.isSane) {
"Say what you expected. Log |result| and the failed check's output."
}
return result
}
As shown, the pattern is:
Before anything else, check the starting state with check. If any of these checks fails, the arguments do not even matter – the function should never have been called!
Next, check the function arguments with require. If an argument turns out to be invalid, it is best to catch that before changing anything or doing any other work, since the function call will fail anyway.
In the middle, do the actual work of your function.
Lastly, assert the function did what it was supposed to do. Sometimes that means checking that some objects have a new state. Sometimes that means checking that the return value is reasonable.
In all cases, write your failure message to jumpstart debugging. If your first question when a check fails will be, "What was the value of something?" then the message should answer that question.
Checking things can grow tiresome. The way out is more precise types: as Yaron Minsky says, "Make illegal states unrepresentable." For example, a require(intValue >= 0) check can be eliminated by using a type whose values can only represent non-negative integers. But that is a topic for another day.
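As a small taste of that direction, here is a sketch using a Kotlin value class (the NonNegativeInt name is invented for illustration):

@JvmInline
value class NonNegativeInt(val value: Int) {
    init {
        require(value >= 0) { "Int |value| must be non-negative. value=$value" }
    }
}

// Callers now cannot even construct an illegal argument:
fun activate(index: NonNegativeInt) { /* … */ }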
Curious to better know Kotlin? Stay updated with our Kotlin Programming books & bootcamps. Our two-day Kotlin Essentials course delivers in spades, while our Android Essentials with Kotlin course will set you on the right path for Android development.
The post Why Associated Type Requirements Become Generic Constraints appeared first on Big Nerd Ranch.
Objective-C had protocols. They name a set of messages. For example, the UITableViewDataSource protocol has messages for asking the number of sections and the number of rows in a section.
Swift has protocols. They too name a set of messages.
But Swift protocols can also have associated types. Those types play a role in the protocol. They are placeholders for types. When you implement a protocol, you get to fill in those placeholders.
Associated types are a powerful tool. They make protocols easier to implement.
For example, Swift's Equatable protocol has a function to ask if a value is equal to another value:
static func ==(lhs: Self, rhs: Self) -> Bool
This function uses the Self type. The Self type is an associated type. It is always filled in with the name of the type that implements a protocol. (Not convinced Self is an associated type? Jump to the end of the article, then come back.) So if you have a type struct Name { let value: String }, and you add an extension Name: Equatable {}, then Equatable.Self in that case is Name, and you will write a function:
Self
is written as Name
here, because you are implementing Equatable
for the type Name
.
Equatable
uses the associated Self
type to limit the ==
function to only values of the same type.
NSObjectProtocol also has a method isEqual(_:). But because it is an Objective-C protocol, it cannot use a Self type. Instead, its equality test is declared as:
func isEqual(_ object: Any?) -> Bool
Because an Objective-C protocol cannot restrict the argument to an associated type, every implementation of the protocol suffers. It is common to begin an implementation by checking that the argument is the same type as the receiver:
func isEqual(_ object: Any?) -> Bool {
guard let other = object as? Name
else { return false }
// Now you can actually check equality.
Every implementation of isEqual(_:) has to check this. It must check each and every time it is called.
Implementers of Equatable never have to check this. It is guaranteed once and for all, for every implementation, through the Self associated type.
Protocol ‘SomeProtocol’ can only be used as a generic constraint because it has Self or associated type requirements.
Associated types are a powerful tool. That power comes at a cost:
error: protocol 'Equatable' can only be used as a generic constraint because it has Self or associated type requirements
Code that uses a protocol that relies on associated types pays the price. Such code must be written using generic types.
Generic types are also placeholders. When you call a function that uses generic types, you get to fill in those placeholders.
When you look at generic types versus associated types, the relationship between caller and implementer flips:
Consider a function checkEquals(left:right:). This does nothing but defer to Equatable's ==:
func checkEquals(
left: Equatable,
right: Equatable
) -> Bool {
return left == right
}
The Swift compiler rejects this:
error: repl.swift:2:7: error: protocol 'Equatable' can only be used as a generic constraint because it has Self or associated type requirements
left: Equatable,
^
error: repl.swift:3:8: error: protocol 'Equatable' can only be used as a generic constraint because it has Self or associated type requirements
right: Equatable
^
What if Swift allowed this? Let us do an experiment.
Pretend you have two different Equatable types, Name and Age.
Then you could write code like this:
let name = Name(value: "")
let age = Age(value: 0)
let isEquals = checkEquals(left: name, right: age)
This is nonsense! There are two ways to see this:
Whose == would checkEquals call in the last line? Name's? Age's? Neither applies. These are only ==(Name, Name) and ==(Age, Age), because Equatable declares only ==(Self, Self). To call either Name's or Age's == would break type safety.
An Equatable type is not a type alone. It has a relationship to another type, Self. If you write checkEquals(left: Equatable, right: Equatable), you only talk about Equatable. Its associated Self type is ignored. You cannot talk about "Equatable" alone. You must talk about "Equatable where Self is (some type)".
looks like it will work. It wants to compare an Equatable
with an Equatable
. But Equatable
is an incomplete type. It is “equatable for some type”.
checkEquals(left: Equatable, right: Equatable)
says that left
is “equatable for some type” and right
is “equatable for some type”. Nothing stops left
from being “equatable for some type” and right
from being “equatable for some other type”. Nothing makes left
and right
both be “equatable for the same type”.
Equatable.==
needs its left
and right
to be the same type. This makes checkEquals
not work.
Discover why a code audit is essential to your application’s success!
checkEquals
cannot know what “some type” should be in “Equatable
where Self
is (some type)”. Instead, it must handle every group of “Equatable
and Self
type”: It must be “checkEquals for all types T
, where T
is ‘Equatable
and its associated types’”.
You write this in code like so:
func checkEquals<T: Equatable>(
left: T,
right: T
) -> Bool {
return left == right
}
Now, every type T that is an Equatable type – this includes its associated Self type – has its own checkEquals function. Instead of having to write checkEquals(left: Name, right: Name) and checkEquals(left: Age, right: Age), you use Swift's generic types to write a "recipe" for making those types. You have walked backwards into the "Extract Generic Function" refactoring.
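For instance, a usage sketch (the literal values are made up for illustration):

let sameName = checkEquals(left: Name(value: "Ada"), right: Name(value: "Ada")) // T is Name
let sameAge = checkEquals(left: Age(value: 36), right: Age(value: 36))          // T is Age
// checkEquals(left: Name(value: "Ada"), right: Age(value: 36)) // rejected: no single T fits both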
Writing checkEquals using NSObjectProtocol instead of Equatable does not need generics:
import Foundation
func checkEquals(
left: NSObjectProtocol,
right: NSObjectProtocol
) -> Bool {
return left.isEqual(right)
}
This is simple to write. It also allows us to ask stupid questions:
let isEqual = checkEquals(left: name, right: age)
Is a name even comparable with an age? No. So isEqual evaluates to false: Name.isEqual(_:) will see that the object is not a kind of Name, and will return false. But unlike Equatable.==, every single implementation of isEqual(_:) must be written to handle such silly questions.
Associated types make Swift’s protocols more powerful than Objective-C’s.
An Objective-C protocol captures the relationship between an object and its callers. The callers can send it messages in the protocol; the implementer promises to implement those messages.
A Swift protocol can also capture the relationship between one type and several associated types. The Equatable protocol relates a type to itself through Self. The SetAlgebra protocol relates its implementer to an associated Element type.
This power can simplify implementations of the protocol. To see this, you contrasted implementing Equatable's == and NSObjectProtocol's isEqual(_:).
This power can complicate code using the protocol. To see this, you contrasted calling Equatable's == and NSObjectProtocol's isEqual(_:).
Expressive power can complicate. When you write a protocol, you must trade the value of what you can say using associated types against the cost of dealing with them.
I hope this article helps you evaluate the protocols and APIs you create and consume. If you found this helpful, you should check out our Advanced Swift bootcamp.
Self acts like an associated type. Unlike other associated types, you do not get to choose the type associated with Self. Self is automatically associated with the type implementing the protocol.
But the error message talks about a protocol that “has Self or associated type requirements”. This makes it sound like they are different things.
This is hair-splitting. But a hair in the wrong place distracts. I went to find an answer. I have found it for you in the source code for the abstract syntax tree used by the Swift compiler. A doc comment on AssociatedTypeDecl says:
Every protocol has an implicitly-created associated type ‘Self’ that
describes a type that conforms to the protocol.
Case closed: Self is an associated type.
The post Throws Helps Readability appeared first on Big Nerd Ranch.
Swift's error-related syntax calls attention to possible errors through try and throws. The do/catch syntax clearly separates the happy path (no errors) from the sad path (errors):
func exampleSyncUsageOfThrows() -> Bool {
do {
/* happy path */
let cookie = try ezbake()
eat(cookie)
return true
} catch {
/* sad path */
return false
}
}
Because throws is "viral", you're forced to address it one way or another, even if it's by deciding to flip your lid when you hit an error by using the exploding try!.
Swift’s error-related syntax is great when every line of code executes one after another, synchronously. But it all goes to heck when you want to pause between steps to wait for an external event, like a timer finishing or a web server getting back to you with a response, or anything else happening asynchronously.
Let’s try that example again, only scheduling the cookie-baking for later, and then waiting for the cookie to cool before scarfing it:
func exampleAsyncDoesNotPlayNiceWithThrows(completion hadDinner: @escaping (Bool) -> Void) {
ezbakeTomorrow { cookie, error in
// hope you don't forget to check for an error first!
// also hope you like optional unwrapping
guard error == nil, let cookie = cookie else {
return hadDinner(false)
}
wait(tillCool: cookie) { coolCookie, error in
guard error == nil, let coolCookie = coolCookie else {
// dog snarfed cookie?
return hadDinner(false)
}
eat(coolCookie)
hadDinner(true)
}
}
}
This approach of calling a completion closure with parameters for both the desired result and the failure explanation all marked optional is common across Cocoa APIs as well as third-party code. Correctly unpacking those arguments relies heavily on convention. That is to say, it relies heavily on you being very careful not to shoot yourself in the foot.
Because both the success value (cookie
) and the failure value (error
) might not be present, both end up being optionals. That means you end up with four cases to consider, of which two should probably never happen:
cookie
but no error
. This is unambiguous.error
but no cookie
. This is similarly unambiguous.error
AND cookie
. If you’re following classic Cocoa style, this gets lumped in with the success case, so that a successful run could, before ARC, leave error
pointing at fabulously uninitialized data or scratch errors that didn’t happen. (As you might imagine, that convention gets messed up pretty often.)error
nor cookie
. This is probably a bug in whatever’s giving you this output. But, alas, you still have to deal with it as a possibility.Result
is a popular enumeration for cleaning this up. It looks something like:
enum Result<Value> {
case success(Value)
case failure(Error)
}
This addresses all the weirdness with the conventional approach:
case
exhaustiveness ensures the error is on your radar.Just as do
/catch
lets you clearly separate handling a successful result from a failure, so does Result through switch
/case
:
func exampleAsyncLikesResult(completion hadDinner: @escaping (Bool) -> Void) {
ezbakeTomorrow { result in
switch result {
case let .success(cookie):
wait(tillCool: cookie) { result in
switch result {
// look ma, no optionals!
case let .success(coolCookie):
eat(coolCookie)
hadDinner(true)
case let .failure(_):
hadDinner(false)
}
}
case let .failure(_):
hadDinner(false)
}
}
}
Result achieves the aims of do/catch/throw for async code. But it can also be used for sync code. This leads to competition between Result and throws for the synchronous case:
func exampleSyncUsageofResult() {
return
ezbake()
.map({ eat($0) })
.isSuccess
}
That’s…not so pretty. It would get even uglier if there was a sequence of possibly failing steps:
// this mess…
func exampleUglierSyncResult() {
return
open("some file")
.flatMap({ write("some text", to: $0) })
.map({ print("success!"); return $0 })
.flatMap({ close($0) })
.isSuccess
}
// …translates directly to this less-mess
func exampleSyncIsLessUglyWithTry() {
do {
let file = try open("some file")
let stillAFile = try write("some text", to: file)
print("success!")
try close(stillAFile)
return true
} catch {
return false
}
}
It's kind of easy to lose the flow in all that syntax, plus it sounds like you have a funky verbal tic with the repeated map and flatMap. You also have to keep deciding between (and distracting your reader with the distinction between) map and flatMap.
That suggests a rule of thumb: stick with throws for synchronous code. Applying that even to mixed sync (within the body of completion callbacks) and async (did I mention completion callbacks?) code lets you play to the strengths of both throws and Result.
First, here's a mechanical translation of the earlier exampleAsyncLikesResult function:
func exampleMechanicallyBridgingBetweenAsyncAndSync(completion hadDinner: @escaping (Bool) -> Void) {
ezbakeTomorrow { result in
do {
let cookie = try result.unwrap()
wait(tillCool: cookie) { result in
do {
let coolCookie = try result.unwrap()
eat(coolCookie)
hadDinner(true)
} catch {
hadDinner(false)
}
}
} catch {
hadDinner(false)
}
}
}
Each completion accepts a Result, but in working with it, it immediately returns to using the Swift try/throw/do/catch syntax.
try has a try? variant that lets you clean this up even more nicely. This is more like the code you'd likely write in the first place when using this style:
func exampleNicerBridgingBetweenAsyncAndSync(completion hadDinner: @escaping (Bool) -> Void) {
ezbakeTomorrow { result in
guard let cookie = try? result.unwrap()
else { return hadDinner(false) }
wait(tillCool: cookie) { result in
guard let coolCookie = try? result.unwrap()
else { return hadDinner(false) }
eat(coolCookie)
hadDinner(true)
}
}
}
This relies on some simple helper functions to bridge between Result and throws.
Result.unwrap() throws goes from Result to throws: The caller of an async method that delivers a Result can then use result.unwrap() to bridge back from Result into something you can try and catch. unwrap() is a throwing function that throws if it's .failure and otherwise just returns its .success value. We saw plenty of examples earlier.
static Result.of(trying:) goes from throws to Result: The implementation of async methods can use Result.of(trying:) to wrap up the result of running a throwing closure as a Result; this helper runs its throwing closure and stuffs any caught error in .failure, and otherwise wraps the result up in .success.
This is used to implement async functions delivering a result. Since the running example delivered a Boolean, you haven’t seen this used yet. Here’s a small example:
func youComplete(me completion: @escaping (Result<MissingPiece>) -> Void) {
doSomethingAsync { boxOfPieces: Result<PieceBox> in
let result = Result.of {
let box = try boxOfPieces.unwrap()
let piece = try findMissingPiece(in: box)
return piece
}
completion(result)
}
}
What these functions are called varies across Result implementations (I'm eagerly awaiting the Swift version of what Promises/A+ delivered for JavaScript), but whatever your Result calls them, use them! (And if they aren't there, you can readily write your own.)
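If your Result library lacks them, a minimal sketch against this article's own Result<Value> enum might look like the following (the method names are chosen to match the usage above):

extension Result {
    // Result -> throws: hand back the value, or throw the wrapped error.
    func unwrap() throws -> Value {
        switch self {
        case let .success(value): return value
        case let .failure(error): throw error
        }
    }

    // throws -> Result: run a throwing closure and box up however it went.
    static func of(trying body: () throws -> Value) -> Result<Value> {
        do { return .success(try body()) }
        catch { return .failure(error) }
    }
}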
For a concrete example of implementing these, as well as the variation in names, check out antitypical/Result’s versions:
Result.unwrap() throws: Result.dematerialize() throws
static Result.of(trying:): Result.init(attempt:)
So that's the bottom line:
Use Result as your completion callback argument.
Use do/catch to work with potential errors.