Jeremy W. Sherman - Big Nerd Ranch - Wed, 16 Nov 2022

TalkBack Crash Course
Fri, 18 Feb 2022 - https://bignerdranch.com/blog/talkback-crash-course/

The post TalkBack Crash Course appeared first on Big Nerd Ranch.


TalkBack is Google’s screen reader for Android devices. It’s hard to understand Android’s accessibility issues without experiencing them yourself. Take 5 minutes to read this article, download this cheatsheet, and then go explore your app with TalkBack for yourself. You might be surprised by what you find.

What is a screen reader?

What it sounds like: It reads out what’s on the screen.

It mediates your interaction with the screen

When a screen reader is active, it intercepts your touches and responds to them. It acts as a go-between, explaining what you’re pointing at. It also provides a gesture language for interacting with the thing you last pointed at. There are also TalkBack gestures for controlling the device in general, like triggering the Back button.

It tells you what you touch

You can touch anywhere on the screen and listen to what the screen reader says. If you’ve touched a button or something else you can interact with, you can ask the screen reader to click it for you by double-tapping.

It maintains a current focus

Imagine you have finished typing an email. Now you need to click the Send button. It could take a long time to find the button just by probing the screen and listening to what is at each touch point.

So there’s an alternative. The screen reader keeps an item in focus. Touching the screen places the focus on the touched item. But from there, you can “look around” that point by swiping left and right. This works like using Tab and Shift-Tab to navigate a form in your browser.

This notion of “focus” also lets you act on the current focus: click a button, start editing in a text field, or nudge a slider. Unlike the normal touch gestures used to do these things, TalkBack’s gestures are addressed to the screen as a whole. You can double-tap anywhere on the screen to click a focused button.

Getting Started with TalkBack

How do I turn TalkBack on?

  • Open the “Settings” application.
  • Select “Accessibility” from the “System” section, way down at the bottom.
  • Select “TalkBack” from the “Services” section.
  • Toggle it to “On”.

To make this easier in future, you may want to configure a volume key shortcut.

How do I turn TalkBack off?

Head back into Settings and turn TalkBack off.

To make toggling TalkBack on and off easier, you can enable the suspend and resume shortcut in the “Miscellaneous” section of TalkBack Settings.

How do I use TalkBack?

TalkBack is controlled entirely with one finger.

Gestures with two or more fingers will not be handled by TalkBack. They’ll be sent directly to the underlying view. Two or more fingers will “pierce the veil”, so you can pinch-to-zoom or scroll the same as ever.

How do I look around?

  • Forward: Swipe right with one finger anywhere on the screen to move the focus forward. This is “forward” in the sense of text, so right/down.
  • Back: Swipe left with one finger anywhere on the screen to move the focus back.

How do I explore what’s on the screen by touch?

Touch, listen. Touch somewhere else, listen again. You can also touch-and-drag to more rapidly explore the screen.

How is this useful?

  • Jump to known location: If you know the widget you want to interact with is somewhere in the middle of the screen, this can get you close to it, and you can swipe the focus around from there.
  • Hack around busted accessibility: Not everyone tests their apps using TalkBack. Being able to aim the focus “by hand” can break you out of a focus loop or let you aim focus at something you can’t otherwise tab to.
  • Skimming: Touch-and-drag can be faster than pushing the focus around with separate swipes.

How do I type?

Google’s keyboard supports a variant of explore-by-touch:

  • Slide your finger around till you hear the letter you want.
  • Lift your finger. This types the letter.

This combines the “find” and “activate” gestures to speed up typing.

Some third-party keyboards follow Google’s example.
Others do not – sometimes, by choice; other times, seemingly, out of ignorance.

How do I interact with the focus?

  • Click: Double-tap anywhere on the screen with one finger.
  • Long Click: Double-tap and hold anywhere on the screen with one finger.

How do I navigate a long webpage?

Maybe you’re wondering what swiping up and down does. These swipes change the navigation setting, which tweaks what swiping left and right does: instead of moving element to element, left and right swipes can navigate a more specific list of things, like “all headings” or “all links”.

  • Next Navigation Setting: Swipe down
  • Previous Navigation Setting: Swipe up

Swipes are also how you scroll:

  • Scroll Forward: Swipe right then left
  • Scroll Backward: Swipe left then right

And they provide a way to reliably jump focus around the screen:

  • Focus First Item on Screen: Swipe up then down
  • Focus Last Item on Screen: Swipe down then up

As a bonus, you can use the local context menu (more on this below) to ask TalkBack to read all the links in a block of text, without you having to cursor through the list yourself.

What about hardware buttons?

For this, you’ll use angle gestures. These go one direction, then 90 degrees in another direction.

These also let you trigger some other system-level actions, like showing the notifications.

Gesture Cheatsheets

The Basic D-Pad: Moves

  • Simple swipes affect focus.
    Swipe left and right to move focus between items.
    Swipe up and down to change the kind of item to focus on.

For example, you may want to only focus on headings or links.
(If you’ve used iOS VoiceOver, this is kind of like some of the Rotor options.)

Back-And-Forth: Jumps

Quickly swiping out and then back to where you started in a continuous motion either jumps focus or scrolls the screen.
(Though if a slider is focused, its thumb “scrolls” rather than the screen.)

  • Swipe up then back to focus the first item on the screen.
    Swipe down then back to focus the last item on the screen.

If you know the item you want to focus is near the top or bottom of the screen, these gestures can help you focus that item faster.
You can also build muscle memory for the controls in an app relative to these anchor points.

  • Swipe left then back to scroll up or to move a slider left.
    Swipe right then back to scroll down or to move a slider right.

You can also use two fingers to scroll like always, because two-finger touches
are ignored by TalkBack.

Angle: System Buttons

Actions like Back, Home, and Overview once had hardware buttons, and they still occupy a privileged place in the UI. TalkBack also gives them pride of place: they have their own dedicated gestures.

The angle gestures equivalent to the hardware buttons involve swiping to the left:

  • Home: Swipe up then left
  • Back: Swipe down then left
  • Overview: Swipe left then up

Angle gestures that involve swiping to the right are more peculiar to TalkBack:

  • Notification Drawer: Swipe right then down
    • This lets you pull the drawer down without worrying about feeling for the top of the physical screen.
  • Local Context Menu: Swipe up, then right
    • The contents of this menu depend in part on the current focus.
    • For example, if you’ve focused on an unlabeled image, you’ll see an option to give it a custom label.
    • But they also provide another way to change the kind of item focused.
      Changing the target item using the menu can be faster than making repeated single swipes up or down.
  • Global Context Menu: Swipe down then right
    • If you want to tell TalkBack to read a chunk of content, or reread something, or even copy the last thing it read to the clipboard, this is the menu for you.
    • It also has convenient shortcuts to TalkBack Settings (where you can change gesture behavior, amongst other things) and an option to Dim Screen.

Try this at home!

  • Configure your own actions for currently-unassigned gestures
  • Use the local context menu to set a custom label for an unlabeled widget
  • Use the global context menu to “Dim screen”. This will black out the whole screen, so you’re forced to lean on TalkBack to interact with your device.
  • Use Google’s Accessibility Scanner app on the home page of your app. What issues does it find?
  • Explore the keyboard shortcuts available with a hardware keyboard.

Other Assistive Tech

TalkBack isn’t the only assistive tech available on Android. Here are several other unique ways people might be interacting with your app:

  • BrailleBack: like TalkBack, but for Braille displays
  • Switch Access: could be one switch, or two like in a sip-puff straw device
  • Voice Access: hands-free control

For the More Curious: How does TalkBack work?

It navigates a virtual tree of accessibility nodes. Luckily, SDK classes take care of building these nodes in most cases. Tweaking the tree can improve the experience, though. And if you’re building a custom view, or abusing a stock one, you’ll need to work a bit to make it accessible.

TalkBack will send performClick() and performLongClick() as needed.
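For a custom view, that work looks roughly like this hypothetical sketch (the view class, its state, and the description strings are all invented for illustration):

```kotlin
import android.content.Context
import android.view.View
import android.view.accessibility.AccessibilityNodeInfo

// Hypothetical star-toggle drawn entirely by hand, so the framework
// can't infer its role or state; we describe them ourselves.
class StarView(context: Context) : View(context) {
    var isStarred = false

    override fun onInitializeAccessibilityNodeInfo(info: AccessibilityNodeInfo) {
        super.onInitializeAccessibilityNodeInfo(info)
        // Tell TalkBack what this view acts like and how to read its state.
        info.className = android.widget.ToggleButton::class.java.name
        info.isCheckable = true
        info.isChecked = isStarred
        info.contentDescription = if (isStarred) "Starred" else "Not starred"
    }

    override fun performClick(): Boolean {
        // TalkBack's double-tap routes here, the same as an ordinary tap.
        isStarred = !isStarred
        return super.performClick()
    }
}
```

Because TalkBack drives the view through `performClick()` rather than raw touch coordinates, putting the state change there (instead of in a touch handler) keeps the view usable both with and without a screen reader.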

For more, dig into the android.view.accessibility documentation and follow the links from there.

For yet more, Google has published the TalkBack and Switch Access source code. Included is a test app that exercises the functionality of both. Playing with this test app would be a great way to see everything these tools can do.

Use Flutter to deliver the same experience across multiple platforms
Tue, 25 Aug 2020 - https://bignerdranch.com/blog/use-flutter-to-deliver-the-same-experience-across-multiple-platforms/

The post Use Flutter to deliver the same experience across multiple platforms appeared first on Big Nerd Ranch.

Flutter is ideal when you want a consistent experience across platforms. The more platforms you target, the more value you’ll derive from Flutter. That value derives not only from consistency in behavior and presentation, but also from consistency in implementation: unlike React Native, Flutter explicitly aims to deliver the same experience on multiple platforms using a single codebase:

Our goal for this year [2020] is that you should be able to run flutter create; flutter run and have your application run on Web browsers, macOS, Windows, Android, Fuchsia, and iOS, with support for hot reload, plugins, testing, and release mode builds. We intend to ensure that our Material Design widget library works well on all these platforms. (The Flutter Roadmap)

Flutter guarantees consistency by owning the entire user experience rather than deferring to per-platform UI toolkit components. Like a game engine, it takes control of both drawing and event handling, handling input and output itself. This is a marked contrast with React Native, which instead marshals native platform views into rendering and event handling on its behalf. Owning the stack lets Flutter reliably render content without dropping frames and with every pixel under its control. A wide array of widgets is available, and their behavior will change when you update your app, not in response to platform changes. This gives you great control, at one main cost: apps using Flutter do not automatically update to track changes in system styles and behaviors. Adopting Material Design mitigates that caveat, because Material Design apps follow that UI standard rather than any specific platform’s.

This is an intentional tradeoff: Flutter’s bet is that the future is more consistently-branded experiences across platforms, where that consistency is owed first to the brand, secondly, if at all, to the platform. Its vision is “to provide a portable toolkit for building stunning experiences wherever you might want to paint pixels on the screen” (“Announcing Flutter 1.20” for one example, though restatements of this vision are many).

Flutter’s community seems small next to React Native, but large next to Multiplatform Kotlin. Its community is certainly very vocal and visible; blog posts, conferences, library updates, and other events and publications continue to stream out. Its “own the stack” approach does more to guarantee consistency across platforms than React Native can provide, and unlike Multiplatform Kotlin, it can readily share UI code across platforms. Also unlike the situation with Multiplatform Kotlin vs Kotlin/JVM, most Dart libraries also work with Flutter, so you won’t find yourself stuck using less-tested packages for common needs. Its hybrid compilation approach and framework design give developers rapid build-and-deploy with stateful hot-reload during build and test while guaranteeing end users fast apps with a consistent latency in release builds. (This consistency results from using ahead-of-time compilation without runtime JIT compilation. Using AOT compilation to native binary code speeds code loading because the code has already been processed for easy loading and running. Not using JIT avoids variation in performance and latency, because there is no JIT compiler variously optimizing and de-optimizing various codepaths based on the specific details of what code has been run when since app launch.)

Accessibility support is solid

I worried that custom rendering would lead to broken accessibility support. In fact, its accessibility support is solid: it builds and maintains a “Semantics tree” to represent accessibility elements as a core part of its rendering pipeline. There’s even automated test support for checking some standard accessibility guidelines, such as text contrast. Dynamic Type support is baked into the Flutter framework. I have not had a chance to investigate how well the stock UI components respect accessibility preferences like Reduce Motion or Bold Text, but those preferences are readily accessible, so it would be easy to accommodate them yourself.
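As a sketch of how that looks in code (the label text and child widget here are invented for illustration), wrapping any widget in Semantics contributes a node to that tree:

```dart
import 'package:flutter/material.dart';

// Sketch: exposing an icon-only control to Flutter's semantics tree.
Widget buildSendButton(VoidCallback onSend) {
  return Semantics(
    label: 'Send message', // what a screen reader announces
    button: true,          // announced as a button
    child: GestureDetector(
      onTap: onSend,
      child: const Icon(Icons.send),
    ),
  );
}
```

Stock Material widgets populate these semantics for you; explicit Semantics wrappers mostly matter for custom-painted or gesture-driven controls like this one.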

Localization support is not bad

I also worried about Flutter’s localization support, because localization is often overlooked. But the Dart Intl package has robust i18n support, including handling plurals and gender in building localized strings. Number formatting is fairly complete. Time support is weak: time zones beyond UTC and Local are not supported, and neither calendar math nor non-Gregorian calendars are provided. Overall, it’s a judicious subset of ICU. It’s not as automatic or comprehensive as iOS’s localization system, which also handles resolving localized non-string assets and automatically selects and loads the appropriate locale configuration on your behalf, but all the pieces are there. And the community is filling gaps; for example, timezone delivers a zone-aware DateTime, while buddhist_datetime_dateformat handles formatting dates per the Buddhist calendar.
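For a feel of the plural handling, here is a minimal sketch using package:intl (the message text and function name are invented):

```dart
import 'package:intl/intl.dart';

// Sketch: a pluralized, localizable message built with Intl.plural.
String newMessages(int howMany) => Intl.plural(
      howMany,
      zero: 'No new messages',
      one: 'One new message',
      other: '$howMany new messages',
      name: 'newMessages',
      args: [howMany],
    );
```

The name and args metadata are what let the Intl tooling extract the message for translators; the zero/one/other branches map onto ICU plural categories.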

Codesharing is nigh total, including UI code

Code can be readily shared across platforms, including UI code. Accommodating platform differences, such as by varying the core information architecture, is not any more difficult than an if/else. You can get yourself into trouble with plugins, which are packages with platform-specific native code, but Flutter’s federated plugins approach serves to make clear which platforms are supported, and even to allow third-parties to provide support for additional platforms. This means that if you hit on a plugin that could be supported on a platform you need but isn’t yet, you could readily implement and publish the support for the plugin on that platform.

Platform support favors Material Design on mobile OSs

“Across platforms” primarily means “across iOS 8+ and Android 4.1 Jellybean and later (API level 16+)”: As of July 2020, Flutter Web is in beta, Flutter macOS is in alpha, and Flutter Linux and Windows are pre-alpha. That said, Flutter’s stated aim is to become “a portable UI framework for all screens”, and it is making steady progress towards that aim. The Flutter team is making visible progress in public with clear goals and rationale. And I was impressed at the team’s response time to third-party contributions: I had several documentation PRs merged in the week I spent with Flutter.

Unlike React Native, which has often had an iOS-first bias, Flutter’s bias is towards Android, or rather, the Android design language. The Material Design widgets are available and work consistently across platforms; iOS is not stinted there. But documentation and examples for the Cupertino widget set, which reproduces the iOS look and feel, are harder to come by, and I had trouble getting it to play nice with Dark Mode. If you’re going full-on branded and effectively building your own widget set, you’ll be on even ground across both platforms, and it might even prove easier than using the first-party toolkit for those platforms.

Dart is an effective tool

I didn’t worry about the Dart language, which is used to write Flutter apps. It’s thoroughly inoffensive, it has some nice touches, and the ecosystem features a solid static analysis, testing, and packaging story. If you’re coming from Swift, Kotlin, or TypeScript, Dart will feel familiar, and you’ll be productive very quickly. If you’re coming from Swift specifically, you’ll be pleased to find async/await support. The biggest tripping points I ran into were:

  • Dart requires you to write semicolons.
  • Dart uses C-style SomeType thing rather than Pascal-style thing: SomeType type declarations.
  • Dart uses final/const vs var rather than let/var or val/var. (The final/const distinction resembles const vs constexpr in C++.)
  • The cascade syntax, .., is hard to look up unless you happen to guess it’s a descendant of Smalltalk’s similarly-named syntax, which was used similarly. That said, it reads naturally enough that it’s not much of a stumbling block.
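For reference, the cascade reads like this minimal sketch:

```dart
// `..` performs each call on the same receiver, then the whole
// expression evaluates to that receiver:
final buffer = StringBuffer()
  ..write('Hello, ')
  ..write('world')
  ..writeln('!');
// buffer.toString() is 'Hello, world!\n'
```

It saves you from repeating the variable name for each call, much like builder-style chaining, but without the API having to return `this`.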

Summary

Flutter seems like the best tool to date for delivering the “same app” across all platforms. It’s a young tool with some rough edges, but I have yet to encounter a multi-platform tool that won’t cut you at times; Flutter’s gotchas seem mostly peripheral, and its rendering approach guarantees a consistency that’s hard to come by otherwise.

The team behind Flutter is responsive and has a clear vision for where the framework is going, including a public roadmap. The actual work happens in public on GitHub, not in a private tree, so it’s easy to follow.

Flutter is both easy to work with and easy to contribute to. The community and project are well-organized, and you probably won’t spend a lot of time flailing around for a library thanks to the Flutter Favorites program.

If you need someone to build an app for multiple platforms, give Big Nerd Ranch a ring.

Module Exports vs. Export Default: Why Not Both?
Tue, 12 Nov 2019 - https://bignerdranch.com/blog/default-exports-or-named-exports-why-not-both/

The post Module Exports vs. Export Default: Why Not Both? appeared first on Big Nerd Ranch.

In NodeJS’s CommonJS module system, a module could only export one object: the one assigned to module.exports. The ES6 module system adds a new flavor of export on top of this, the default export.

A minimal ES6 module

A great example to illustrate this is this minimal module:

export const A = 'A'
export default A

At first glance, you might think that A has been exported twice, so you might want to remove one of these exports.

But it wasn’t exported twice. In the ES6 module world, this rigs it up so you can either do import A from './a' and get the default export bound to A, or do import { A } from './a' and get the named export bound to A.

Its CommonJS equivalent

This is equivalent to the CommonJS:

const A = 'A'
module.exports = {
  A,
  default: A,
}

Why expose a symbol as both default and named exports?

Exposing it both ways means that if there is also export const B = 'B', the module consumer can write import { A, B } from './a' rather than needing to do import A, { B } from './a', because they can just grab the named A export directly alongside the named B export.

(It’s also a fun gotcha that you can’t use assignment-style destructuring syntax on the default export, so that export default { A, B, C } can only be destructured in a two-step of import Stuff from './module'; const { A, B } = Stuff. Exporting A, B, and C directly as export { A, B, C } in addition to as part of the default export erases this mismatch between assignment destructuring and import syntax.)
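The two-step looks like this when simulated with a plain object standing in for the default export (the values here are invented):

```javascript
// Stand-in for `import Stuff from './module'` where the module did
// `export default { A, B, C }`:
const Stuff = { A: 'A', B: 'B', C: 'C' }

// Import syntax can't destructure a default export, so consumers
// destructure the imported binding in a second step:
const { A, B } = Stuff
```

With named exports, the consumer would instead write `import { A, B } from './module'` and skip the second step entirely.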

Why use default exports at all?

  • Simplify usage: Having a default export simplifies import when the person importing the module just wants the obvious thing from there. There’s simply less syntax and typing to do.
  • Signal intent: A default export communicates the module author’s understanding about what the primary export is from their module.

Intent examples

Example: Express handler: Main and helpers

If there’s a main function and some helpers, you might export the main function as the default export, but also export all the functions so you can reuse them or test them in isolation.

For example, a module exporting an Express handler as its default might also export the parseRequestJson and buildResponseJson de/serializer functions that translate from the JSON data transport format into model objects and back. This would allow directly testing these transformations, without having to work at a remove through only the Express handler.

Example: API binding: Related functions with no primary

In the case where the module groups related functions with no clear primary one, like an API module for working with a customer resource ./customer, you might either omit a default export, or basically say “it’s indeed a grab bag” and export it both ways:

export const find = async (options) => { /* … */ }
// `delete` is a reserved word in JavaScript, so the function needs another name:
export const remove = async (id) => { /* … */ }
export default {
  find,
  remove,
}

Anchored API increases context

If you similarly had APIs for working with ./product, this default export approach would simplify writing code like:

import customer from './resources/customer'
import product from './resources/product'
export const productsForCustomer = async (customerId) => {
  const buyer = await customer.find(customerId)
  const products = await Promise.all(
    buyer.orders
      .flatMap((order) => order.productIds)
      .map((productId) => product.find(productId))
  )
  return products
}

Effectively, all the functions are named with the expectation that they’ll be used through that default export – they expect to be “anchored” to an identifier that provides context (“this function is finding a customer”) for their name. (This sort of design is very common in Elm, as captured in the package design guideline that “Module names should not reappear in function names”. Their reasoning behind this applies equally in JavaScript, so it’s worth reading the two paragraphs.)

Unanchored API requires aliasing and repetition

If you hadn’t provided a default export with all the functions from both resources, you’d instead have had to alias the imports:

import { find as findCustomer } from './resources/customer'
import { find as findProduct } from './resources/product'
export const productsForCustomer = async (customerId) => {
  const buyer = await findCustomer(customerId)
  const products = await Promise.all(
    buyer.orders
      .flatMap((order) => order.productIds)
      .map((productId) => findProduct(productId))
  )
  return products
}

The downsides of this are:

  • The API consumer’s aliasing workload scales linearly with the number of identifiers they want to use.
  • Different consumers may alias them to different names, which makes code written against the API less uniform (and harder to rename through search-and-replace).

The upside is:

  • It’s clear from the import list precisely which identifiers you’re importing.

This could be fixed by the module author embedding the module name in each exported identifier, at the cost of the author having to repeat the module name in every blessed export.

Summary

  • Default exports, from a CommonJS module point of view, amount to sugar for exporting and importing an identifier named default.
  • There are good reasons to use both default and named exports.
  • You can make your codebase more uniform and readable by taking advantage of default exports in consuming and designing APIs.

When nullability lies: A cautionary tale
Mon, 03 Dec 2018 - https://bignerdranch.com/blog/when-nullability-lies-a-cautionary-tale/

When a Kotlin non-null property turns up null, you know something has gone subtly, yet terribly, wrong. You trusted that type system, and it let you down. But how? How does the impossible happen? We’ll get to the bottom of this!

The post When nullability lies: A cautionary tale appeared first on Big Nerd Ranch.


How does a non-null property wind up null in Kotlin? Let’s find out!

Review: How Kotlin Handles Nullable References

Kotlin’s nullable reference handling is great. It distinguishes String?, which is either null or a String, from String, which is always some String.
If you have:

data class Invitation(val placeName: String)

then you can trust that the property getter for placeName will never return null whenever you’re working with an Invitation.

That class declares, “The placeName property is a String that can’t be null.”

Need more background? Mark Allison will walk you through a concrete example in The Frontier screencast “Kotlin Nullability”.

Kotlin works hard to ensure this:

  • At compile-time: During compilation, an obvious case where a null could wind up in a variable with non-null type triggers an error.

    If you try passing a null directly, the compiler will flag it as an error. The code:

    Invitation(null)
    

    yields the compiler error:

    error: null can not be a value of a non-null type String
    Invitation(null)
               ^
    
  • At run-time: Kotlin also guards against null in the property setter. This lets it catch less obvious cases, like ones caused by Java not distinguishing null from not-null.
    So, if you try sneaking a null in by laundering it through the Java interop:

    val mostLikelyNull = System.getenv("not actually an environment variable")
    Invitation(mostLikelyNull)
    

    Your sneaky code will compile fine, but when run, it triggers an exception:

    java.lang.IllegalStateException: mostLikelyNull must not be null

Kotlin’s promise: No more defensive null-checks. No more lurking null pointer exceptions. It’s beautiful.
If you try to write a null into a non-null property, Kotlin will shoot you down.

And Yet, a Wild Null Value Appears

That’s the theory. But I ran into a case where, all that aside, Kotlin’s “not null” guarantee wound up being violated in practice.

I’ve got a Room entity like so:

@Entity(tableName = "invitation")
data class Invitation(
  @SerializedName("name")
  @ColumnInfo(name = "device_name")
  val placeName: String
)

Room sees that placeName is a not-null String and not a maybe-null String?, and it generates a schema where the device_name column has a NOT NULL constraint.

But somehow, I wound up with a runtime exception where that constraint was violated:

android.database.sqlite.SQLiteConstraintException: NOT NULL constraint failed: invitation.device_name (code 1299)

My app asked Room to save an Invitation with a null placeName. Somehow, it got around all of Kotlin’s defenses!

It got worse: The exception left the database locked. The UI stopped updating. Database queries started piling up. Logcat showed reams of messages like:

W/SQLiteConnectionPool: The connection pool for database '/data/user/0/some.app.id.here/databases/database' has been unable to grant a connection to thread 20598 (RxCachedThreadScheduler-27) with flags 0x1 for 120.01101 seconds.
    Connections: 1 active, 0 idle, 0 available.

    Requests in progress:
      executeForCursorWindow started 127006ms ago - running, sql="SELECT * FROM invitation"

That request had started over two minutes ago!

In the end, Android had mercy, and put the poor app out of its misery:

    --------- beginning of crash
E/AndroidRuntime: FATAL EXCEPTION: pool-2-thread-2
    android.database.sqlite.SQLiteDatabaseLockedException: database is locked (code 5): retrycount exceeded

The exception that shouldn’t have been possible had left the database locked, and ultimately, the app had crashed.

How does a not-null property wind up null?

This data was read in from an API call. So the bogus data probably came from there. And, indeed, the corresponding field proved to be missing from the API response.

But why did this not error out at that point? How did an Invitation ever get created with a null placeName property in the first place? Kotlin told us that would be impossible, but exception logging doesn’t lie.

Where did things go wrong?

  • Unchecked platform types?
  • Retrofit2?
  • Room?

Nope, it was Retrofit2’s little helper: Gson.

Gson treats “missing” as “null”

There’s one more actor in this drama: Gson.

Gson makes slurping in JSON as objects painless.

Gson’s “Finer Points with Objects” says:

This implementation handles nulls correctly.

  • While serializing, a null field is omitted from the output.
  • While deserializing, a missing entry in JSON results in setting the corresponding field in the object to its default value: null for object types, zero for numeric types, and false for booleans.

If you slurp in {}, Gson will apparently poke a null value into your not-null field. How?

What about when it’s not missing?

Well, let’s step back and ask: How did this work in the first place, when we didn’t encounter bogus data? Gson’s docs on “Writing an Instance Creator” say:

While deserializing an Object, Gson needs to create a default instance of the class. Well-behaved classes that are meant for serialization and deserialization should have a no-argument constructor.

  • Doesn’t matter whether public or private

How does Gson handle poorly-behaved classes?

But this data class doesn’t have a no-args constructor. It’s not “well-behaved.” And yet, it was working fine up till now.

Gson expects either a no-args constructor (which our data class won’t provide) or a registered deserializer. This scenario has neither. How did this ever work in the first place? What’s handling the deserialization for us?

ReflectiveTypeAdapterFactory

Nosing around in the debugger shows Gson winds up using a ReflectiveTypeAdapterFactory, which relies on its ConstructorConstructor.

UnsafeAllocator

The factory ultimately falls back on sneaky, evil, unsafe allocation mechanisms rather than telling the developer to fix their code:

    // finally try unsafe
    return newUnsafeAllocator(type, rawType);

And UnsafeAllocator is documented to “[d]o sneaky things to allocate objects without invoking their constructors.” It has strategies to exploit Sun Java and the Dalvik VM pre- and post-Gingerbread. These let it build an object without providing any constructor args. On post-Gingerbread Android, it boils down to calling the (private, undocumented) method ObjectInputStream.newInstance().

Do a bad thing, then make it right

There’s the answer: Gson handles classes that are poorly behaved with regard to deserialization by doing a bad, bad thing. It sneaks behind their backs and creates them using what amounts to a backdoor no-args constructor. All their fields start out as null.

Then, if it’s reading valid JSON, Gson makes it right: all the fields that need populating get populated. When it all works, no-one’s the wiser. And in Java, before widespread reliance on nullability annotations, even the failure case was unremarkable – null inhabits all types, and it’s not too terribly surprising when another one sneaks in.
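To make the sneakiness concrete, here is a minimal sketch of the same constructor-skipping trick, using sun.misc.Unsafe directly rather than through Gson (the Invitation class is illustrative, and this assumes a JVM where theUnsafe is reachable via reflection):

```kotlin
import sun.misc.Unsafe

// Mirrors the article's scenario: placeName is declared non-null.
data class Invitation(val placeName: String)

// Build an Invitation without invoking any constructor, the way Gson's
// UnsafeAllocator does, leaving every field at its JVM default: null.
fun newInvitationWithoutConstructor(): Invitation {
    val theUnsafe = Unsafe::class.java.getDeclaredField("theUnsafe")
        .apply { isAccessible = true }
        .get(null) as Unsafe
    return theUnsafe.allocateInstance(Invitation::class.java) as Invitation
}

fun main() {
    val invitation = newInvitationWithoutConstructor()
    // Declared String, not String?, and yet:
    @Suppress("SENSELESS_COMPARISON")
    println(invitation.placeName == null) // prints "true"
}
```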

For a Kotlin programmer, this is bad news.
Kotlin doesn’t check for nulls on read, only on write. Gson sneaking around the expected ways of building your object can leave a bomb waiting to go off in your codebase: An impossible scenario – a property declared as never null winding up null – happens, and the language ergonomics push back on trying to address that.

Working Around the Problem

To work around this, you write code that looks unnecessary: you null-check a property declared as never null. The compiler warns that the is-null branch will never be taken. You’ll probably be sorely tempted to heed that warning, but if you do, you reintroduce the crasher. Paper that over with a comment, and maybe head off the urge to “fix” it by tossing on a @Suppress("SENSELESS_COMPARISON").

The compiler warns, “Condition ‘invitation.placeName != null’ is always ‘true’”. Luckily, though, it doesn’t optimize the branch away, because my debugger shows the condition ain’t always true. Thanks, Gson + under-specced backend!
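Put together, the workaround looks something like this sketch (Invitation and the fallback string are illustrative, not from the original codebase):

```kotlin
data class Invitation(val placeName: String)

fun describe(invitation: Invitation): String {
    // "Senseless" by the type system, but reachable when Gson's UnsafeAllocator
    // built the object and the backend omitted the field. Do not delete!
    @Suppress("SENSELESS_COMPARISON")
    if (invitation.placeName == null) {
        return "You're invited!" // hedge against the "impossible" null
    }
    return "You're invited to ${invitation.placeName}!"
}
```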

Fixing the Problem

The fix is to make sure any classes you hand to Gson for deserialization either have a no-args constructor or have all their fields marked nullable. Don’t trust data from outside your app!

Use separate Entity classes with Room. Sanity-check your data after parsing, and handle it gracefully when insanity comes knocking at the door.
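One hedged way to meet the no-args requirement in Kotlin is to give every constructor parameter a default value; the compiler then generates a parameterless constructor for Gson to call, and a field missing from the JSON keeps its default instead of ending up null (InvitationDto is an illustrative name):

```kotlin
// All parameters defaulted => Kotlin also emits a parameterless constructor.
data class InvitationDto(val placeName: String = "")

fun main() {
    // The same no-args path Gson's reflective adapter prefers when available:
    val dto = InvitationDto::class.java.getDeclaredConstructor().newInstance()
    println(dto.placeName == "") // prints "true": defaulted, never null
}
```

Note this guards against a missing key; an explicit "placeName": null in the JSON would still be written into the field.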

What about Moshi?

Would trading out Gson for Moshi have avoided this issue?

It turns out, it wouldn’t. But Moshi’s docs both call out the issue and suggest coping strategies. You’ll find this warning and advice in the README section “Default Values & Constructors”:

If the class doesn’t have a no-arguments constructor, Moshi can’t assign the field’s default value, even if it’s specified in the field declaration. Instead, the field’s default is always 0 for numbers, false for booleans, and null for references. […]

This is surprising and is a potential source of bugs! For this reason consider defining a no-arguments constructor in classes that you use with Moshi, using @SuppressWarnings("unused") to prevent it from being inadvertently deleted later […]. (emphasis added)

The post When nullability lies: A cautionary tale appeared first on Big Nerd Ranch.

React Native Is Native https://bignerdranch.com/blog/react-native-is-native/ https://bignerdranch.com/blog/react-native-is-native/#respond Mon, 15 Oct 2018 09:00:00 +0000 https://nerdranchighq.wpengine.com/blog/react-native-is-native/ React Native apps are native apps. It’s a heck of a coup they’ve pulled off, and while I have my concerns around adopting the technology, “Is it native?” isn’t one of them.

The post React Native Is Native appeared first on Big Nerd Ranch.


React Native apps are native apps. It’s a heck of a coup they’ve pulled off, and while I have my concerns around adopting the technology, “Is it native?” isn’t one of them.

But what is “native”?

I suspect whether you agree with me hinges on what we each understand by “native”. Here’s what I have in mind:

  • Uses the platform’s preferred UI toolkit
  • Wires into the platform’s usual mechanisms for event dispatch (touches, keys, motion, location changes, etc.)

Overall: Capable of achieving the same ends as any app developed using the platform’s preferred tooling by fundamentally the same mechanisms.

I claim React Native meets that bar.

Same Mechanisms Differently Marshaled

I’ve spent most of my years as a professional programmer working on Mac & iOS apps. From my Apple-native point of view, React Native is a very elaborate way to marshal UIViews and other UIKit mechanisms towards the usual UIKit ends:

  • View creation and configuration: Many iOS apps rely on XIB files for their view creation and configuration. If you haven’t looked at a XIB file on disk before, have a look: It’s your UI, rendered in XML. React Native uses hand-written JSX to rig up its UIViews, but that’s a difference more in markup flavor than in kind. (It codegens something nearer manual view creation and configuration code, but that’s also something in vogue amongst some iOS devs.)
  • Layout: It’s not enough to just stuff some views inside some others. You sometimes want them to have a certain shape. iOS devs can use raw AutoLayout constraints, or Ye Olde Visual Format Language, or the newer anchor-based API that leverages the type system, or Snap, or Masonry, or, or… Whatever it is you use, it ultimately boils down to setting the view’s frame. Well, React Native likes to use a flexbox-alike to describe its layout. It, too, boils down to frame updates by way of Yoga.
  • Event Dispatch: You got your UIViews, you got your IBActions. You’re coding your event reaction using JavaScript rather than Swift or Objective-C, but, eh – what’s one more C-family language?

Language & Execution Context

Well, about that one more language. Let’s talk about animation jank and asynchrony.

What is “jank”? It’s jargon for what happens when it’s time for something to show up on screen, but your app can’t render the needed pixels fast enough to show that something. As Shawn Maust put it back in 2015 in “What the Jank?”:

“Jank” is any stuttering or choppiness that a user experiences when there is
motion on the screen—like during scrolling, transitions, or animations.

The difference in language gets at something that may seem less than native at first glance. You see, there’s a context switch between UIKit-land and React Native JavaScript-action-handler-land, and at a high enough call rate – like, say, animation handlers that are supposed to run at the frame rate – the time taken in data marshaling and context switching can become noticeable.

Native apps aren’t immune from animation jank. It feels like there’s a WWDC session or three every year on how not to stutter when you scroll. But the overhead inherent in the technical mechanism eats some of your time budget, which means you get to sweep less inefficiency in your app code under Moore’s rug.

Native apps also aren’t immune from blocking rendering entirely. Do a bulk-import into Core Data on the main thread, parse a sufficiently large (or malicious) XML or JSON document on the main thread, or run a whole network request on the main thread, and the system watchdog will kill your app while leaving behind a death note of “8badf00d”. React Native’s context switch automatically enforces the best practice of doing work off the main thread: React Native developers naturally fall into the “pit of success” when it comes to aggressively pushing work off the main thread.

Asynchrony

How do you deal with the time taken by a function call? You do less work, or you do work on the other side of the bridge.

Or you surface that gap, that asynchrony, in your programming model with:

  • Callbacks
  • Delegates
  • Operations
  • Reactors (Run-Loop Observers)
  • Promises, Futures, Results, Observables, Streams, Channels

Apple’s frameworks are rife with these mechanisms. Your standard IBAction-to-URLSession-to-spinner-to-view-update flow has a slow-as-a-dog HTTP call in the middle. React Native’s IBAction-to-JSCore-to-view-update flow has a tiny little RPC bridge in the middle that often runs fast enough that you can pretend it’s synchronous. By the end of 2018, you may not even have to pretend – React Native will directly support synchronous cross-language calls where that’s advantageous.

React Native apps with their action handlers in JavaScript are no less native than iOS apps with their action handlers on a server on the other side of an HTTP API.

If you’ve worked on the common “all the brains are in our serverside API” flavor of iOS app, this should sound familiar. It should sound doubly familiar if that serverside API happens to be implemented in Node.js.

And, indeed, running the same language both serverside and clientside makes it a lot easier to change up which side of the pipe an operation happens on. (Such are the joys of isomorphic code, and it’s a small reason some are excited about Swift on the Server.)

Native Is As Native Does

React Native uses the same underlying mechanisms and benefits as much from Apple’s work on UIKit as does any other iOS app. React Native apps are native – perhaps even more native than many “iOS app as Web API frontend” apps!


Producing a CircleCI Test Summary with Fastlane https://bignerdranch.com/blog/producing-a-circleci-test-summary-with-fastlane/ https://bignerdranch.com/blog/producing-a-circleci-test-summary-with-fastlane/#respond Mon, 25 Jun 2018 10:00:00 +0000 https://nerdranchighq.wpengine.com/blog/producing-a-circleci-test-summary-with-fastlane/ The heart of Continuous Integration is running tests. Whenever a test fails, you want to know why ASAP so you can correct it. Whenever a CI build fails, you want to see that failing test and how it failed.

CircleCI's Test Summary feature puts this info front-and-center so you can respond directly to the test failure without anything getting in your way. The trick is to feed CircleCI your test info the way it expects.

The post Producing a CircleCI Test Summary with Fastlane appeared first on Big Nerd Ranch.


The heart of Continuous Integration is running tests.
Whenever a test fails, you want to know why ASAP so you can correct it.
Whenever a CI build fails, you want to see that failing test and how it failed.

CircleCI’s Test Summary feature puts this info front-and-center so you can
respond directly to the test failure without anything getting in your way.
The trick is to feed CircleCI your test info the way it expects.

Why set up Test Summary when you already have a build log?

The build log might be fine to start.
You expand the failing step, scroll to the end of the page, and then scroll up till you hit the test failure.

This is not too bad. At first.

But with a big enough project, the build and test logs grow too long to view in-place on the web page.
Then you find yourself downloading the log file first.

Sometimes the failing test isn’t really that near the end of the file.
Then you’re fumbling around trying to find it.

Across a lot of developers on a long project,
this time and friction adds up.

Don’t CircleCI’s docs cover this already?

If you’re building an iOS app, and you copy-paste the
Example Configuration for Using Fastlane on CircleCI,
you should luck into something that works.

But you’ll want to better understand what the Test Summary feature
is looking for if:

  • Your Test Summary omits info you want in there, like linter output.
  • Test Summary isn’t working for you, and you want to fix it.
  • You’re not building an iOS app using Fastlane, and one of the other example configs doesn’t meet your needs.

What does CircleCI need for a Test Summary?

CircleCI’s Collecting Test Metadata doc
calls out one big thing:

  • Report tests using JUnit’s XML format.

The store_test_results step reference
calls out another:

  • Your test report should be in a subdirectory of another “all the tests” directory.

This subdirectory name is used to identify the test suite.

There’s one more requirement that I haven’t seen documented anywhere, though:

  • The JUnit XML test report file must literally have the .xml extension.

The rest of the filename doesn’t seem to matter for the test summary,
but if you have the wrong path extension,
you won’t see any test summary.

What does that look like on disk?

You’ll wind up with a directory layout like:

/Users/distiller/project/
└── fastlane
    └── test_output
        └── xctest
            └── junit.xml

3 directories, 1 file

This ticks all the boxes:

  • XML file: junit.xml
  • “Test Suite” directory: xctest
  • “All Test Suites” directory: test_output

(Fastlane only produces a single test report,
so the nesting of report-in-folder-in-folder admittedly looks a little silly.)
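For reference, a report meeting those requirements is plain JUnit-style XML; here is a minimal hand-written sample (suite and test names are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <testsuite name="AppTests" tests="2" failures="1">
    <testcase classname="AppTests" name="testAdds" time="0.001"/>
    <testcase classname="AppTests" name="testSubtracts" time="0.002">
      <failure message="XCTAssertEqual failed: (3) is not equal to (4)"/>
    </testcase>
  </testsuite>
</testsuites>
```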

How do you get Fastlane Scan to write JUnit XML to that path?

Scan provides a lot of config knobs.
You can view a table of the full list and their default values by running
fastlane action scan.

We need to arrange three things:

  • The report format: JUnit
  • The report filename: junit.xml
  • The directory that report file should be written to: fastlane/test_output/xctest

Conveniently enough, Scan has three config settings, one for each of those
needs.

Scan also happens to have three different ways
to set those three options:

  • In your Fastfile:
    • Using keyword arguments to scan()
  • In your shell:
    • Using option flags
  • Anywhere (but probably in your shell):
    • Using environment variables

Keyword Arguments to scan()

In your Fastfile, you can set them using keyword arguments to the
scan method call:

scan(
  # … other arguments …
  output_types: 'junit',
  output_files: 'junit.xml',
  output_directory: './fastlane/test_output/xctest')

Option Flags to fastlane scan

If you’re invoking fastlane directly,
you can set them using CLI options:

fastlane scan \
  --output_directory="./fastlane/test_output/xctest" \
  --output_types="junit" \
  --output_files="junit.xml"

Environment Variables

Ruby is a true descendant of Perl – TMTOWTDI, “there’s more than one way
to do it” – so you could also configure Scan using environment variables:

env \
  SCAN_OUTPUT_DIRECTORY=./fastlane/test_output/xctest \
  SCAN_OUTPUT_TYPES=junit \
  SCAN_OUTPUT_FILES=junit.xml \
  fastlane scan

(You could also set those environment variables in the environment stanza in
your CircleCI config. Six one way, half-dozen the other.)

How do you get CircleCI to process the JUnit XML?

Now you have Fastlane Scan writing its test report using the JUnit format into
a *.xml file under a suggestively-named subdirectory.

To get CircleCI to actually process this carefully arranged data,
you’ll need to tell the store_test_results step to snarf everything at and under
fastlane/test_output.

That’s right: not just the xctest subdirectory that holds the test report
XML, but its parent directory.

Add this step to the pipeline that runs scan:

- store_test_results:
    path: "./fastlane/test_output"

What about the rest of Scan’s output?

At some point, you’ll probably want to be able to look at the test report
yourself, as well as the overall build logs.

You can send both of those on up to CircleCI as build artifacts using a couple
store_artifacts steps:

- store_artifacts:
    path: "./fastlane/test_output"
    destination: scan-test-output
- store_artifacts:
    path: ~/Library/Logs/scan
    destination: scan-logs

What about more than just Scan?

You’re not limited to just one artifact or just one test output.
In fact, handling multiple kinds of test output is precisely why there’s the
folder-in-folder nesting.

Say you wanted to have CircleCI call out SwiftLint nits.
You could drop this snippet into your jobs list:

lint:
  docker:
    - image: dantoml/swiftlint:latest

  steps:
    - checkout

    - run:
        name: Run SwiftLint
        command: |
          mkdir -p ./test_output/swiftlint
          swiftlint lint --strict --reporter junit | tee ./test_output/swiftlint/junit.xml

    - store_test_results:
        path: "./test_output"
    - store_artifacts:
        path: "./test_output"

The key links in the chain here are:

  • Create an “ALL the tests” directory: ./test_output/
  • Create a “test suite” directory: ./test_output/swiftlint/
  • Write JUnit output into a .xml file: ./test_output/swiftlint/junit.xml
  • Aim store_test_results at that “ALL the tests” directory: path: "./test_output/"

Any output you can massage into meeting those requirements,
you can cadge CircleCI into calling out in your Test Summary.

Conclusion

There you have it:

  • Quickly debug CI build failures
  • By putting relevant test details in prime real estate
  • By feeding a folder two steps up from a carefully located, precisely named test report file
  • To CircleCI’s store_test_results build step.

Growing a Code Review Culture https://bignerdranch.com/blog/growing-a-code-review-culture/ https://bignerdranch.com/blog/growing-a-code-review-culture/#respond Mon, 23 Oct 2017 09:55:53 +0000 https://nerdranchighq.wpengine.com/blog/growing-a-code-review-culture/ Big Nerd Ranch esteems code review. We've seen it pay off time and again. It is core to our workflow and process. If you want to experience the benefits in your team, here's what that means in practice for everyone involved.

The post Growing a Code Review Culture appeared first on Big Nerd Ranch.


Big Nerd Ranch esteems code review. We’ve seen it pay off time and again. It is core to our workflow and process. If you want to experience the benefits in your team, here’s what that means in practice for everyone involved.

Leaders Set the Stage

Leaders foster a culture of review as top priority. There are good reasons for this, as elaborated by Glen D. Sanford in light of their time at Twitter. Those reasons can be summarized as:

  • Keep the cache hot: The longer it takes to close out a review, the more time everyone involved has to forget what the code being reviewed was even supposed to do. Relearning that takes time. Avoiding relearning saves time.
  • Keep the cycle time short: The details of code are not the only thing you can forget. A review is a discussion. If that discussion becomes drawn out, you can forget the context of the discussion around the code, as well.
  • Minimize open loops/balls in the air: (Choose your preferred metaphor bingo square.) As would-be changes stack up, there’s more chance for conflicts, and more might-bes to deal with. The faster a review process completes, the sooner you convert a maybe into fact.

Authors Must Remember Their Audience

Authors need to create PRs that are intended to be reviewed.

In practice, this means:

  • Keeping your PR small and coherent: A small PR means there is less code to review; a coherent PR reduces the number of potential concerns a reviewer might have by concentrating the effect of your changes.
  • Keeping your commits small and coherent: When few lines are changed per commit, you get more opportunities to explain those changes through a commit message. The commentary-to-code ratio rises, which gives you more chances to make the changes easier for your reader to understand.
  • Writing commit messages that tell a readable story: Commits do not happen in isolation. Your reviewer will see the commits in order from first to last. If those commits tell a clear story of how you moved the system from point A to point B, the reviewer will have a far easier time navigating and assimilating your changes.
  • Contextualizing your changes: Git can tell your reviewer which files and classes you changed. It can’t tell them why you made that change. Sure, you renamed a parameter, but why? How did you choose the new name?
  • Keeping code clear: Clear code is easier to read than convoluted code.

It can be instructive to compare these principles to the SOLID principles. As with the structure of code, so with the structure of changes to that code.

Reviewers Are Co-Authors

Reviewers need to take the responsibility seriously. Review is an opportunity to have a lasting effect on both code and team.

In practice, this means:

  • A reviewer must know they have that responsibility: Assign someone to review a PR. Don’t just cross your fingers. GitHub’s CODEOWNERS file-glob–based auto-assignment system can automate this.
  • A reviewer should aim to improve both code and coders: Code review is a valuable way to share knowledge, best practices, and style standards. For more experienced developers doing review, this is an opportunity to teach something with a very concrete context. For less experienced developers doing review, this is an opportunity to ask questions with a very concrete context.
  • Asking questions can be more valuable than giving answers: Getting a review that amounts to someone ghostwriting through you can be very frustrating. But a review that elicits an unseen problem through dialogue and solves it through collaboration is exquisite.
  • A reviewer should focus on the issues only a human can catch: Code formatting and layout issues should be handled by automated processes. Let formatters/beautifiers, linters, and spell checkers do their jobs. Machines aren’t going to catch redundant abstractions, missing abstractions, inverted conditions, or mixed abstraction levels within a method’s implementation. Automating what you can reduces reviewer and reviewee burden and avoids drowning valuable review comments in a sea of noisy nitpicking comments.
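As a sketch of the auto-assignment mentioned above, a CODEOWNERS file maps path globs to reviewers (the paths and team names here are hypothetical):

```
# .github/CODEOWNERS -- GitHub requests review from owners of changed paths
*.kt          @example-org/android-reviewers
/fastlane/    @example-org/build-tooling
```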

Reviews Are a Canary for the Whole Process

If a team feels that reviews are rubber-stamps en route to landing changes, there will be trouble. Reviews will be reduced to unwanted busy-work.

If a team is planning work without allowing time for code review, there will be trouble. Reviews will be rushed. They might convert into rubber-stamps as a way to leave breathing room for other planned work.

“Done” includes a code review. If people feel there isn’t time to review work done, then they will be landing half-baked work. Taking on less work helps here. Kanban’s limits on work-in-progress can effectively require reviews be completed to free up space for further development.

(If PRs are piling up, you are headed for a headache of merge conflicts that everyone involved will have forgotten how to resolve, never mind review. That is a warning sign in and of itself, and it can emerge with or without a review culture.)

It’s also important for people to have realistic expectations about the time review can take. Worked three days on a PR? Expect it to take at least three days to review.

Or better, don’t work for three days before submitting something to be reviewed! The adjustment in perspective from “a PR finishes everything about something” to “a PR pushes the project to a slightly better state” can take time, but it also can unlock a lot of process improvements from planning to estimating to development and testing to, yes, reviewing. Issuing a PR each day keeps the chaos at bay.

In Review

  • Leaders must lead their team to prize and prioritize reviews.
  • Authors must not forget their audience. More, smaller PRs written with review in mind will accelerate everything.
  • Reviewers must take co-ownership. Automation can free everyone’s time to focus on real issues.
  • Review is the other half of development, and it can take as much time as development.

Interested in nurturing a code review culture in your organization? Reach out to Big Nerd Ranch today to talk about how we can work with your team to raise the bar.

Thanks to my colleagues who precipitated this post and contributed content and feedback: Josh Justice, Dan Ra, and Evan McCoy.

Write Better Code Using Kotlin’s Require, Check and Assert https://bignerdranch.com/blog/write-better-code-using-kotlins-require-check-and-assert/ https://bignerdranch.com/blog/write-better-code-using-kotlins-require-check-and-assert/#respond Tue, 26 Sep 2017 10:00:00 +0000 https://nerdranchighq.wpengine.com/blog/write-better-code-using-kotlins-require-check-and-assert/ Codify your functions' assumptions and promises using Kotlin's common language of `require`, `check` and `assert`. Jumpstart your debugging with mind-reading failure messages.

The post Write Better Code Using Kotlin’s Require, Check and Assert appeared first on Big Nerd Ranch.


Good code makes its context plain. At a glance, you can see what it needs to succeed, and what happens when it does. Mastering Kotlin’s common language for codifying your functions’ assumptions and promises will help you write code you can change with confidence. You will catch any bugs sooner, and you will spend less time debugging.

A Common Language

Kotlin has three functions for capturing execution context:

  • require(Boolean) throws IllegalArgumentException when its argument is false. Use it to test function arguments.
  • check(Boolean) throws IllegalStateException when its argument is false. Use it to test object state.
  • assert(Boolean) throws AssertionError when its argument is false (but only if JVM assertions are enabled with -ea). Use it to clarify outcomes and check your work.
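A quick sketch showing which exception each throws (assert is left out, since it stays silent unless the JVM runs with -ea):

```kotlin
fun main() {
    // require(false) throws IllegalArgumentException: blame the caller's arguments.
    val argFailure = runCatching { require(false) { "bad argument" } }.exceptionOrNull()
    println(argFailure is IllegalArgumentException) // prints "true"

    // check(false) throws IllegalStateException: blame the object's state.
    val stateFailure = runCatching { check(false) { "bad state" } }.exceptionOrNull()
    println(stateFailure is IllegalStateException) // prints "true"
}
```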

These functions give Kotlin programmers a common language. If you do not use these functions, you will probably reinvent them.

Bare exceptions and errors are little help, so each function has a variation that takes a lazy message closure as its final argument. Use that message to jumpstart debugging by reporting relevant values. You will see these variations in use below.

These three functions look very similar, but each has its specific purpose. Examples of using each alone, then all together, will make that clear.

Capture Assumptions

A function makes assumptions about:

  • Direct Inputs: These are function arguments. Maybe you need an Int to be non-negative. Maybe you need a File to be readable. Before you begin working with your arguments, check that they are valid with require.

  • Indirect Inputs: These are often object state. Sometimes certain functions only make sense to call if other functions have been called already. A socket needs to connect to a host before it makes sense to read from or write to it. You check these conditions using check.

Require Arguments Be Valid

To check assumptions about function arguments, use require:

fun activate(index: Int) {
  // Argument Assumption: |index| is a non-negative integer.
  require(index >= 0) { "Int |index| must be non-negative. index=$index" }

  
}

fun load(from: File): String {
  // Argument Assumption: |from| is a readable file.
  require(from.canRead()) { "File |from| must be readable. file=$from canRead=${from.canRead()}" }

  
}

To check assumptions about things that are not function arguments, use check:

class Socket {
  var isConnected: Boolean = false
  var connectedHost: Host? = null

  fun connect(to: Host, result: (isConnected: Boolean) -> Unit) {
    // Starting State Assumption: |this| is not already connected.
    check(!isConnected) {
      "|Socket.connect| cannot be called after a successful call to |Socket.connect|. "+
      "socket=$this to=$to connectedHost=$connectedHost"
    }

    
  }

  fun write(blocks: Blocks): Int {
    // Starting State Assumption: |this| is connected.
    check(isConnected) {
      "|Socket.connect| must succeed before |socket.write| can be called. "+
      "socket=$this blocks=$blocks"
    }

    
  }
}

Promise Results

We write code to do something. When that something is to return a value, our promise is the return type. But return types often do not tell the whole story. And when that something is to change other state, our promise is secret.

Check Your Work with Assert

assert verifies your function did its job:

fun activate(index: Int) {
  
  // Ending State Promise: The pump at |index| is now active.
  assert(pump[index].isActive) { "Failed to activate pump index=$index" }
}

Conclusion

Kotlin gives us tools to write clear code. Clear code says what it knows. It does not keep it secret.

You often use require, check and assert in the same places in a function:

fun anyFunction(arg: Arg): Result {
  // Starting State Assumption: XXX
  check(internalStateIsSane) {
    "Say what you expected. Log |this| and |args| as well as the failing internal state."
  }

  // Argument Assumption: XXX
  require(arg.isSane) {
    "Say what you expected. Log |arg| and the values used in the failed check."
  }

  

  // Ending State Promise: XXX
  assert(result.isSane) {
    "Say what you expected. Log |result| and the failed check's output."
  }
  return result
}

As shown, the pattern is:

  • Before anything else, check the starting state with check. If any of these checks fails, the arguments do not even matter – the function should never have been called!

  • Next, check the function arguments with require. If an argument turns out to be invalid, it is best to catch that before changing anything or doing any other work, since the function call will fail anyway.

  • In the middle, do the actual work of your function.

  • Lastly, assert the function did what it was supposed to do. Sometimes that means checking that some objects have a new state. Sometimes that means checking that the return value is reasonable.

In all cases, write your failure message to jumpstart debugging. If your first question when a check fails will be, “What was the value of something?” then the message should answer that question.

Checking things can grow tiresome. The way out is more precise types: As Yaron Minsky says, “Make illegal states unrepresentable.” For example, a require(intValue >= 0) check can be eliminated by using a type whose values can only represent non-negative integers. But that is a topic for another day.
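As a sketch of that idea in this article's style (NonNegativeInt is an illustrative name, not a library type):

```kotlin
// The require lives in exactly one place: the type's initializer.
data class NonNegativeInt(val value: Int) {
    init {
        require(value >= 0) { "Int |value| must be non-negative. value=$value" }
    }
}

// Callers can no longer hand us a negative index, so no require is needed here.
fun activate(index: NonNegativeInt) {
    println("activating pump ${index.value}")
}
```

(A Kotlin value class could make the wrapper allocation-free, but a plain data class shows the idea.)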

Curious to better know Kotlin? Stay updated with our Kotlin Programming books & bootcamps. Our two-day Kotlin Essentials course delivers in spades, while our Android Essentials with Kotlin course will set you on the right path for Android development.

Why Associated Type Requirements Become Generic Constraints https://bignerdranch.com/blog/why-associated-type-requirements-become-generic-constraints/ https://bignerdranch.com/blog/why-associated-type-requirements-become-generic-constraints/#respond Wed, 16 Aug 2017 10:00:03 +0000 https://nerdranchighq.wpengine.com/blog/why-associated-type-requirements-become-generic-constraints/ Swift protocols can have associated types, which makes them more powerful than Objective-C protocols. It also makes them more complicated. In this post, learn how Swift protocols balance power and complexity. See an example of code that uses a protocol with associated type, then understand why it has to be generic.

The post Why Associated Type Requirements Become Generic Constraints appeared first on Big Nerd Ranch.


Objective-C Protocols: Just Messages

Objective-C has protocols. They name a set of messages. For example, the UITableViewDataSource protocol has messages for asking the number of sections and the number of rows in a section.

Swift Protocols: Messages + Associated Types

Swift has protocols. They too name a set of messages.

But Swift protocols can also have associated types. An associated type plays a role in the protocol: it is a placeholder for a type. When you implement the protocol, you get to fill in that placeholder.
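The idea can be sketched with a toy protocol. (Container and IntStack here are hypothetical names for illustration, not standard-library types.)

```swift
// A protocol with an associated type: Item is a placeholder.
protocol Container {
    associatedtype Item
    mutating func add(_ item: Item)
    var count: Int { get }
}

// A conforming type fills in the placeholder: here, Item is Int.
struct IntStack: Container {
    private var items: [Int] = []
    mutating func add(_ item: Int) { items.append(item) }
    var count: Int { return items.count }
}
```

Swift infers Item == Int from the signature of add(_:); you never have to write it out.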

Associated Types Make Implementing Protocols Easier

Associated types are a powerful tool. They make protocols easier to implement.

Example: Swift’s Equatable Protocol

For example, Swift’s Equatable protocol has a function to ask if a value is equal to another value:

static func ==(lhs: Self, rhs: Self) -> Bool

This function uses the Self type. The Self type is an associated type. It is always filled in with the name of the type that implements a protocol. (Not convinced Self is an associated type? Jump to the end of the article, then come back.) So if you have a type struct Name { let value: String }, and you add an extension Name: Equatable {}, then Equatable.Self in that case is Name, and you will write a function:

static func ==(lhs: Name, rhs: Name) -> Bool

Self is written as Name here, because you are implementing Equatable for the type Name.

Equatable uses the associated Self type to limit the == function to only values of the same type.
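Spelled out, that conformance looks like this. (The value-by-value comparison body is one reasonable choice; Swift can also synthesize it for you.)

```swift
struct Name {
    let value: String
}

extension Name: Equatable {
    // Self has been filled in as Name on both sides.
    static func ==(lhs: Name, rhs: Name) -> Bool {
        return lhs.value == rhs.value
    }
}
```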

Contrast: Objective-C’s NSObjectProtocol

NSObjectProtocol also has a method isEqual(_:). But because it is an Objective-C protocol, it cannot use a Self type. Instead, its equality test is declared as:

func isEqual(_ object: Any?) -> Bool

Because an Objective-C protocol cannot restrict the argument to an associated type, every implementation of the protocol suffers. It is common to begin an implementation by checking that the argument is the same type as the receiver:

func isEqual(_ object: Any?) -> Bool {
    guard let other = object as? Name
    else { return false }

    // Now you can actually check equality.
    return value == other.value
}

Every implementation of isEqual(_:) has to make this check, each and every time it is called.

Implementers of Equatable never have to check this. It is guaranteed once and for all, for every implementation, through the Self associated type.

Power Has a Price

Associated types are a powerful tool. That power comes at a cost:

error: protocol 'Equatable' can only be used as a generic constraint because it has Self or associated type requirements

Code that uses a protocol that relies on associated types pays the price. Such code must be written using generic types.

Generic types are also placeholders. When you call a function that uses generic types, you get to fill in those placeholders.

When you look at generic types versus associated types, the relationship between caller and implementer flips:

  • Associated Types: Writer Knows, Caller Doesn’t:
    When you write a function that uses associated types, you get to fill in the placeholders, so you know the concrete types. The caller does not know what types you picked.
  • Generic Types: Caller Knows, Writer Doesn’t:
    When you write a function that uses generic types, you do not know what type the caller will pick. You can limit the types with constraints. But you must handle any type that meets the constraints. The caller gets to pick that type, and your code needs to work with whatever they pick.

Example: Calling Equatable’s == Forces Use of Generics

Consider a function checkEquals(left:right:). This does nothing but defer to Equatable’s ==:

func checkEquals(
  left: Equatable,
  right: Equatable
) -> Bool {
  return left == right
}

The Swift compiler rejects this:

error: repl.swift:2:7: error: protocol 'Equatable' can only be used as a generic constraint because it has Self or associated type requirements
left: Equatable,
      ^

error: repl.swift:3:8: error: protocol 'Equatable' can only be used as a generic constraint because it has Self or associated type requirements
right: Equatable
       ^

Why? Without Generics, checkEquals Is Nonsense

What if Swift allowed this? Let us do an experiment.

Pretend you have two different Equatable types, Name and Age.
Then you could write code like this:

let name = Name(value: "")
let age = Age(value: 0)
let isEquals = checkEquals(left: name, right: age)

This is nonsense! There are two ways to see this:

  • Doing: How do you run this code? What implementation of == would checkEquals call in the last line? Name’s? Age’s? Neither applies. These are only ==(Name, Name) and ==(Age, Age), because Equatable declares only ==(Self, Self). To call either Name’s or Age’s == would break type safety.
  • Meaning: What does this mean? An Equatable type is not a type alone. It has a relationship to another type, Self. If you write checkEquals(left: Equatable, right: Equatable), you only talk about Equatable. Its associated Self type is ignored. You cannot talk about “Equatable” alone. You must talk about “Equatable where Self is (some type)”.

This is subtle but important. checkEquals looks like it will work. It wants to compare an Equatable with an Equatable. But Equatable is an incomplete type. It is “equatable for some type”.

checkEquals(left: Equatable, right: Equatable) says that left is “equatable for some type” and right is “equatable for some type”. Nothing stops left from being “equatable for some type” and right from being “equatable for some other type”. Nothing makes left and right both be “equatable for the same type”.

Equatable.== needs its left and right to be the same type. That requirement is what makes checkEquals, as written, impossible.

Teaching checkEquals to Handle All Equatable+Self Groups

checkEquals cannot know what “some type” should be in “Equatable where Self is (some type)”. Instead, it must handle every group of “Equatable and Self type”: It must be “checkEquals for all types T, where T is ‘Equatable and its associated types’”.

You write this in code like so:

func checkEquals<T: Equatable>(
  left: T,
  right: T
) -> Bool {
  return left == right
}

Now, every type T that is an Equatable type – this includes its associated Self type – has its own checkEquals function. Instead of having to write checkEquals(left: Name, right: Name) and checkEquals(left: Age, right: Age), you use Swift’s generic types to write a “recipe” for making those types. You have walked backwards into the “Extract Generic Function” refactoring.
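To see the recipe at work (Name and Age here lean on Swift's synthesized Equatable conformances):

```swift
struct Name: Equatable { let value: String }
struct Age: Equatable { let value: Int }

func checkEquals<T: Equatable>(left: T, right: T) -> Bool {
    return left == right
}

checkEquals(left: Name(value: "Ada"), right: Name(value: "Ada")) // T is Name
checkEquals(left: Age(value: 3), right: Age(value: 4))           // T is Age
// checkEquals(left: Name(value: "Ada"), right: Age(value: 3))
// ^ rejected at compile time: no single T fits both arguments
```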

Example: Calling NSObjectProtocol’s isEqual(_:) Does Not Require Generics

Writing checkEquals using NSObjectProtocol instead of Equatable does not need generics:

import Foundation

func checkEquals(
  left: NSObjectProtocol,
  right: NSObjectProtocol
) -> Bool {
  return left.isEqual(right)
}

This is simple to write. It also allows us to ask stupid questions:

let isEqual = checkEquals(left: name, right: age)

Is a name even comparable with an age? No, so isEqual evaluates to false: Name’s isEqual(_:) sees that object is not a kind of Name and returns false. But unlike Equatable’s ==, every single implementation of isEqual(_:) must be written to handle such silly questions.

Trade-Offs

Associated types make Swift’s protocols more powerful than Objective-C’s.

An Objective-C protocol captures the relationship between an object and its callers. The callers can send it messages in the protocol; the implementer promises to implement those messages.

A Swift protocol can also capture the relationship between one type and several associated types. The Equatable protocol relates a type to itself through Self. The SetAlgebra protocol relates its implementer to an associated Element type.
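For instance, the standard library's Set conforms to SetAlgebra, filling in Element with the set's element type, so generic code can talk about both together:

```swift
// Generic over any SetAlgebra; S.Element names its associated type.
func bothContain<S: SetAlgebra>(_ a: S, _ b: S, _ element: S.Element) -> Bool {
    return a.contains(element) && b.contains(element)
}

let evens: Set<Int> = [2, 4, 6]
let primes: Set<Int> = [2, 3, 5]
bothContain(evens, primes, 2) // S is Set<Int>, so Element is Int
```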

This power can simplify implementations of the protocol. To see this, you contrasted implementing Equatable’s == and NSObjectProtocol’s isEqual(_:).

This power can complicate code using the protocol. To see this, you contrasted calling Equatable’s == and NSObjectProtocol’s isEqual(_:).

Expressive power can complicate. When you write a protocol, you must trade the value of what you can say using associated types against the cost of dealing with them.

I hope this article helps you evaluate the protocols and APIs you create and consume. If you found this helpful, you should check out our Advanced Swift bootcamp.

For the More Curious: Is Self an Associated Type?

Self acts like an associated type. Unlike other associated types, you do not get to choose the type associated with Self. Self is automatically associated with the type implementing the protocol.

But the error message talks about a protocol that “has Self or associated type requirements”. This makes it sound like they are different things.

This is hair-splitting. But a hair in the wrong place distracts. I went to find an answer. I have found it for you in the source code for the abstract syntax tree used by the Swift compiler. A doc comment on AssociatedTypeDecl says:

Every protocol has an implicitly-created associated type ‘Self’ that
describes a type that conforms to the protocol.

Case closed: Self is an associated type.

The post Why Associated Type Requirements Become Generic Constraints appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/why-associated-type-requirements-become-generic-constraints/feed/ 0
Throws Helps Readability https://bignerdranch.com/blog/throws-helps-readability/ https://bignerdranch.com/blog/throws-helps-readability/#respond Sun, 04 Jun 2017 10:00:00 +0000 https://nerdranchighq.wpengine.com/blog/throws-helps-readability/ The correct answer to "throws or Result?" is a loud BOTH! Learn to make the best of these tools to write clear, maintainable a/sync code through a worked example.

The post Throws Helps Readability appeared first on Big Nerd Ranch.

]]>

Swift’s error-related syntax calls attention to possible errors through try and throws. The do/catch syntax clearly separates the happy path (no errors) from the sad path (errors):

func exampleSyncUsageOfThrows() -> Bool {
    do {
        /* happy path */
        let cookie = try ezbake()
        eat(cookie)
        return true
    } catch {
        /* sad path */
        return false
    }
}

Because throws is “viral”, you’re forced to address it one way or another, even if only by deciding to flip your lid at the first error with the exploding try!.

No Async Syntax

Swift’s error-related syntax is great when every line of code executes one after another, synchronously. But it all goes to heck when you want to pause between steps to wait for an external event, like a timer finishing or a web server getting back to you with a response, or anything else happening asynchronously.

The Cocoa Completion Callback Pattern: Everything Is Optional

Let’s try that example again, only scheduling the cookie-baking for later, and then waiting for the cookie to cool before scarfing it:

func exampleAsyncDoesNotPlayNiceWithThrows(completion hadDinner: @escaping (Bool) -> Void) {
    ezbakeTomorrow { cookie, error in
        // hope you don't forget to check for an error first!
        // also hope you like optional unwrapping
        guard error == nil, let cookie = cookie else {
            return hadDinner(false)
        }

        wait(tillCool: cookie) { coolCookie, error in
            guard error == nil, let coolCookie = coolCookie else {
                // dog snarfed cookie?
                return hadDinner(false)
            }

            eat(coolCookie)
            hadDinner(true)
        }
    }
}

This approach, where a completion closure takes parameters for both the desired result and the failure explanation, all marked optional, is common across Cocoa APIs as well as third-party code. Correctly unpacking those arguments relies heavily on convention. That is to say, it relies heavily on you being very careful not to shoot yourself in the foot.

Everything Is Optional?!

Because both the success value (cookie) and the failure value (error) might not be present, both end up being optionals. That means you end up with four cases to consider, of which two should probably never happen:

  • Success! cookie but no error. This is unambiguous.
  • Failure. error but no cookie. This is similarly unambiguous.
  • Kind of a failure, I think? Like, maybe? Both error AND cookie. If you’re following classic Cocoa style, this gets lumped in with the success case, so that a successful run could, before ARC, leave error pointing at fabulously uninitialized data or scratch errors that didn’t happen. (As you might imagine, that convention gets messed up pretty often.)
  • Super-duper extra-failure. Neither error nor cookie. This is probably a bug in whatever’s giving you this output. But, alas, you still have to deal with it as a possibility.
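Spelled out as a switch over the pair (Cookie is a stand-in type from the running example), the four combinations look like:

```swift
struct Cookie {}

// Classifies which of the four (Cookie?, Error?) combinations arrived.
func describeCallback(cookie: Cookie?, error: Error?) -> String {
    switch (cookie, error) {
    case (.some, nil):
        return "success"                      // unambiguous
    case (nil, .some):
        return "failure"                      // unambiguous
    case (.some, .some):
        return "success, plus a stale error?" // classic Cocoa lumps this with success
    case (nil, nil):
        return "neither: upstream bug?"       // probably a bug in whatever called you
    }
}
```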

Result: Better Async Support Through Types

Result is a popular enumeration for cleaning this up. It looks something like:

enum Result<Value> {
  case success(Value)
  case failure(Error)
}

This addresses all the weirdness with the conventional approach:

  • It explicitly has only two cases, so you don’t have to waste time considering the two “these are probably a bug” cases.
  • You can’t mess up the convention and shoot yourself in the foot. You can only get access to either the failure or the success case by design.
  • You can’t just ignore the error case and accidentally code for just the happy path. case exhaustiveness ensures the error is on your radar.

Just as do/catch lets you clearly separate handling a successful result from a failure, so does Result through switch/case:

func exampleAsyncLikesResult(completion hadDinner: @escaping (Bool) -> Void) {
    ezbakeTomorrow { result in
        switch result {
        case let .success(cookie):
            wait(tillCool: cookie) { result in
                switch result {
                // look ma, no optionals!
                case let .success(coolCookie):
                    eat(coolCookie)
                    hadDinner(true)

                case .failure:
                    hadDinner(false)
                }
            }

        case .failure:
            hadDinner(false)
        }
    }
}

Result: Sync, Async, It Just Works?

Result achieves the aims of do/catch/throw for async code. But it can also be used for sync code. This leads to competition between Result and throws for the synchronous case:

func exampleSyncUsageOfResult() -> Bool {
    return
        ezbake()
        .map({ eat($0) })
        .isSuccess
}

That’s…not so pretty. It would get even uglier if there were a sequence of possibly failing steps:

// this mess…
func exampleUglierSyncResult() -> Bool {
    return
        open("some file")
        .flatMap({ write("some text", to: $0) })
        .map({ print("success!"); return $0 })
        .flatMap({ close($0) })
        .isSuccess
}

// …translates directly to this less-mess
func exampleSyncIsLessUglyWithTry() -> Bool {
    do {
        let file = try open("some file")
        let stillAFile = try write("some text", to: file)
        print("success!")
        try close(stillAFile)
        return true
    } catch {
        return false
    }
}

It’s kind of easy to lose the flow in all that syntax, plus it sounds like you have a funky verbal tic with the repeated map and flatMap. You also have to keep deciding between (and distracting your reader with the distinction between) map and flatMap.
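Those chains also assume a few conveniences on the article's single-generic Result that weren't spelled out above; sketches of them might look like:

```swift
enum Result<Value> {
    case success(Value)
    case failure(Error)
}

extension Result {
    var isSuccess: Bool {
        if case .success = self { return true }
        return false
    }

    // Transforms the success value; passes a failure through unchanged.
    func map<U>(_ transform: (Value) -> U) -> Result<U> {
        switch self {
        case let .success(value): return .success(transform(value))
        case let .failure(error): return .failure(error)
        }
    }

    // Like map, but the transform itself can fail.
    func flatMap<U>(_ transform: (Value) -> Result<U>) -> Result<U> {
        switch self {
        case let .success(value): return transform(value)
        case let .failure(error): return .failure(error)
        }
    }
}
```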

Leave Result for Async, Switch to Throws When Sync

That suggests a rule of thumb: stick with throws for synchronous code. Applying that rule even to code that mixes sync (within the body of completion callbacks) and async (did I mention completion callbacks?) lets you play to the strengths of both throws and Result.

First, here’s a mechanical translation of the earlier exampleAsyncLikesResult function:

func exampleMechanicallyBridgingBetweenAsyncAndSync(completion hadDinner: @escaping (Bool) -> Void) {
    ezbakeTomorrow { result in
        do {
            let cookie = try result.unwrap()
            wait(tillCool: cookie) { result in
                do {
                    let coolCookie = try result.unwrap()
                    eat(coolCookie)
                    hadDinner(true)
                } catch {
                    hadDinner(false)
                }
            }
        } catch {
            hadDinner(false)
        }
    }
}

Each completion accepts a Result, but in working with it, it immediately returns to using the Swift try/throw/do/catch syntax.

try has a try? variant that lets you clean this up even further. This is more like the code you’d likely write in the first place when using this style:

func exampleNicerBridgingBetweenAsyncAndSync(completion hadDinner: @escaping (Bool) -> Void) {
    ezbakeTomorrow { result in
        guard let cookie = try? result.unwrap()
            else { return hadDinner(false) }

        wait(tillCool: cookie) { result in
        guard let coolCookie = try? result.unwrap()
                else { return hadDinner(false) }

            eat(coolCookie)
            hadDinner(true)
        }
    }
}

Bridging Helpers

This relies on some simple helper functions to bridge between Result and throws.

  • Result.unwrap() throws goes from Result to throws: The caller of an async method that delivers a Result can use result.unwrap() to bridge back from Result into something you can try and catch. unwrap() is a throwing function: it throws if the result is .failure and otherwise returns its .success value. We saw plenty of examples earlier.

  • static Result.of(trying:) goes from throws to Result: Implementations of async methods can use Result.of(trying:) to wrap up the result of running a throwing closure as a Result: this helper runs its throwing closure, stuffs any caught error in .failure, and otherwise wraps the result up in .success.

    This is used to implement async functions delivering a result. Since the running example delivered a Boolean, you haven’t seen this used yet. Here’s a small example:

func youComplete(me completion: @escaping (Result<MissingPiece>) -> Void) {
    doSomethingAsync { (boxOfPieces: Result<PieceBox>) in
        let result = Result.of {
            let box = try boxOfPieces.unwrap()
            let piece = try findMissingPiece(in: box)
            return piece
        }
        completion(result)
    }
}

What these functions are called varies across Result implementations (I’m eagerly awaiting the Swift version of what Promises/A+ delivered for JavaScript), but whatever your Result calls them, use them! (And if they aren’t there, you can readily write your own.)
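If the Result you're using lacks them, a sketch of both helpers (written against this article's single-generic Result<Value>, not the two-parameter Result that later shipped in Swift 5) is short:

```swift
enum Result<Value> {
    case success(Value)
    case failure(Error)
}

extension Result {
    // Result → throws: surface a failure as a thrown error.
    func unwrap() throws -> Value {
        switch self {
        case let .success(value): return value
        case let .failure(error): throw error
        }
    }

    // throws → Result: capture a throwing closure's outcome as a case.
    static func of(trying body: () throws -> Value) -> Result<Value> {
        do { return .success(try body()) }
        catch { return .failure(error) }
    }
}
```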

For a concrete example of implementing these, as well as the variation in names, check out antitypical/Result’s versions.

That’s a Wrap

So that’s the bottom line:

  • Use Result as your completion callback argument.
  • Within the completion body (and anywhere else you’re working synchronously), use do/catch to work with potential errors.

The post Throws Helps Readability appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/throws-helps-readability/feed/ 0