You Want Faster Meetings – Here’s How
You want shorter and more effective meetings – especially since “Zoom fatigue” is real. Read on.
First, respect time. Respect the client’s time, the project’s time, your teammates’ time, your time. Here are some things I try to consider before and during meetings.
Is a meeting the only way to handle this issue? Could it be handled over Slack? If you’re not sure, first try handling it without a meeting. If that works, great! If it doesn’t, you can then call a meeting and it will be better informed by the prior discussion.
There are steps you can take to ensure a valuable use of everyone’s time before the meeting even takes place.
Invite only those required to be there. If you must invite others, mark them as optional. Ensure the invite contains a description listing the meeting leader, the agenda, and relevant documents. That helps people prepare, or even decide whether they can skip the meeting or send someone else in their place.
Schedule for as little time as possible. Not only does that consume less time, but limited time keeps the meeting on track, and prevents a slow start and crammed finish. It’s also easier to schedule a quick meeting than a lengthy one. Use “speedy meetings”: 30-minute meetings go 25 minutes, 60-minute meetings go 50 (your calendar software may support this technique; if not, you should still do it yourself). If you’re in meetings all day, you need breaks to mentally wrap up the prior, change gears, prep for the next. You need to stand up, stretch, go to the bathroom, get a drink or a snack. If you’re in an office, you may need time to walk to another meeting room. Account for these things.
During the meeting, it’s worth considering not just how it runs, but what everyone’s role should be.
Start the meeting on time. Corollary: arrive on time, or earlier. There should be a designated scribe for the meeting, recording notes and publishing them in the publicly designated area. That provides not just a record, but also a way for those who couldn’t attend the meeting to stay informed.
Begin by stating the agenda and ground rules. For example, “To ensure we get through everything, let’s hold questions until after all the demos.” Stay on agenda and enforce the rules throughout the meeting. Yes, the meeting leader should interrupt when needed to keep the meeting on track. Recognize important derailments and redirect them to a more appropriate time or place.
End the meeting on time (don’t forget “speedy meetings”). In fact, start wrapping up at least 5 minutes before the scheduled end time – figure out this exact clock-time before the meeting starts, watch the clock, and begin wrapping up on time. Wrap-up is a time to summarize, restate action items, and thank people; it’s not a time to open the floor for more questions, but you can provide direction on where questions can be asked. As well, if the meeting is running over but you have another place to be, it’s OK to leave and stick to your schedule even if others are not sticking to theirs.
Second, remember – everyone involved in meetings is human, including you. We are built with limited capacity. Constantly overflowing our capacity day in and day out – that’s how burnout comes to bear. We must moderate our capacity, and remember that when we schedule a meeting, we’re affecting those people’s capacity. We want a productive team, not a burned out one – we must be considerate in our use of meetings.
Fill your calendar with “unavailable” blocks: too early before work, too late after work (helps with family commitments, time zone differences, and enforcing work hours). Schedule your time for lunch. Schedule your external appointments (e.g. doctor), and include time for travel. Schedule focus time, especially if you work on a “maker schedule”. Don’t forget about PTO. If you maintain multiple calendars, keep relevant time blocks synced across the calendars so people can know your full availability. And stick to your boundaries: don’t give up your lunch for a meeting.
Use your calendar’s schedule assistant to find a time when required attendees can meet. Have a means to help those who cannot attend the meeting catch up (record the meeting? notes? summary post on Slack? catch up with them in person later?). Or maybe this is a signal to try solving the problem without a meeting.
Don’t apologize, fix it. If you’re always apologizing for being late, you’re acknowledging you have suboptimal behavior. While the apology is appreciated, improving your behavior will go further – especially for yourself.
If meetings always start on time, that becomes the expectation. If rules are set and followed, that becomes the expectation. People will rise to the level of expectation you set for them, so set your standards well. This includes setting your own expectations for yourself. If you fail in holding a good meeting, the solution isn’t necessarily to loosen up (e.g. meeting derailed, ran out of time, make the next meeting longer), but to examine how you failed and can improve to meet the stricter goal (e.g. avoid derailment by enforcing agenda and rules).
“Because most of what we say and do is not essential. If you can eliminate it, you’ll have more time and more tranquility. Ask yourself at every moment, ‘Is this necessary?’” – Marcus Aurelius
Time is our most precious commodity. We don’t dislike meetings – we dislike having our precious time wasted. Putting the above techniques into practice with a 90% success rate (because sometimes we will fail) will help time be better spent.
At Big Nerd Ranch, kindness is a core value. It’s important to understand the distinction between being kind and being nice. Some of my advice may not come across as nice, but I’m ok with that – I prefer to manifest kindness, to my teammates, to my clients, to myself, to you. ❤️ Thank you.
Agile Software Development: Architecture Patterns for Responding to Change – Part 3
The approach presented in Part 2 is a good start, but where does PersonsViewControllerDelegate get implemented? Exactly how and where the delegate protocol is implemented can vary, just like any delegate implementation in typical iOS/Cocoa development. If we take a fundamental Model-View-Controller (MVC) approach, it would be common for the Controller to implement the delegate. But now we’re getting into terminology overlap, so we’ve taken a slightly different approach with Coordinators – specifically, a FlowCoordinator.
A FlowCoordinator does what the name says: it coordinates flows within the app. The app has numerous stand-alone ViewControllers, one for each screen in the app, and the FlowCoordinator stitches them together, helping to manage not just the UI flow but also the data flow. Consider a login flow (LoginFlowCoordinator): the login screen, which could flow to a “forgot password” screen or a sign-up screen, finally landing on the main screen of the app after successful login. Or a Settings flow (SettingsFlowCoordinator), which navigates the user in and out of the various settings screens and helps manage the data flow of the settings. Let’s rework the “show persons and their detail” part of the app to use a FlowCoordinator:
protocol PersonsViewControllerDelegate: AnyObject {
    func didSelect(person: Person, in viewController: PersonsViewController)
}

/// Shows a master list of Persons.
class PersonsViewController: UITableViewController {
    private var persons: [Person] = []
    private weak var delegate: PersonsViewControllerDelegate?

    func configure(persons: [Person], delegate: PersonsViewControllerDelegate) {
        self.persons = persons
        self.delegate = delegate
    }

    override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        let selectedPerson = persons[indexPath.row]
        delegate?.didSelect(person: selectedPerson, in: self)
    }
}

// -----

protocol FlowCoordinatorDelegate: AnyObject {
}

protocol FlowCoordinator {
    associatedtype DelegateType
    var delegate: DelegateType? { get set }
    var rootViewController: UIViewController { get }
}

// -----

protocol ShowPersonsFlowCoordinatorDelegate: FlowCoordinatorDelegate {
    // nothing, yet.
}

class ShowPersonsFlowCoordinator: FlowCoordinator {
    weak var delegate: ShowPersonsFlowCoordinatorDelegate?

    var rootViewController: UIViewController {
        return navigationController
    }
    private var navigationController: UINavigationController!

    private let persons = [
        Person(name: "Fred"),
        Person(name: "Barney"),
        Person(name: "Wilma"),
        Person(name: "Betty")
    ]

    init(delegate: ShowPersonsFlowCoordinatorDelegate) {
        self.delegate = delegate
    }

    func start() {
        let personsVC = PersonsViewController.instantiateFromStoryboard()
        personsVC.configure(persons: persons, delegate: self)
        navigationController = UINavigationController(rootViewController: personsVC)
    }
}

extension ShowPersonsFlowCoordinator: PersonsViewControllerDelegate {
    func didSelect(person: Person, in viewController: PersonsViewController) {
        let personVC = PersonViewController.instantiateFromStoryboard()
        personVC.configure(person: person)
        navigationController.pushViewController(personVC, animated: true)
    }
}
The FlowCoordinator protocol defines a typical base structure for a Flow Coordinator. It provides a means to get the rootViewController, and also a delegate of its own. The FlowCoordinator pattern does not demand a delegate, but experience has proven it a handy construct in the event the FlowCoordinator needs to pass information out (e.g. back to its parent FlowCoordinator).
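For illustration, here is a hypothetical sketch of that outward flow – the didFinish-style method and its trigger are invented for this example, not part of the series’ sample:

protocol ShowPersonsFlowCoordinatorDelegate: FlowCoordinatorDelegate {
    // Hypothetical: informs the parent coordinator that this flow is finished.
    func showPersonsFlowDidFinish(_ coordinator: ShowPersonsFlowCoordinator)
}

extension ShowPersonsFlowCoordinator {
    // Hypothetical trigger, e.g. the user tapped a "Done" button somewhere in the flow.
    func flowDidComplete() {
        delegate?.showPersonsFlowDidFinish(self)
    }
}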
ShowPersonsFlowCoordinator.start() begins by creating the initial ViewController: a PersonsViewController. It is a matter of some debate whether initial FlowCoordinator state should be established within init() or a separate function like start(); there are pros and cons to each approach. You can see here we also now have the FlowCoordinator owning the data source (the array of Persons), which is a more correct setup. The data to display and the delegate are then injected into the PersonsViewController immediately after instantiation and before the view loads. Now when a user views a PersonsViewController and selects a Person, its PersonsViewControllerDelegate is invoked. As ShowPersonsFlowCoordinator is the delegate, it implements the instantiation of and navigation (flow) to the PersonViewController to show the Person in detail.
To implement the other tab, create a ShowGroupsFlowCoordinator. Its start() instantiates the PersonsViewController, and its didSelect delegate implementation can push the GroupsViewController, as sketched below.
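Here is a minimal sketch of that coordinator, assuming GroupsViewController exists with a configure(person:) function (neither is defined in this series’ sample code):

protocol ShowGroupsFlowCoordinatorDelegate: FlowCoordinatorDelegate {
    // nothing, yet.
}

class ShowGroupsFlowCoordinator: FlowCoordinator {
    weak var delegate: ShowGroupsFlowCoordinatorDelegate?

    var rootViewController: UIViewController {
        return navigationController
    }
    private var navigationController: UINavigationController!

    private let persons = [
        Person(name: "Fred"),
        Person(name: "Barney"),
        Person(name: "Wilma"),
        Person(name: "Betty")
    ]

    init(delegate: ShowGroupsFlowCoordinatorDelegate) {
        self.delegate = delegate
    }

    func start() {
        let personsVC = PersonsViewController.instantiateFromStoryboard()
        personsVC.configure(persons: persons, delegate: self)
        navigationController = UINavigationController(rootViewController: personsVC)
    }
}

extension ShowGroupsFlowCoordinator: PersonsViewControllerDelegate {
    func didSelect(person: Person, in viewController: PersonsViewController) {
        // Same list UI as the first tab; only the flow differs.
        let groupsVC = GroupsViewController.instantiateFromStoryboard()
        groupsVC.configure(person: person) // assumed API for showing a Person's groups
        navigationController.pushViewController(groupsVC, animated: true)
    }
}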
We’re done. We’ve given the PersonsViewController a single responsibility, unaware of its surroundings, with dependencies injected and messages and actions delegated. This creates a thoughtful architecture, delivering quicker, with less complication, and a more robust, reusable codebase.
Stepping back and looking at the application as a whole, there are additional improvements that can be made to help with factoring, flow, and coordination.
Too often, AppDelegate gets overloaded with application-level tasks, instead of purely UIApplicationDelegate tasks. Having an AppCoordinator avoids the “massive App Delegate” problem: the AppDelegate remains focused on UIApplicationDelegate-level matters, while application-specific handling is factored into the Coordinator. If you’re adopting UIScene/UISceneDelegate, you can adopt a similar approach. The AppCoordinator could own shared resources, such as data sources, as well as owning and establishing the top-level UI and Flows. It might be implemented like this:
@UIApplicationMain
class AppDelegate: UIResponder {
    var window: UIWindow? = {
        UIWindow(frame: UIScreen.main.bounds)
    }()

    private lazy var appCoordinator: AppCoordinator = {
        AppCoordinator(window: self.window!)
    }()
}

extension AppDelegate: UIApplicationDelegate {
    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        appCoordinator.start()
        return true
    }

    func applicationWillResignActive(_ application: UIApplication) { }
    func applicationDidEnterBackground(_ application: UIApplication) { }
    func applicationWillEnterForeground(_ application: UIApplication) { }
    func applicationDidBecomeActive(_ application: UIApplication) { }
    func applicationWillTerminate(_ application: UIApplication) { }
}

// -----

class AppCoordinator: FlowCoordinator {
    weak var delegate: FlowCoordinatorDelegate? // protocol conformance; the AppCoordinator is top-most and does not have a delegate.

    private let window: UIWindow

    var rootViewController: UIViewController {
        guard let rootVC = window.rootViewController else {
            fatalError("unable to obtain the window's rootViewController")
        }
        return rootVC
    }

    private var personDataSource: PersonDataSourceable!
    private var showPersonsFlowCoordinator: ShowPersonsFlowCoordinator!
    private var showGroupsFlowCoordinator: ShowGroupsFlowCoordinator!

    init(window: UIWindow) {
        self.delegate = nil // emphasize that we do not have a delegate
        self.window = window
        establish()
    }

    func start() {
        // Typically a FlowCoordinator will install their first ViewController here, but
        // since this is the app's coordinator, we need to ensure the root/initial UI is
        // established at a prior time.
        //
        // Still, having this here is useful for convention, as well as giving a clear
        // point of instantiation and "starting" the AppCoordinator, even if the implementation
        // is currently empty. Your implementation may have tasks to start.
    }

    private func establish() {
        establishLogging()
        loadConfiguration()

        personDataSource = PersonDataSource() // shared data resource

        showPersonsFlowCoordinator = ShowPersonsFlowCoordinator(dataSource: personDataSource, delegate: self)
        showPersonsFlowCoordinator.start()

        showGroupsFlowCoordinator = ShowGroupsFlowCoordinator(dataSource: personDataSource, delegate: self)
        showGroupsFlowCoordinator.start()

        // abbreviated code, for illustration.
        let tabBarController = UITabBarController(...)
        tabBarController.setViewControllers([showPersonsFlowCoordinator.rootViewController,
                                             showGroupsFlowCoordinator.rootViewController], animated: false)

        window.rootViewController = tabBarController
        window.makeKeyAndVisible()
    }
}

extension AppCoordinator: ShowPersonsFlowCoordinatorDelegate { }
extension AppCoordinator: ShowGroupsFlowCoordinatorDelegate { }
Storyboard segues create tight couplings: in the storyboard file itself, and in the prepare(for:sender:) function, since it must exist within the ViewController being transitioned from. We are striving to create loose couplings with flexible routing. Thus, segues generally are avoided with this approach.
The use of dependency injection – where the Coordinator typically owns a resource, “passes the baton” in via configure(), and receives data out via delegation – tends to avoid the use of singletons and the issues they can bring.
I’m not anti-singleton, but singletons must be used carefully, as they can complicate unit testing and make modularity difficult.
That said, I have encountered times using this design where the baton passing was heavy-handed. Some nested child Coordinator was the only thing that needed some resource, and that resource was owned somewhere at the top of the chain. Then everything in between had to be modified, just to pass the baton down. Such is the trade-off, and it is more the exception than the rule.
This isn’t a perfect solution to all things (as you can see, there’s some variance and adaptability allowed). However, it’s a solution that has worked well for us across a great many projects at Big Nerd Ranch.
As development on Apple platforms evolves, due to technologies like Combine and SwiftUI, we’ll evolve our approaches to enable us to leverage new technology while maintaining strong foundational principles of software development.
Hopefully, it can work well for you and your projects.
Agile Software Development: Architecture Patterns for Responding to Change – Part 2
ViewControllers

While the Contact app example is simple, it shows how business logic creeps into display logic and makes reuse difficult. The problem is that the ViewController knows too much about the world around it. A ViewController should be designed to be unaware of the world around it.

- A ViewController should understand only how to make itself operate and go; it only controls itself.
- If a ViewController needs something from the outside world to make it go, such as the data to display, those things (dependencies) must be provided (injected).
- If a ViewController needs to communicate something back to the outside world, it should use a broadcast mechanism such as delegation.

There is also the implication that a ViewController should only publicly expose an API that is truly public. Properties such as IBOutlets, delegates, and other data members, along with internal implementation functions, should be declared private. In fact, it’s good practice to default to private and only open up access when it must be opened (Principle of Least Privilege).
The ViewController will be implemented in standard ways: in code, via storyboard, or whatever the project’s convention may be.
That which the ViewController needs to work – dependencies, such as data, helpers, managers, etc. – must be injected at or as close to instantiation time as possible, preferably before the view is loaded (i.e. before viewDidLoad() is called). If the ViewController is instantiated in code, the dependency injection could be performed via an overloaded initializer. If instantiating a ViewController from a storyboard, a configure() function is used to inject the dependencies, calling configure() as soon as possible after instantiation. Even if injection at initialization is possible, adopting configure() may be desirable to ensure a consistent pattern of ViewController initialization throughout the app. That which the ViewController needs to relay to the outside world – for example, the user tapped something – must be relayed outward, preferably via delegation.
Here’s the Contacts code, reworked under this design:
/// Shows a single Person in detail.
class PersonViewController: UIViewController {
    // Person is an implicitly unwrapped optional, because the public API contract for this
    // class requires a `Person` for the class to operate properly: note how `configure()`
    // takes a non-optional `Person`. The use of an Implicitly Unwrapped Optional simplifies
    // and enforces this contract.
    //
    // This is not a mandated part of this pattern; just something it enables, if appropriate
    // for your need.
    private var person: Person!

    func configure(person: Person) {
        self.person = person
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        title = person.name // for example
    }
}

/// Delegate protocol for handling PersonsViewController actions
protocol PersonsViewControllerDelegate: AnyObject {
    func didSelect(person: Person, in viewController: PersonsViewController)
}

/// Shows a master list of Persons.
class PersonsViewController: UITableViewController {
    private let persons = [
        Person(name: "Fred"),
        Person(name: "Barney"),
        Person(name: "Wilma"),
        Person(name: "Betty")
    ]

    private weak var delegate: PersonsViewControllerDelegate?

    func configure(delegate: PersonsViewControllerDelegate) {
        self.delegate = delegate
    }

    override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        let selectedPerson = persons[indexPath.row]
        delegate?.didSelect(person: selectedPerson, in: self)
    }
}
PersonsViewController now supports a delegate. The delegate is provided to the PersonsViewController by means of a configure() function when the PersonsViewController is instantiated (see Part 3 of this series). Because the delegate property is an implementation detail, it is declared private. The public API for accessing (setting) the delegate is the configure() function. Note the delegate argument is non-optional; the public API contract of this class is that a delegate is required (optionality and weakness is an implementation detail of delegate properties).
When the user taps a row, the delegate’s didSelect(person:in:) function is invoked. How should didSelect(person:in:) be implemented? However is appropriate for the context: showing the PersonViewController in the first app tab, and showing a GroupsViewController in the second app tab. This business logic is up to someone else to decide, not the PersonsViewController. I’ll show how this comes together in Part 3.
Now the PersonsViewController is more flexible and reusable in other contexts. Perhaps the next version of the app adds a third tab, showing a list of Persons where tapping a Person shows their parents. We can quickly implement the third tab with the existing PersonsViewController and merely provide a new delegate implementation, as sketched below.
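As a sketch of how little that third tab might require – ParentsViewController, Person.parents, and the owning ParentsTabCoordinator are all invented for illustration:

final class ParentsTabCoordinator {
    private let navigationController: UINavigationController

    init(navigationController: UINavigationController) {
        self.navigationController = navigationController
    }
}

extension ParentsTabCoordinator: PersonsViewControllerDelegate {
    func didSelect(person: Person, in viewController: PersonsViewController) {
        // Reuse the exact same list UI; only the selection behavior is new.
        let parentsVC = ParentsViewController.instantiateFromStoryboard()
        parentsVC.configure(parents: person.parents) // hypothetical API
        navigationController.pushViewController(parentsVC, animated: true)
    }
}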
I did not create a delegate for the PersonViewController because it has no need to communicate with outside code. If PersonViewController needed to notify outside code of user action (e.g. it supports an Edit mode and the user tapped to edit), then a PersonViewControllerDelegate would be appropriate to define.
Additionally, the data was simply declared as a private data member of PersonsViewController. A better solution would be for the data to be injected via the configure() function – perhaps as [Person], or perhaps as a DataSource type object that provides the [Person] from a network request, from a Core Data store, or wherever the app stores its data.
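Here’s a minimal sketch of that injection, assuming a hypothetical PersonDataSourceable protocol (Part 3 of this series uses a similar name in its AppCoordinator):

protocol PersonDataSourceable {
    var persons: [Person] { get }
}

class PersonsViewController: UITableViewController {
    private var dataSource: PersonDataSourceable!
    private weak var delegate: PersonsViewControllerDelegate?

    // The data now arrives from outside, alongside the delegate.
    func configure(dataSource: PersonDataSourceable, delegate: PersonsViewControllerDelegate) {
        self.dataSource = dataSource
        self.delegate = delegate
    }

    override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        delegate?.didSelect(person: dataSource.persons[indexPath.row], in: self)
    }
}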
It’s an intentional choice to prefer delegation over notification (Notification/NotificationCenter). In general, only one other thing cares to know about the changes to the ViewController, so delegation provides clear and focused handling. Notification is global and heavy-handed for our needs. It’s great for fire-and-forget operations, but inappropriate when receipt is required. As well, Notification permits others that may not be directly involved to tap into the event system of the View-Delegate, which is generally not desired.
This isn’t to say you cannot use Notification – you could, if that is truly right for the situation. By the same token, if you find yourself needing a ViewController to notify more than one other thing, I would 1. reconsider your design to ensure this is truly needed and/or there isn’t another way to solve it, and 2. consider instead using a MulticastDelegate type of approach, so it’s still delegation, just to many explicit objects (vs. NotificationCenter’s global broadcast).
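UIKit doesn’t provide a MulticastDelegate type; here’s a minimal sketch of one common shape for such a helper, holding its delegates weakly:

import Foundation

final class MulticastDelegate<T> {
    // NSHashTable holds the delegates weakly, avoiding retain cycles.
    private let delegates = NSHashTable<AnyObject>.weakObjects()

    func add(_ delegate: T) {
        delegates.add(delegate as AnyObject)
    }

    func remove(_ delegate: T) {
        delegates.remove(delegate as AnyObject)
    }

    func invoke(_ body: (T) -> Void) {
        for case let delegate as T in delegates.allObjects {
            body(delegate)
        }
    }
}

// Usage sketch: still delegation, just to many explicit objects.
// let selectionDelegates = MulticastDelegate<PersonsViewControllerDelegate>()
// selectionDelegates.invoke { $0.didSelect(person: person, in: self) }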
Consider as well that, instead of using delegation or notification, you could provide closures to be executed as handlers for these actions. On one project, I implemented this design using closures because it made sense for what needed to be handled. There’s no hard-and-fast approach: it’s about knowing the options, and when it is and is not appropriate to use each.
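A minimal closure-based variant of PersonsViewController (an alternative sketch, not the version this series builds):

class PersonsViewController: UITableViewController {
    private var persons: [Person] = []
    // An injected handler takes the place of the delegate.
    private var didSelectPerson: ((Person) -> Void)?

    func configure(persons: [Person], didSelectPerson: @escaping (Person) -> Void) {
        self.persons = persons
        self.didSelectPerson = didSelectPerson
    }

    override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        didSelectPerson?(persons[indexPath.row])
    }
}

Be mindful of what such closures capture (e.g. capture a coordinator weakly) to avoid retain cycles.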
Typically, a ViewController’s communication with the outside world will be one-way – outgoing messages – so return types will be Void. However, it is within the pattern to allow delegate functions to return a value. Perhaps during a ViewController’s operation, someone outside must be queried, and then the ViewController responds accordingly. It’s perfectly acceptable to support this sort of query via delegation; this is another reason why delegation can work better than notifications. Note that this lends itself to synchronous queries. If you find the ViewController needs an async mechanism via delegation, it may be worth considering another route (including accepting the complexity).
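For example, a value-returning query could be sketched like this (shouldSelect is an invented addition to the series’ delegate protocol):

protocol PersonsViewControllerDelegate: AnyObject {
    func didSelect(person: Person, in viewController: PersonsViewController)
    // Synchronous query: the ViewController asks, the delegate answers immediately.
    func shouldSelect(person: Person, in viewController: PersonsViewController) -> Bool
}

// Within PersonsViewController:
override func tableView(_ tableView: UITableView, willSelectRowAt indexPath: IndexPath) -> IndexPath? {
    let person = persons[indexPath.row]
    let allowed = delegate?.shouldSelect(person: person, in: self) ?? true
    return allowed ? indexPath : nil
}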
We’re not done yet! In Part 3, I’ll show how to make all of this… flow.
Agile Software Development: Architecture Patterns for Responding to Change – Part 1
Every software project of sufficient age will undergo change. We cannot predict how it will change, but accepting that change will happen can improve our decision-making process. We have to ship, we have to make money – it’s a business, after all. But resources like time, money, and effort are finite; thus it’s better to be wiser and more efficient in how we spend those resources. There are ways to develop (write) software and architect (plan) code that facilitates responding to change and strengthens stewardship with stakeholders.
This article series explores a coding approach we use at Big Nerd Ranch that enables us to more easily respond to change. It will start by laying out an example and explaining why some “traditional” approaches make change-response difficult. Part 2 will introduce the approach, and Part 3 will complete it.
To help illustrate and understand how we can manage change, let’s look at a simplified Contacts app. I have chosen to use the common Master-Detail style interface, with a TableViewController showing the master list of Persons. Selecting a Person shows a ViewController displaying the Person’s details. The implementation is typical and likely familiar:
import UIKit

/// My `Person` model.
struct Person {
    let name: String
}

/// Shows a single Person in detail.
class PersonViewController: UIViewController {
    var person: Person?
    // Imagine it has functionality to display the Person in detail.
}

/// Shows a master list of Persons.
class PersonsViewController: UITableViewController {
    let persons = [
        Person(name: "Fred"),
        Person(name: "Barney"),
        Person(name: "Wilma"),
        Person(name: "Betty")
    ]

    override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        let selectedPerson = persons[indexPath.row]

        // `instantiateFromStoryboard()` is poetic convenience for this example
        let personVC = PersonViewController.instantiateFromStoryboard()
        personVC.person = selectedPerson
        navigationController?.pushViewController(personVC, animated: true)
    }

    // Other TableView delegate and data source functions omitted for brevity
}
When my app is launched, the PersonsViewController is the initial ViewController loaded and displayed. A user can tap on a row/cell of a Person, and the app will navigate to display the details of that Person. This is a fairly common scenario in mobile apps.
As it is now, the app has a single view showing a list of Persons; tap a Person and the Person’s detail is shown. The product stakeholders want to expand the app to support a new feature: Groups. To support this new feature, they want a view showing a list of Persons, and when you tap a Person it shows a list of the Person’s Group memberships. The app should change from a single-view UI to a tabbed UI, with the first tab for the Persons feature and the second for the Groups feature. How can we implement this change request in a manner that provides good stewardship of time, money, and resources, and also leads to a more robust, more maintainable code base?
Consider the commonalities: both tabs start by showing a list of Persons. Our PersonsViewController shows a list of Persons, so we can use it to implement both tabs, right? While it does show persons, it’s not able to satisfy our requirements: the data to display is hard-coded within the PersonsViewController, and the tight coupling to the PersonViewController doesn’t support showing Group membership.
How can we solve this?
I’ll immediately disqualify three approaches.
First, duplication. This is creating a new class, such as PersonsGroupViewController, that replicates PersonsViewController in full (perhaps by select-all, copy, paste) and edits tableView(_:didSelectRowAt:) to transition to a GroupsViewController instead of a PersonViewController. Duplicating code in such a manner might work, but maintaining essentially the same code in multiple places will become a nightmare.
Second, subclassing. This is creating a common base class from which PersonsViewController and PersonsGroupViewController inherit, varying just in how the didSelect is handled. This isn’t necessarily a bad option (and in some cases may be the right approach), but in addition to creating additional maintenance overhead, it’s also not quite the correct model. The display of a list of persons is the same regardless of the action taken when tapping a cell, so subclassing just to change the tap-action is a little much.
Third, expanding PersonsViewController with some sort of “behavior” or “configuration” enum, such as:
class PersonsViewController: UITableViewController {
    enum SelectionMode {
        case personDetail
        case groupMembership
    }

    var selectionMode: SelectionMode = .personDetail // default, so the example compiles

    override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        switch selectionMode {
        case .personDetail:
            let personVC = PersonViewController.instantiateFromStoryboard()
            navigationController?.pushViewController(personVC, animated: true)
        case .groupMembership:
            let groupVC = GroupsViewController.instantiateFromStoryboard()
            navigationController?.pushViewController(groupVC, animated: true)
        }
    }
}

// when filling in tab 1…
let personsVC = PersonsViewController.instantiateFromStoryboard()
personsVC.selectionMode = .personDetail

// when filling in tab 2…
let personsVC = PersonsViewController.instantiateFromStoryboard()
personsVC.selectionMode = .groupMembership
This approach is problematic because, in addition to making the couplings tighter and more complex, it does not scale well when more modes are added. You might be tempted to think “Oh, it’s just two cases,” but remember we are trying to position the codebase to be able to respond to future, and unknown, changes. The reality of codebase maintenance is that, once you establish a pattern, future developers are likely to maintain that pattern. If more changes are required, a future developer is more likely to expand the enum and lead the ViewController down the road to the ill-fated “Massive View Controller” problem.
There is a better way, which I’ll introduce in Part 2.
A Quick BNR Guide to Remote Working
Working remotely is woven into the fabric of our daily lives at Big Nerd Ranch. With over half of our staff being outside of the Atlanta area, and with our Atlanta staff not always in the office, successful remote work is something we do. If you find yourself suddenly having to work remotely, we know it can bring challenges and adjustments. Here’s a list of tips, tricks, and best practices that we’ve found to help us be successful in remote work.
Issues pop up from time to time, or things just don’t work the way we want them to.
Just because you’re home doesn’t necessarily mean you’re available. That must be understood, respected, and enforced by everyone. This applies to family, friends, neighbors, anyone living in the household, and yourself.
If you can use (physical) boundaries or ways to signal to others you’re unavailable, do so. For example, a separate room with a closable door; an availability signal light; a sign; whatever works in your context and within your means. Helping others know you’re in a “do not disturb” state helps minimize interruptions and disturbances. But again, it must be understood, respected, and enforced by all – it’s the only way to make it successful and maintain your sanity.
For me, I’m fortunate to have a separate room for an office, with a door. When I’m working and cannot be disturbed, my door is closed. If my door is closed and a family member needs me, they knock and wait for an answer. I may not answer – don’t take it personally; I might be in a meeting or I might be deeply focused on a problem. If I cannot answer, I will come and find you when I can. Rules such as these were established early on in my career of working from home, and they were instrumental in keeping some order and sanity in the household. You will evolve your own.
That said, do NOT overlook the blessings this opportunity provides. One big reason I wanted to work from home was to be with my kids as they grew up. Most of their interruptions were simply to show me the cool thing they did – it required all of 15 seconds and an “I love you, too” to satisfy them and gain hours of uninterrupted work. Besides, I can’t see my kids grow up if I don’t take the time to see them.
There will be challenges to overcome; boundaries help overcome some challenges. Remember to respect and enforce them.
Adjusting to remote work takes time. It’s an adjustment for the company, for your co-workers, and for you. There will be struggles, but there will also be discoveries of new and exciting ways to work. Have patience, give grace, and we’ll all get through this together.
Git Smudge and Clean Filters: Making Changes So You Don’t Have To
Sometimes software development requires us to make local changes to files to perform our daily work, but those changes must not be committed back to the source code repository. And in our day-to-day routine, we do the thing that must not be done—we commit a change we didn’t mean to commit. Look, stuff happens, but then we’re incanting esoteric git commands to revert state, and while recoverable, the flow breakage is unwelcome. If you work in such a situation, git smudge and clean filters may be your solution.
I worked on a project where we lacked direct deployment access, which meant build identifiers within the code repository had to remain stable for deployment. Unfortunately, we couldn’t do our daily work with those identifiers and had to maintain constant local changes to some files. Workable, but one of those irritations that add up – elimination would remove friction.
At the root of the repository is a .gitattributes file. It might look like this:
project.pbxproj filter=munge-project-identifier
The .gitattributes file affects everyone who clones the repository since it’s committed to the repository. However, the filter definition – what munge-project-identifier means – is not. If someone’s git config does not define that filter, this attribute won’t do anything. This means that everyone gets the .gitattributes, but actually applying the filter is opt-in. In my case, the build-to-deployment environment didn’t want these changes, just us developers, so we had to help all developers apply the filter. That is one downside: it’s totally quiet, so failures aren’t readily surfaced.
While it’s permitted to define the filter inline, that’s useful for only the simplest of filters. Furthermore, if multiple developers accessing the codebase all need to apply the filter, it should be easy for everyone to adopt without error. So we use scripts.
Let’s say I had to change an identifier from com.blah.user-thing to com.blah.user-bnr. I would create a script for each in a scripts/ folder committed to the repository. A filter script reads file content on stdin and writes the filtered content to stdout, so each script is a one-line sed (they need a shebang and the executable bit set):

scripts/git-filter-smudge-project-identifier.sh

#!/bin/sh
sed -e 's/com.blah.user-thing/com.blah.user-bnr/'

scripts/git-filter-clean-project-identifier.sh

#!/bin/sh
sed -e 's/com.blah.user-bnr/com.blah.user-thing/'
Using a text editor, edit the $(PROJECTDIR)/.git/config file to add the smudge and clean filters:
[filter "munge-project-identifier"]
smudge = /Users/hsoi/Documents/BNR/Development/Projects/Fred/code/scripts/git-filter-smudge-project-identifier.sh
clean = /Users/hsoi/Documents/BNR/Development/Projects/Fred/code/scripts/git-filter-clean-project-identifier.sh
Or using git directly:
$ git config --local filter.munge-project-identifier.smudge /Users/hsoi/Documents/BNR/Development/Projects/Fred/code/scripts/git-filter-smudge-project-identifier.sh
$ git config --local filter.munge-project-identifier.clean /Users/hsoi/Documents/BNR/Development/Projects/Fred/code/scripts/git-filter-clean-project-identifier.sh
It’s intentional to use the absolute paths; it’s possible to support relative paths in the .git/config file, but that requires more work. These changes are local per developer, so an absolute path is sufficient.
Once all of this is in place:

- git status will show your working copy is clean.
- I see com.blah.user-bnr in my local (smudged) file.
- The repository stores com.blah.user-thing in my remote file (the cleaned version that gets committed).

It’s possible that, despite the above changes, the “old” data still shows and the filter is not applied. Here are a couple of things I’ve tried:
First, double-check that all steps, names, and paths are correct.
Second, try deleting and restoring the file(s) affected by the filter. I would delete the file directly (e.g. go into the Finder and Trash the file), then use git (e.g. git reset) to restore the file via a git mechanism. This should trigger git’s hook to apply filters.
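A sketch of that dance from the command line (the filename is illustrative; git checkout -- is one such git restore mechanism):

# Remove the working copy of the filtered file...
rm project.pbxproj
# ...then restore it through git, which re-applies the smudge filter.
git checkout -- project.pbxproj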
If there are still problems, or you want to learn more nitty-gritty about git attribute keyword expansion support (what “git smudge and clean” is all about), you can check the official documentation: “Customizing Git Attributes: Keyword Expansion”.
Git smudge and clean filters are a little nugget hidden away in the corner of git esoterica. But once you know about and use them, the friction they remove helps your day run smoother. It’s these sorts of efficiencies that we tend to build into all the work we do. If you’d like to learn more about our process, schedule a chat with one of our friendly Nerds!
Image credit: https://git-scm.com/downloads/logos
SiriKit Part 4: Custom UI
Siri is Apple’s intelligent personal assistant. Siri allows you to use your voice to interact with your iOS, watchOS, tvOS, and macOS devices. As with many Apple technologies, Apple has made it easier for developers to integrate their apps with Siri through SiriKit. This series explores SiriKit and how you can use it to expose your app’s functionality through Siri. Part 1 provided the basics. In part 2 we explored Resolve, Confirm, and Handle. Finishing touches were discussed in part 3. Now we’ll look at how a custom UI can strengthen your app’s presence and brand in Siri.
Apple provides so much of the Intents framework, making it relatively easy for developers to expose their app’s functionality through Siri. While this customizes Siri’s behavior to your app, Siri’s UI remains functional but generic. To increase your app’s impact on the user, your app can opt to provide a custom UI for Siri. This optional custom UI can supplement Siri’s UI or completely replace it. You can decide what information to show the user – including showing information Siri may not normally show, like per-user information – and couple it with your app’s branding and familiar UI. Providing a custom UI can make your app’s Siri integration stand out, and it’s done by creating an Intents UI Extension.
Custom Siri UI is provided by creating an Intents UI Extension. The Intents UI Extension works in conjunction with your Intents Extension, providing a view controller which can display information about the interaction. The two extensions do not directly communicate; the UI Extension receives information by way of an INInteraction object, which will be explained below.
There are a couple of ways to get started:
If you already have an Intents Extension and wish to add a custom UI for it:
If you are creating a new Intents Extension and want custom UI to go with it:
Then, just like in Part 1 with the Intents Extension:
- In the extension’s Info.plist, find the NSExtension item. If the Info.plist doesn’t contain one, add one of type dictionary.
- Within it, find the NSExtensionAttributes item. If there isn’t one, add one of type dictionary.
- Fill in the IntentsSupported extension attribute (adding one of type array, if needed). Each entry should be a string of the class name of the Intent you support custom UI for, one entry for every supported Intent. For example, if you support a custom UI for your INSendMessageIntent, there should be an entry of “INSendMessageIntent”.
file, note an Intents UI Extension has an NSExtensionMainStoryboard
. The initial view controller within that storyboard is the principal class for the extension (if you wish to create your view controller programatically, remove the NSExtensionMainStoryboard
entry and use NSExtensionPrincipalClass
, setting its value to the name of your UIViewController
subclass). Since the principal class is a UIViewController
subclass, you have access to almost all UIKit and UIViewController
functionality. I say “almost” because you can draw anything, have animations, embed child view controllers, but do not use controls nor gesture recognizers because the system prevents delivery of touch events to the custom view.
The UIViewController conforms to the INUIHostedViewControlling protocol, which defines the functions for providing the custom interface. There are two functions:

- configure(with interaction:, context:, completion:)
- configureView(for parameters:, of interaction:, interactiveBehavior:, context:, completion:)
When configureView() was introduced in iOS 11, configure() was not deprecated – both approaches remain valid but different ways to customize the Siri UI.
Introduced in iOS 10, configure() augments the default Siri interface. The principal UIViewController is instantiated, you configure() it, then it is installed into the UI alongside the default Siri interface. This leaves open the possibility for duplication of information, with both your and Siri’s UIs providing it. You can have your UIViewController conform to INUIHostedViewSiriProviding and suppress some bits of information, but what comes through and what gets suppressed varies from Intent to Intent – yes, you will need to do some experimenting to see exactly how your Intent works out.
Using configure() is a reasonable approach. It’s straightforward, it’s simple, and if you only need to supplement what Siri provides (e.g. adding branding), it may be the perfect solution. Of course, if you need to support iOS 10, it’s your only solution. But if you’re supporting at least iOS 11, and you need finer control over the UI customization, there is configureView().
offers customization of the entire UI, or just selected parts. configureView()
has a parameter parameters: Set<INParameter>
; these are the parameters of the interaction
, such as the recipients and contents of a send message Intent or the description of a workout Intent. With access to individual parameters, you can choose on a per-parameter basis to show your own UI or Siri’s default, suppress showing a parameter, group the parameters in your own way, add non-parameter UI like a header, or even fully replace the Siri UI with your own customized layout.
When Siri displays your UI, it loads your UI extension and instantiates the view controller for each parameter (three parameters? three view controller instances), calling each view controller’s configureView(). The first time, it is called with zero parameters, providing you with an opportunity to add non-parameterized UI (such as a branding banner) or to replace the entire default UI with your own custom UI. Subsequent calls to configureView() will be in a well-defined and documented order for each Intent parameter.
The increased support for customization is useful, but comes with greater cost. Implementing configureView() is slightly more complicated because there are more cases to contend with, especially if your UI extension supports multiple Intents. It’s important to know that every time configureView() is called, it is called on a new instance of your principal UIViewController! You do not have one view controller instance that you configure for each parameter. For each parameter, a new UIViewController is instantiated, then configured and installed for that and only that parameter. Thus, your UIViewController in your storyboard is not a monolithic view (like you may have with configure()). You must provide per-parameter views, or views per however you wish the parameters to be grouped and displayed. Furthermore, these extensions have tight memory constraints, so you must balance building your display against runtime realities. Watch “What’s New In SiriKit” from WWDC 2017 for additional explanation.
Again, configureView() is only available in iOS 11 and later. If your UI Extension implements both configure() and configureView(), under iOS 11 only configureView() will be called.
Like with configure(), I strongly recommend you spend time with an empty implementation of configureView(), using the debugger to examine the parameters and their order for your Intent. Learning how the individual parts work will go a long way towards helping you architect the best custom solution for your app.
Let’s add some custom UI to BNRun. There are some nifty workout-oriented emoji, so we’ll use those to jazz things up. I’m going to add some developer-oriented information to the UI, since this is sample code and UI customization allows us to add information Siri doesn’t typically display. The full sample code can be found in the Github repository. You will want to refer to it for complete understanding, as the following snippets are abbreviated for clarity.
First, you should know about INInteraction.
An INInteraction object encapsulates information about the Siri interaction. While it can be used in a number of ways and places, it’s the primary way the UI Extension is informed about what’s going on and thus how to configure the UI. The properties you’ll most care about are:

- intent: INIntent – the Intent of the interaction.
- intentResponse: INIntentResponse? – the Intent response, if appropriate.
- intentHandlingStatus: INIntentHandlingStatus – the state of execution, like .ready, .success, or .failure.

It’s important to get your information from the INInteraction object and not by another means, to ensure proper reflection of the user’s interaction.
Let’s first look at configure(with interaction:, context:, completion:):
func configure(with interaction: INInteraction, context: INUIHostedViewContext,
completion: @escaping (CGSize) -> Void) {
var viewSize = CGSize.zero
if let startWorkoutIntent = interaction.intent as? INStartWorkoutIntent {
viewSize = configureUI(with: startWorkoutIntent, of: interaction)
}
else if let endWorkoutIntent = interaction.intent as? INEndWorkoutIntent {
viewSize = configureUI(with: endWorkoutIntent, of: interaction)
}
completion(viewSize)
}
Since BNRun supports both Starting and Stopping a workout, we have to look at the interaction to determine the intent. Once we know, we can configure the UI. Our private configureUI() takes both the INIntent and the INInteraction so the UI extension can extract relevant information for display. In BNRun, I’m able to construct a Workout object from an INStartWorkoutIntent and use that Workout’s data to fill in my UI. I also use the INInteraction to provide some additional information in my custom UI.
The last thing that must be done is call the completion, passing a desired size for the view. Note it’s a desired (requested) size, and while Siri will strive to honor it, it may not. Exactly what you provide is up to you. In BNRun, since everything is set up with Auto Layout, I pass view.systemLayoutSizeFitting(UILayoutFittingCompressedSize) for the desired size. Since UIViewController conforms to NSExtensionRequestHandling, you could look at your view controller’s extensionContext and return its hostedViewMinimumAllowedSize or hostedViewMaximumAllowedSize. Or you could calculate and return a specific size. Finally, if for some reason you are not supporting a custom UI for this Intent, return a size of CGSize.zero; this will tell Siri there is no custom UI and Siri will provide its default UI. I encourage you to experiment with different approaches to see what results they bring and how they can work for you.
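For instance, here’s a sketch combining two of those options – desiredSize() is a hypothetical helper that clamps an Auto Layout-derived size to Siri’s maximum:

func desiredSize() -> CGSize {
    let fitted = view.systemLayoutSizeFitting(UILayoutFittingCompressedSize)
    guard let context = extensionContext else { return fitted }
    // Clamp to what Siri allows for hosted views.
    let maxSize = context.hostedViewMaximumAllowedSize
    return CGSize(width: min(fitted.width, maxSize.width),
                  height: min(fitted.height, maxSize.height))
}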
Let’s see what it looks like:
That’s nicer than the default UI alone, but note the duplication of information? It’s not horrible, but we can do better.
Here’s what configureView(for parameters:, of interaction:, interactiveBehavior:, context:, completion:) looks like:
@available(iOS 11.0, *)
func configureView(for parameters: Set<INParameter>, of interaction: INInteraction,
interactiveBehavior: INUIInteractiveBehavior, context: INUIHostedViewContext,
completion: @escaping (Bool, Set<INParameter>, CGSize) -> Void) {
if parameters.count == 0 {
_ = instantiateAndInstall(scene: "HeaderScene", ofType: HeaderViewController.self)
let viewSize = view.systemLayoutSizeFitting(UILayoutFittingCompressedSize)
completion(true, [], viewSize)
}
else {
let startIntentDescriptionParameter = INParameter(for: INStartWorkoutIntent.self,
keyPath: #keyPath(INStartWorkoutIntent.intentDescription))
let endIntentDescriptionParameter = INParameter(for: INEndWorkoutIntent.self,
keyPath: #keyPath(INEndWorkoutIntent.intentDescription))
if parameters.contains(startIntentDescriptionParameter),
let startWorkoutIntent = interaction.intent as? INStartWorkoutIntent {
let viewSize = configureUI(with: startWorkoutIntent, of: interaction)
completion(true, [startIntentDescriptionParameter], viewSize)
}
else if parameters.contains(endIntentDescriptionParameter),
let endWorkoutIntent = interaction.intent as? INEndWorkoutIntent {
let viewSize = configureUI(with: endWorkoutIntent, of: interaction)
completion(true, [endIntentDescriptionParameter], viewSize)
}
else {
completion(false, [], .zero)
}
}
}
As I mentioned above, configureView() is more complicated but more powerful than configure(). When no parameters are passed, that’s an opportunity to install a fun custom banner. When parameters are passed, we create our own INParameter objects, which are objects describing an Intent’s parameters. We match our parameters against the given parameters, and configure the UI accordingly. Finally, just like configure(), the completion must be invoked, passing the desired size of that view (note: this particular parameter’s view – not the entire view), along with a “success” boolean (whether this view was successfully configured or not) and the set of parameters this custom view is displaying. That last part allows for some interesting functionality because it enables you to group the display of multiple parameters into a single view. INSendMessageIntent has two parameters: recipients and contents. If you were processing the parameters for that Intent, you would first receive the recipients parameter and could create a custom view for just recipients. Then you would receive the contents parameter and could create a custom view for just the contents. But if you wanted to combine the display of recipients and contents into a single view, when processing the recipients parameter you could create your custom UI and configure it for both parameters, then in the completion pass a set of two INParameter objects: one for recipients and one for contents. Siri will see the contents parameter was handled and will not call configureView() for contents. This sort of parameter grouping provides you with a great deal of flexibility in how you customize your UI.
When it comes to extracting data, you have the INInteraction and can extract data by digging through its data structures. But you can also use your INParameters to extract information directly via INInteraction’s func parameterValue(for parameter: INParameter) -> Any?, as sketched below.
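For example, a sketch pulling one value out directly (the label outlet is hypothetical):

let descriptionParameter = INParameter(for: INStartWorkoutIntent.self,
                                       keyPath: #keyPath(INStartWorkoutIntent.intentDescription))
// parameterValue(for:) returns Any?, so cast to the type you expect.
if let description = interaction.parameterValue(for: descriptionParameter) as? String {
    descriptionLabel.text = description
}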
Let’s see what it looks like:
That’s much better. But, the emoji? That’s supposed to be a header view, so why is it a footer? Apple documents that the first call to configureView() has an empty set of parameters, and that what parameters are passed and the order of their passing will be known, stable, and documented. However, it appears there are exceptions. This is why I continue to stress the importance of starting your UI Extension development by running “empty” in the debugger (implement both configure functions, dump data, return .zero size) and spending time dumping parameters and examining exactly how your Intent behaves. The sample code has some functionality and suggestions to help with debugging and exploration.
Siri is gaining a more prominent role in controlling Apple technology like AppleTV, CarPlay, HomePod, iPhones, and Apple Watches. Part 1, Part 2, and Part 3 of this series showed how to expose your iOS app’s functionality through Siri. Here we showed you how you can make your iOS app’s Siri experience stand out with custom UI. You now have the knowledge, so go forth and make awesome Siri experiences for your apps and your users.
And if you’re having trouble implementing SiriKit or other features into your iOS or watchOS app, Big Nerd Ranch is happy to help. Get in touch to see how our team can build a Siri-enabled app for you, or implement new features into an existing app.
SiriKit Part 3: Finishing Touches
Siri is Apple’s intelligent personal assistant. Siri allows you to use your voice to interact with your iOS, watchOS, tvOS and macOS devices. As with many Apple technologies, Apple has made it easier for developers to integrate their apps with Siri through SiriKit. This series explores SiriKit and how you can use it to expose your app’s functionality through Siri. In Part 1 we looked at the basics of SiriKit, and in Part 2 we explored Resolve, Confirm and Handle. Now we’ll take a look at those final details that will help you ship your Siri-enabled iOS app.
As it stands, BNRun contains most of what’s needed to work with Siri. But to allow Siri to work best with BNRun, there are a few final touches that should be implemented.
A user can ask Siri “What can you do?” to discover what they can do with Siri, including what third-party apps can do with Siri. When asked this, Siri will show the app, and selecting the app will show a list of sample phrases. These sample phrases come from the AppIntentVocabulary.plist that is added to the application’s Base.lproj (not the extension’s bundle).

Specifically, the IntentPhrases key must have IntentExamples entries for every IntentsSupported key in the Intents extension’s Info.plist. Additionally, if you have a base AppIntentVocabulary.plist, every localization of your app/extension must have a localized AppIntentVocabulary.plist. When apps are submitted to the App Store, the AppIntentVocabulary.plist (both Base and localized versions) is sent to Siri for processing; those files remain on the server and will be specific to that version of the app. These phrases help the user understand how Siri and your app interact, but they also help Siri itself understand how your app can be invoked.
The other part of the AppIntentVocabulary.plist is the ParameterVocabularies. This is an optional array of dictionaries to help Siri understand app-specific terms: terms specific to your app, used by any user of your app. This is useful if an app uses vocabulary in a nonconventional way or a way completely unique to the app. Note these terms are global. If you have user-specific terms – terms unique to this particular app user, like their custom photo tags – you’ll want to look at INVocabulary, sketched below.
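For example, a minimal sketch of registering user-specific workout names with INVocabulary (the names themselves are invented):

import Intents

// Register this user's custom workout names so Siri can recognize them.
// Order the set by expected usage, most likely first.
let workoutNames = NSOrderedSet(array: ["morning 5k", "nerd run"])
INVocabulary.shared().setVocabularyStrings(workoutNames, of: .workoutActivityName)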
If you tried running any of the samples, you may have noticed Siri has a hard time with the app name of “BNRun”. To improve Siri’s recognition of the app name, be sure to provide a CFBundleDisplayName in your application’s Info.plist. Yes, this value is also displayed under the app’s icon in the iOS home screen, so it’s important to understand how this value interacts with Siri, even if your app doesn’t use Siri. If your app is localized, be sure to include an InfoPlist.strings file for the app’s localization that includes a localized CFBundleDisplayName.
Changing “BNRun” to have a CFBundleDisplayName of “Big Nerd Run” makes it much more natural to interact with Siri. But the reuse of the CFBundleDisplayName may not be workable for your app. New in iOS 11 is support for alternative app names. In the application’s Info.plist, the INAlternativeAppNames key is an array of dictionaries that describe other names the app can go by, within the context of Siri. Each dictionary contains INAlternativeAppName, which is a string containing the alternative app name; and optionally INAlternativeAppNamePronunciationHint, which is a “sounds like” pronunciation hint for the alternative name.
Siri can be accessed in various ways, including from the lock screen. Depending on what your app does, give consideration to the ability to access your supported Intents from the lock screen. Some intents, like INSendPaymentIntent, cannot be directly accessed from the lock screen. If your Intent is one that can be accessible from the lock screen but you don’t want it to be accessible (e.g. you have a “secure” messaging app, so sending messages can only happen from an unlocked device):

- Open the Intents extension’s Info.plist.
- Within the NSExtensionAttributes of your NSExtension entry, add IntentsRestrictedWhileLocked (an array of string values).
- For the IntentsSupported entries that you wish to restrict from the lock screen, add the Intent’s name as an entry for IntentsRestrictedWhileLocked.
, if the user invokes the Intent from the lock screen, the OS will prompt the user to unlock the device before the Intent can proceed to be handled.
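For the hypothetical secure-messaging example, the relevant portion of the extension’s Info.plist might look like this:

&lt;key&gt;NSExtension&lt;/key&gt;
&lt;dict&gt;
    &lt;key&gt;NSExtensionAttributes&lt;/key&gt;
    &lt;dict&gt;
        &lt;key&gt;IntentsSupported&lt;/key&gt;
        &lt;array&gt;
            &lt;string&gt;INSendMessageIntent&lt;/string&gt;
        &lt;/array&gt;
        &lt;key&gt;IntentsRestrictedWhileLocked&lt;/key&gt;
        &lt;array&gt;
            &lt;string&gt;INSendMessageIntent&lt;/string&gt;
        &lt;/array&gt;
    &lt;/dict&gt;
&lt;/dict&gt;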
IntentsRestrictedWhileLocked is good for restricting access from the lock screen, but what if you need a little more authorization for your operation? Use Touch ID or Face ID. There’s nothing special here – it’s just standard LAContext handling. And of course, using a passcode is an acceptable fallback if the device does not support biometrics.
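A minimal sketch of such a check, as it might appear in a confirm or handle step (the reason string is an example):

import LocalAuthentication

let context = LAContext()
// .deviceOwnerAuthentication allows the passcode as a fallback when biometrics are unavailable.
context.evaluatePolicy(.deviceOwnerAuthentication,
                       localizedReason: "Confirm it's you before sending this message.") { success, error in
    // Complete the intent with a success or failure response based on the outcome.
}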
If your app needs to support payments, Apple recommends using Apple Pay (perhaps even the SiriKit Payments Domain) as it is a secure API for processing payments.
One final step before you ship your Siri-enabled app? Test it. All of those IntentPhrases? Make sure they actually work. Custom vocabulary? App name synonyms? If they’re worth adding to your app, it’s worth verifying they work correctly. It’s also worthwhile to take the time to run through all possible permutations of your supported Intents and their parameters; there are many ways a user could invoke Siri, so make sure you resolve, confirm and handle the user’s intent with aplomb. Because different users could invoke Siri differently, it’s also worthwhile to beta-test your app to cover as many of the invocation angles as possible. And yes, explore XCUISiriService for UI-testing your Siri interface (the test class below is illustrative):
import XCTest
class BNRunSiriUITests: XCTestCase {
    func testStart500MileWalkWithSiri() {
        XCUIDevice.shared.siriService.activate(voiceRecognitionText: "walk 500 miles in Big Nerd Run")
    }
}
If BNRun were a real app, I would take the time to support the full range of the Workout Domain. I hope that, with what I’ve provided, you can see it wouldn’t be too much effort to complete the support. You can find the latest revision of the sample code, including the vocabularies, app names and a simple UI test, in the sample code repository. Use that code as a starting point and finish out the Workout Domain support, such as pausing and resuming workouts. It will be a good way to begin exploring SiriKit.
I recommend watching the WWDC 2016 Session 217 “Introducing SiriKit” video. All WWDC videos regarding Siri and SiriKit are worth watching, but this one especially, since it covers best practices and provides handy tips for ensuring a great Siri user experience in your app.
Voice interfaces are only going to grow in power, functionality, and popularity. It’s worthwhile to explore what SiriKit can do today and see how you can make your app Siri-enabled. Even if you cannot take advantage of Siri today, consider Apple’s view of the future of iOS and watchOS app development (embedded frameworks and app extensions) and begin work now to ensure your app can take advantage of the coming technologies.
And if you’re having trouble implementing SiriKit or other features into your iOS or watchOS app, Big Nerd Ranch is happy to help. Get in touch to see how our team can build a Siri-enabled app for you, or implement new features into an existing app.
The post SiriKit Part 3: Finishing Touches appeared first on Big Nerd Ranch.
]]>Siri is Apple’s intelligent personal assistant. Siri allows you to use your voice to interact with your iOS, watchOS, tvOS and macOS devices. As with many Apple technologies, Apple has made it easier for developers to integrate their apps with Siri through SiriKit. This series explores SiriKit and how you can use it to expose your app’s functionality through Siri. In Part 1, we looked at the basics of SiriKit. Here in Part 2, we’ll look at the heart of SiriKit: Resolve, Confirm, and Handle.
Folks at Big Nerd Ranch like to work out, especially by lifting weights and running. Having an app to keep track of our workouts would be useful, so enter BNRun and its simple sample code.
In Part 1, I mentioned that Siri is limited in what it can do. When deciding to add Siri support to your app, you have to reconcile your app’s functionality against what SiriKit offers in its Domains and Intents. BNRun is a workout app, and SiriKit offers a Workouts Domain, so that’s a good start. Looking at the Intents within the Workouts Domain, there is nothing that lends itself to sets/reps/weight, but there are Intents suited to cardio workouts, like a run or a swim. So Siri won’t be able to support everything I want to do, but I will use Siri for what it can do. To keep things simple, I’ll focus on starting and stopping workouts.
However, before diving into the Intents framework, I have to step back and look at my code against the Intents. Every Intent has different requirements: some are simple and self-contained, others require support from the app, and some must have the app do the heavy lifting. It’s essential to read Apple’s documentation on the Intent to know how it can and must be implemented, because this affects how you approach not just your Intent, but your application.
In BNRun and its chosen Intents, the app itself must take care of the heavy lifting. However, the Intents must have some knowledge of, and ability to work with, the app’s data model. As a result, the app’s data model must be refactored into an embedded framework so it can be shared between the app and the extension. You can see this refactoring in phase 2 of the sample code. It’s beyond the scope of this article to talk about embedded frameworks. Just know that an Intents Extension is an app extension and thus is subject to the features, limitations and requirements of app extensions; this can include using embedded frameworks, app groups, etc. to enable sharing of code and data between your app and your extension.
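For the data side, an app group container is one common approach. A minimal sketch, with a hypothetical group identifier (which must be enabled in both targets’ entitlements):

import Foundation

// Hypothetical app group identifier; it must appear in both the app's and
// the extension's App Groups entitlement.
let groupID = "group.com.bignerdranch.bnrun"

// Shared preferences visible to both the app and the extension.
let sharedDefaults = UserDefaults(suiteName: groupID)

// Shared container directory for files (e.g. a workout log).
let containerURL = FileManager.default
    .containerURL(forSecurityApplicationGroupIdentifier: groupID)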
There are three steps involved in an Intent handler: Resolve, Confirm, and Handle.
When starting a workout, a user could say lots of things.
Siri takes the user’s natural language input, converts it to text, and does the work to determine what the user wants to do. When Siri determines the user wants to do something involving your app, your Intents Extension is loaded. The OS examines the extension’s Info.plist looking for the NSExtensionPrincipalClass as the entry point into the extension. This class must be a subclass of INExtension, and must implement the INIntentHandlerProviding function handler(for intent: INIntent) -> Any?, returning the instance of the handler that will process the user’s command. In a simple implementation where the principal class implements the full handler, it might look something like this:
import Intents

class IntentHandler: INExtension /* list of `Handling` protocols conformed to */ {
    override func handler(for intent: INIntent) -> Any? {
        return self
    }

    // implement resolution functions
}
While I could implement the whole of my extension within the principal class, factoring my handlers into their own classes and files better positions me to expand the functionality of my extension (as you’ll see below). Thus, for the Start Workout Intent, I’ll implement the principal class like this:
import Intents

class IntentHandler: INExtension {
    override func handler(for intent: INIntent) -> Any? {
        if intent is INStartWorkoutIntent {
            return StartWorkoutIntentHandler()
        }
        return nil
    }
}
StartWorkoutIntentHandler is an NSObject-based class that implements the INStartWorkoutIntentHandling protocol, allowing it to handle the Start Workout Intent. If you look at the declaration of INStartWorkoutIntentHandling, you’ll see one only needs to handle the Intent (required by the protocol): one doesn’t need to resolve or confirm (those are optional protocol requirements). However, as there are lots of ways a user could start a workout but my app only supports a few of them, I’m going to have to resolve and confirm the parameters.
BNRun supports three types of workouts: walking, running and swimming. The Start Workout Intent doesn’t support a notion of a workout type, but it does support a notion of a workout name. I can use the workout name as the means of limiting the user to walking, running and swimming. This is done by implementing the resolveWorkoutName(for:with:) function:
func resolveWorkoutName(for intent: INStartWorkoutIntent,
                        with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    let result: INSpeakableStringResolutionResult
    if let workoutName = intent.workoutName {
        if let workoutType = Workout.WorkoutType(intentWorkoutName: workoutName) {
            result = INSpeakableStringResolutionResult.success(with: workoutType.speakableString)
        }
        else {
            let possibleNames = [
                Workout.WorkoutType.walk.speakableString,
                Workout.WorkoutType.run.speakableString,
                Workout.WorkoutType.swim.speakableString
            ]
            result = INSpeakableStringResolutionResult.disambiguation(with: possibleNames)
        }
    }
    else {
        result = INSpeakableStringResolutionResult.needsValue()
    }
    completion(result)
}
The purpose of the resolve functions is to resolve parameters. Is the parameter required? Optional? Unclear and in need of further input from the user? The implementation of the resolve functions should examine the data provided by the given Intent, including the possibility that the parameter wasn’t provided. Depending upon the Intent data, create an INIntentResolutionResult to let Siri know how the parameter was resolved. More precisely, you create an instance of the specific INIntentResolutionResult subclass appropriate for the resolution, in this case an INSpeakableStringResolutionResult (the type of result is given in the resolve function’s signature).
All result types can respond that a value is needed, that the parameter is optional, or that the parameter is unsupported. Specific result types add more contextually appropriate results. For example, with INSpeakableStringResolutionResult, a result could be success with the resolved name; or, if a name was provided but wasn’t one the app understood, a disambiguation list can be presented to the user. Every result type is different, so check the documentation to know what you can return and what it means to return that type. Don’t be afraid to experiment with the different results to see how Siri voices each one to the user.
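For instance, rather than forcing a disambiguation list, BNRun could guess at a value and ask the user to confirm it. A sketch (the guess of “run” is arbitrary):

func resolveWorkoutName(for intent: INStartWorkoutIntent,
                        with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    // Suggest a best guess and let the user confirm or reject it.
    completion(.confirmationRequired(with: Workout.WorkoutType.run.speakableString))
}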
Important Note! Before exiting any of the three types of Intent-handler functions, you must invoke the completion closure, passing your result. Siri cannot proceed until the completion is invoked. Ensure all code paths end with the completion (consider taking advantage of Swift’s defer).
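For example, the resolve function above could be restructured so that no code path can skip the completion; a simplified sketch (dropping the disambiguation branch for brevity):

func resolveWorkoutName(for intent: INStartWorkoutIntent,
                        with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    var result = INSpeakableStringResolutionResult.needsValue()
    defer { completion(result) }  // guaranteed to run on every exit path

    guard let workoutName = intent.workoutName,
          let workoutType = Workout.WorkoutType(intentWorkoutName: workoutName) else { return }
    result = INSpeakableStringResolutionResult.success(with: workoutType.speakableString)
}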
Once parameters have been resolved, it’s time to confirm the user’s intent can go forward. If BNRun connected to a server, this might be the time to ensure such a connection could occur. In this simple sample, it’s only important to ensure that a Workout can be constructed from the INStartWorkoutIntent.
func confirm(intent: INStartWorkoutIntent,
             completion: @escaping (INStartWorkoutIntentResponse) -> Void) {
    let response: INStartWorkoutIntentResponse
    if let workout = Workout(startWorkoutIntent: intent) {
        if #available(iOS 11, *) {
            response = INStartWorkoutIntentResponse(code: .ready, userActivity: nil)
        }
        else {
            let userActivity = NSUserActivity(bnrActivity: .startWorkout(workout))
            response = INStartWorkoutIntentResponse(code: .ready, userActivity: userActivity)
        }
    }
    else {
        response = INStartWorkoutIntentResponse(code: .failure, userActivity: nil)
    }
    completion(response)
}
Notice the use of #available? iOS 11 changed how the Workouts Domain interacts with the app, providing a better means of launching the app in the background. Check out the WWDC 2017 Session 214 “What’s New In SiriKit” for more information.
Handling the user’s intent is typically the only required aspect of an Intent handler.
func handle(intent: INStartWorkoutIntent,
            completion: @escaping (INStartWorkoutIntentResponse) -> Void) {
    let response: INStartWorkoutIntentResponse
    if #available(iOS 11, *) {
        response = INStartWorkoutIntentResponse(code: .handleInApp, userActivity: nil)
    }
    else {
        if let workout = Workout(startWorkoutIntent: intent) {
            let userActivity = NSUserActivity(bnrActivity: .startWorkout(workout))
            response = INStartWorkoutIntentResponse(code: .continueInApp, userActivity: userActivity)
        }
        else {
            response = INStartWorkoutIntentResponse(code: .failure, userActivity: nil)
        }
    }
    completion(response)
}
While some Intents can handle things within the extension, a workout must be started within the app itself. The iOS 10 way required creating an NSUserActivity and implementing the UIApplicationDelegate function application(_:continue:restorationHandler:), just like supporting Handoff. While this works, iOS 11 introduces application(_:handle:completionHandler:) on UIApplicationDelegate, which more cleanly handles the Intent. Again, see the WWDC 2017 Session 214 “What’s New In SiriKit” for more information.
class AppDelegate: UIResponder, UIApplicationDelegate {
    @available(iOS 11.0, *)
    func application(_ application: UIApplication, handle intent: INIntent,
                     completionHandler: @escaping (INIntentResponse) -> Void) {
        let response: INIntentResponse
        if let startIntent = intent as? INStartWorkoutIntent,
           let workout = Workout(startWorkoutIntent: startIntent) {
            var log = WorkoutLog.load()
            log.start(workout: workout)
            response = INStartWorkoutIntentResponse(code: .success, userActivity: nil)
        }
        else {
            response = INStartWorkoutIntentResponse(code: .failure, userActivity: nil)
        }
        completionHandler(response)
    }
}
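On iOS 10, the fallback is the Handoff-style path. A sketch, where Workout(userActivity:) is a hypothetical initializer that decodes the activity created by the sample’s NSUserActivity(bnrActivity:) helper:

extension AppDelegate {
    func application(_ application: UIApplication, continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([Any]?) -> Void) -> Bool {
        // Hypothetical decoding of the workout carried by the user activity.
        guard let workout = Workout(userActivity: userActivity) else { return false }
        var log = WorkoutLog.load()
        log.start(workout: workout)
        return true
    }
}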
With the extension now implementing the three steps of resolve, confirm, and handle, the Intent handler is complete. Now the OS needs to know the Intent exists. Edit the extension’s Info.plist and add INStartWorkoutIntent to the IntentsSupported array in the NSExtensionAttributes of the NSExtension dictionary.
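After that edit, the relevant portion of the extension’s Info.plist might look like this (the principal-class value shown is what Xcode’s template typically provides):

&lt;key&gt;NSExtension&lt;/key&gt;
&lt;dict&gt;
    &lt;key&gt;NSExtensionAttributes&lt;/key&gt;
    &lt;dict&gt;
        &lt;key&gt;IntentsSupported&lt;/key&gt;
        &lt;array&gt;
            &lt;string&gt;INStartWorkoutIntent&lt;/string&gt;
        &lt;/array&gt;
    &lt;/dict&gt;
    &lt;key&gt;NSExtensionPointIdentifier&lt;/key&gt;
    &lt;string&gt;com.apple.intents-service&lt;/string&gt;
    &lt;key&gt;NSExtensionPrincipalClass&lt;/key&gt;
    &lt;string&gt;$(PRODUCT_MODULE_NAME).IntentHandler&lt;/string&gt;
&lt;/dict&gt;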
To see how this all comes together, take a look at phase 3 of the sample code.
Since the app supports starting a workout, it should also support stopping a workout. Phase 4 of the sample code adds a StopWorkoutIntentHandler. The IntentHandler adds a case for it. StopWorkoutIntentHandler is implemented, providing confirm and handle steps (there are no parameters to resolve in BNRun). And the Info.plist appropriately lists the intent.
You should be able to build and run the Phase 4 code, starting and stopping workouts within the app, within Siri, or a combination of the two. Give it a try!
Implementing the resolve, confirm, and handle functions takes care of the heavy lifting required for an app to work with Siri. But before shipping your awesome Siri-enabled app to the world, there are a few more things that need to be done. Those things will be covered in more detail in Part 3.
And if you’re having trouble implementing SiriKit or other features into your iOS or watchOS app, Big Nerd Ranch is happy to help. Get in touch to see how our team can build a Siri-enabled app for you, or implement new features into an existing app.
The post SiriKit Part 2: Resolve, Confirm, Handle appeared first on Big Nerd Ranch.
]]>Siri is Apple’s intelligent personal assistant, and it allows you to use your voice to interact with your iOS, watchOS, tvOS and macOS devices. Already used by millions, this technology will only continue to grow; in fact, MarketsAndMarkets predicts that the speech and voice recognition market will be valued at $18.3 billion by 2023. And in an effort to continue to thrive in this market, Apple has made it easier for developers to integrate their apps with Siri using SiriKit.
This series will explore SiriKit, why it is important and how you can use it to expose your app’s functionality through Siri. We’ll walk through the basics of SiriKit and how to add support to your iOS app, take a deep dive into Intents, and then address the final touches that make for a good Siri user experience.
Siri strives to feel natural to users by providing a conversational interface; instead of rigid commands, you just talk to Siri. While this makes using Siri easy for the user, it can be quite complicated for developers. Fortunately, Apple alleviates this complexity by handling the conversational aspects—you just provide the functionality. To provide the functionality, developers use SiriKit.
SiriKit is Apple’s toolkit for exposing your app’s functionality through Siri. It was introduced in iOS 10, and despite Siri’s ubiquitous presence across all Apple platforms, as of this writing SiriKit is only available on iOS and watchOS. When someone speaks to Siri, Siri turns the speech into text, turns the text into what the user wants to do (discovers their intent), which leads to an action and results in a response. Working with the user’s intent is the heart of SiriKit.
Siri is limited in what it can do, functioning only within known Domains. A Domain is a category of things Siri knows about, like making VoIP calls, messaging, making payments, working with lists and notes and helping with your workouts. Within each Domain is a series of Intents, which are actions Siri can perform. For example, within the payments Domain there are Intents to send payments, transfer money and pay bills. In the ride booking Domain, there are Intents to request a ride and get the ride status. When considering SiriKit adoption, look at the Domains and their Intents to see which ones make sense for your app.
An Intent often has parameters: when sending a message, to whom it is addressed; when making a reservation, what restaurant and for how many people. Consequently, implementing an Intent has you performing three steps: Resolve, Confirm, and Handle.
What you do in each of these three phases, and how, depends upon the specific Intent; be sure to read the headers and Apple’s documentation. Part 2 of this series will examine the three phases in depth. But first things first: let’s add an Intents Extension to our app.
Because interaction with Siri occurs outside of your app, your app’s SiriKit functionality is implemented as an app extension—specifically an Intents App Extension. Let’s create one!
First, enable the Siri capability for your app target in the Capabilities tab of the target editor. That will add the Siri service to your App ID, and the Siri entitlement to your project.
Next, create an Intents Extension target in your project.
In order to let the system know what Intents the app can handle, edit the extension’s Info.plist:

1. Find the NSExtension item. If the Info.plist doesn’t contain one, add one of type dictionary.
2. Within it, find the NSExtensionAttributes item. If there isn’t one, add one of type dictionary.
3. Edit the IntentsSupported extension attribute, adding one of type array, if needed. Each entry should be a string of the class name of the Intent you support—one entry for every supported Intent. For example, if you support INSendMessageIntent, there should be an entry of “INSendMessageIntent”.
4. If you want to restrict any Intents from being invoked on the lock screen, list them in the IntentsRestrictedWhileLocked extension attribute (more on this in Part 3).
5. The NSExtensionPointIdentifier extension attribute should have a value of “com.apple.intents-service”.
6. The NSExtensionPrincipalClass extension attribute should have the correct value via the project stationery. More information about this will be discussed later.

That’s it! Your app now has the essential elements to work with Siri. Let’s try it out! And yes, you can work with Siri within the iOS Simulator.
When the Siri waveform interface appears, you can begin your conversation with Siri. Say: “Send a message using My Great App”, and watch your Intents Extension be invoked. If you set breakpoints in the IntentHandler.swift file, you can watch the extension go through the resolve, confirm and handle phases. Notice that depending on what you say to Siri, some parameters may be resolved multiple times.
Of course, as developers, we tend to do the same thing again and again while we develop. If every time you run your Intents Extension you have to select the Siri app and speak your phrase, it can become tiresome. Thankfully, Xcode provides a nice runtime convenience: if you enter a phrase for the “Siri Intent Query,” Xcode will automatically run Siri and use your text as the invoking phrase. If you leave this field blank, Xcode will prompt you upon running.
Congratulations! You’ve successfully added an Intents Extension to your application, and can begin extending your app’s capabilities with Siri.
The full source code for this (and the entire SiriKit series) can be found here.
Of course, this doesn’t do anything useful or engaging with your app. In Part 2, we’ll do something useful and explore three key notions in working with Siri: resolve, confirm and handle.
And if you’re having trouble implementing SiriKit or other features into your iOS or watchOS app, Big Nerd Ranch is happy to help. Get in touch to see how our team can build a Siri-enabled app for you, or implement this and other new features into an existing app.
The post SiriKit Part 1: Hey Siri, How Do I Get Started? appeared first on Big Nerd Ranch.