Swift Regex Deep Dive

Big Nerd Ranch iOS blog, Thu, 20 Oct 2022

Our introductory guide to Swift Regex. Learn regular expressions in Swift including RegexBuilder examples and strongly-typed captures.

Processing strings to extract, manipulate, or search data is a core skill that most software engineers need to acquire. Regular expressions, or regex, are the venerable titans of string processing. Regular expressions are an integral part of many command line tools found in Unix, search engines, word processors, and text editors. The downside of regular expressions is that they can be difficult to create, read, and debug. Most programming languages support some form of regex, and now Swift does, too, with Swift Regex.

An exciting new Regex Builder in Swift Regex gives us a programmatic way of creating regular expressions. This innovative approach to creating often complex regular expressions is sure to be an instant winner with the regex neophyte and aficionado alike. We’ll be digging into Regex Builder to discover its wide-reaching capabilities.

Swift Regex brings first-class support for regular expressions to the Swift language, and it aims to mitigate or outright eliminate many of the downsides of regex. The Swift compiler natively supports regex syntax, which gives us compile time errors, syntax highlighting, and strongly typed captures. Regex syntax in Swift is compatible with Perl, Python, Ruby, Java, NSRegularExpression, and many others.

It should be noted that as of the writing of this article, Swift Regex is still in the open beta period. We’ll be using the Swift Regex found in Xcode 14 beta 6.

Creating a Swift Regular Expression

Swift Regex supports creating a regular expression in several different ways, each of which is useful for different scenarios. First, let’s take a look at creating a compile-time regular expression.

Compile Time Regex


let regex = /\d/


This regular expression will match a single digit. As is typical in regular expression syntax, the expression appears between two forward slashes: /<expression>/. As you can see, this regular expression is a first-class type in Swift and can be assigned directly to a variable. Because it is a Swift type, Xcode will also recognize this regex and provide both compile time checks and syntax highlighting.

Swift has added robust support for regex to a number of common APIs, and using this regular expression couldn’t be easier.


let user = "{name: Shane, id: 123, employee_id: 456}"
let regex = /name: \w+/

if let match = user.firstMatch(of: regex) {
    print(match.output)
}


Which gives us the output:

name: Shane

You may be tempted to use the regular expression [a-zA-Z]+ in order to match a word here. However, using \w+ allows the system to take into account the current locale.
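As a quick illustration (the name string here is made up for the example), \w matches word characters beyond the ASCII range:

```swift
let user = "{name: Renée, id: 321}"

// \w+ matches "Renée" in full; [a-zA-Z]+ would stop before the "é".
if let match = user.firstMatch(of: /name: \w+/) {
    print(match.output) // name: Renée
}
```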

Runtime Regex

Swift Regex also supports creating regular expressions at runtime. Runtime creation of a regular expression is useful for editors, command line tools, and search, to name a few. The expression syntax is the same as for a compile time expression; however, the regex is created in a slightly different manner.


let regex = try Regex(".*\(searchTerm).*")


This regular expression is looking for a specific search term supplied at runtime. Here the regular expression is created by constructing the Regex type with a String representing the regular expression. The try keyword is used since a Regex can throw an error if the supplied regular expression is invalid.
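For example, an unterminated group is an invalid pattern, so the initializer throws (a small sketch of why try is required):

```swift
do {
    // "(" is an unterminated capture group, so this initializer throws.
    _ = try Regex("(")
} catch {
    print("Invalid regular expression: \(error)")
}
```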

We can again apply this regex using the firstMatch(of:) function as in our first example. Note that this time our regex captures the matching line by using a capture group, written with parentheses: ( and ).


let users = """
[
{name: Shane, id: 123, employee_id: 456},
{name: Sally, id: 789, employee_id: 101},
{name: Sam, id: 453, employee_id: 999}
]
"""

let idToSearch = 789
let regex = try Regex("(.*id: \(idToSearch).*)")

if let match = users.firstMatch(of: regex) {
    print(match.output[1].substring ?? "not found")
}


Running the example gives us the following output:

{name: Sally, id: 789, employee_id: 101},

We can gain access to any data captured by the regex via output on the returned Regex.Match structure. For a runtime-constructed regex, output is an existential (AnyRegexOutput) whose first item, at index 0, is the portion of the input matched by the whole regex. Each capture defined in the regex is found at the subsequent indexes.
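A minimal sketch of that indexing, with a made-up input string: index 0 is the overall match, and each capture follows.

```swift
let line = "id: 123"
let regex = try Regex("id: (\\d+)")

if let match = line.firstMatch(of: regex) {
    print(match.output[0].substring ?? "") // the whole match: "id: 123"
    print(match.output[1].substring ?? "") // the first capture: "123"
}
```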

Regex Builder

The innovative new Regex Builder introduces a declarative approach to composing regular expressions. This new way of creating regular expressions will open the regex door to anyone who finds them difficult to understand, maintain, or create. Regex Builder is Swift’s solution to the drawbacks of the regular expression syntax. Regex Builder is a DSL for creating regular expressions with type safety while still allowing for ease of use and expressivity. Simply import the new RegexBuilder module, and you’ll have everything you need to create and compose powerful regular expressions.


import RegexBuilder

let regex = Regex {
    One(.digit)
}


This regular expression will match a single digit and is functionally equivalent to our first compile time regex example, /\d/. Here the standard regex syntax is discarded in favor of a declarative approach. All regex operations, including captures, can be represented with RegexBuilder. In addition, when it makes sense, regex literals can be utilized right within the regex builder. This makes for a very expressive and powerful approach to creating regular expressions.
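We can check the equivalence by matching with both forms (a small sketch):

```swift
import RegexBuilder

let literal = /\d/
let built = Regex {
    One(.digit)
}

// Both find the same first digit.
print("abc123".firstMatch(of: literal)?.output ?? "none") // 1
print("abc123".firstMatch(of: built)?.output ?? "none")   // 1
```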

RegexBuilder Example

Let’s take a deeper look into RegexBuilder. In this example, we will use a regex builder to parse and extract information from a Unix top command.


top -l 1 -o mem -n 8 -stats pid,command,pstate,mem | sed 1,12d


For simplicity, we’ll take the output of running this command and assign it to a Swift variable.


// PID    COMMAND       STATE    MEMORY
let top = """
45360  lldb-rpc-server  sleeping 1719M
2098   Google Chrome    sleeping 1679M-
179    WindowServer     sleeping 1406M
106    BDLDaemon        running  1194M
45346  Xcode            running  878M
0      kernel_task      running  741M
2318   Dropbox          sleeping 4760K+
2028   BBEdit           sleeping 94M
"""


As you can see, the top command outputs structured data that is well suited for use with regular expressions. In our example, we will be extracting the name, status, and size of each item. When considering a Regex Builder it is useful to break a larger regex down into smaller component parts which are then concatenated by the builder. First, I’ll present the code, and then we’ll discuss how it works.


// 1
let separator = /\s{1,}/

// 2
let topMatcher = Regex {
    // 3
    OneOrMore(.digit)

    // 4
    separator

    // 5
    Capture(
        OneOrMore(.any, .reluctant)
    )
    separator

    // 6
    Capture(
        ChoiceOf {
            "running"
            "sleeping"
            "stuck"
            "idle"
            "stopped"
            "halted"
            "zombie"
            "unknown"
        }
    )
    separator

    // 7
    Capture {
        OneOrMore(.digit)

        // /M|K|B/
        ChoiceOf {
            "M"
            "K"
            "B"
        }

        Optionally(/\+|-/)
    }
}

// 8
let matches = top.matches(of: topMatcher)
for match in matches {
    // 9
    let (_, name, status, size) = match.output
    print("\(name) \t\t \(status) \t\t \(size)")
}


Running the example gives us the following output:

lldb-rpc-server  		 sleeping 		 1719M
Google Chrome    		 sleeping 		 1679M-
WindowServer     		 sleeping 		 1406M
BDLDaemon        		 running 		 1194M
Xcode            		 running 		 878M
kernel_task      		 running 		 741M
Dropbox          		 sleeping 		 4760K+
BBEdit           		 sleeping 		 94M

Here is a breakdown of what is happening with the code:

  1. From looking at the data, we can see that each column is separated by one or more spaces. Here we define a compile time regex and assign it to the separator variable. We can then use separator within the regex builder in order to match column separators.
  2. Define the regex builder as a trailing closure to Regex and assign it to topMatcher.
  3. A quantifier that matches one or more occurrences of the specified CharacterClass. CharacterClass is a struct that conforms to RegexComponent and is similar in function to a CharacterSet. The .digit CharacterClass matches a numeric digit.
  4. Matches the column separator.
  5. Captures one or more of any character. Regex captures are returned in the Output of the regex and are indexed based on their position within the regex.
  6. A capture of one item from the enclosed list of items. ChoiceOf is equivalent to a regex alternation (the | regex operator) and cannot have an empty block. You can think of this as matching a single value of an enum. Use it when there is a known list of values to be matched by the regular expression.
  7. Captures one or more digits followed by one item from the known list of “M”, “K”, or “B” optionally followed by a “+” or “-“. Notice that the Optionally component can take a regex literal as its parameter.
  8. Here we pass our regex as a parameter into the matches(of:) function. We assign the returned value to a variable that will allow us to access the regex output and our captured data.
  9. The output property of the regex returned data contains the entire input data followed by any captured data. Here we are unpacking the output tuple by ignoring the first item (the input) and assigning each subsequent item to a variable for easy access.

As you can see from this example, the Swift regex builder is a powerful and expressive way to create regular expressions in Swift. This is just a sampling of its capability. So, next, let’s take a deeper look into the Swift regex builder and its strongly typed captures.

Strongly typed captures in Swift RegexBuilder

One of the more unique and compelling features of the Swift regex builder are strongly typed captures. Rather than simply returning a string match, Swift Regex can return a strong type representing the captured data.

In some cases, especially for performance reasons, we may want to exit early if a regex capture doesn’t meet some additional criteria. TryCapture allows us to do this. The TryCapture Regex Builder component will pass a captured value to a transform closure where we can perform additional validation or value transformation. When the transform closure returns a value, whether the original or a modified version, it is assumed valid, and the value is captured. However, when the transform closure returns nil, matching is signaled to have failed and will cause the regex engine to backtrack and try an alternative path. TryCapture's transform closure actively participates in the matching process. This is a powerful feature and allows for extremely flexible matching.

Let’s take a look at an example.

In this example, we will use a regex builder to parse and extract information from a Unix syslog command.

syslog -F '$((Time)(ISO8601)) | $((Level)(str)) | $(Sender)[$(PID)] | $Message'

We’ll take the output of running this command and assign it to a Swift variable.


// TIME                 LEVEL     PROCESS(PID)              MESSSAGE
let syslog = """
2022-06-09T14:11:52-05 | Notice | Installer Progress[1211] | Ordering windows out
2022-06-09T14:12:18-05 | Notice | Installer Progress[1211] | Unable to quit because there are connected processes
2022-06-09T14:12:30-05 | Critical | Installer Progress[1211] | Process 648 unexpectedly went away
2022-06-09T14:15:31-05 | Alert | syslogd[126] | ASL Sender Statistics
2022-06-09T14:16:43-05 | Error | MobileDeviceUpdater[3978] | tid:231b - Mux ID not found in mapping dictionary
"""


Next, we use Swift Regex to extract this data, including the timestamp, a strongly typed severity level, and filtering of processes with an id of less than 1000.

let separator = " | "

let regex = Regex {
    // 1
    Capture(.iso8601(assuming: .current, dateSeparator: .dash))
    // 2
    "-"
    OneOrMore(.digit)
    separator

    // 3
    TryCapture {
        ChoiceOf {
            "Debug"
            "Informational"
            "Notice"
            "Warning"
            "Error"
            "Critical"
            "Alert"
            "Emergency"
        }
    } transform: {
        // 4
        SeverityLevel(rawValue: String($0))
    }
    separator

    // 5
    OneOrMore(.any, .reluctant)
    "["
    Capture {
        OneOrMore(.digit)
    } transform: { substring -> Int? in
        // 6
        let pid = Int(String(substring))
        if let pid, pid >= 1000 {
            return pid
        }

        return nil
    }
    "]"
    separator

    OneOrMore(.any)
}

// 7
let matches = syslog.matches(of: regex)

print(type(of: matches[0].output))

for match in matches {
    let (_, date, status, pid) = match.output
    // 8
    if let pid {
        print("\(date) \(status) \(pid)")
    }
}

// 9
enum SeverityLevel: String {
    case debug = "Debug"
    case info = "Informational"
    case notice = "Notice"
    case warning = "Warning"
    case error = "Error"
    case critical = "Critical"
    case alert = "Alert"
    case emergency = "Emergency"
}

Running the example gives us the following output:

(Substring, Date, SeverityLevel, Optional<Int>)
2022-06-09 19:11:52 +0000 notice 1211
2022-06-09 19:12:18 +0000 notice 1211
2022-06-09 19:12:30 +0000 critical 1211
2022-06-09 19:16:43 +0000 error 3978

Here’s what is happening with the syslog example.

  1. Here, we are capturing an ISO 8601 formatted date. The iso8601 static function (new in iOS 16) is called on the Date.ISO8601FormatStyle type. This function constructs and returns a date formatter for use by the Swift Regex Capture in converting the captured string into a Date. This Date is then used in the Captures output with no further string-to-date conversion necessary.
  2. After the ISO 8601 formatted date, we have a UTC offset timezone component matched by the dash and one or more digits.
  3. Here TryCapture is being used to transform a capture's type. It will convert the matched value into a non-optional type or fail the match.
  4. The transform closure will be called upon matching the capture. It is passed the matched substring value, which it can then transform into the desired type. In this example, the transform is converting the matched substring into a SeverityLevel enum. The corresponding regex output for this capture becomes the closure's return type. In the case of a transform on TryCapture, this type will be non-optional. For a Capture transform, the type will be optional.
  5. Swift Regex defines several repetition components: OneOrMore, ZeroOrMore, Optionally, and Repeat. The .reluctant repetition behavior will match as few occurrences as possible. The default repetition behavior for all repetitions is .eager.
  6. A transforming capture will transform the matching substring of digits into an optional Int value. If this value is 1000 or greater, then it is returned from the transform and becomes the capture's output value. Otherwise, it returns nil for this capture's output.
  7. Assign the matches of the regex to the matches variable.
  8. If the pid capture is not nil then print out the data.
  9. Defines the SeverityLevel enum type, which is used by the transforming capture defined in #3.
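The type difference between the two transforming captures (points 3 and 6 above) can be sketched in isolation; the regex names here are illustrative:

```swift
import RegexBuilder

// TryCapture's transform returns an Optional, but nil fails the match,
// so this capture's output type is a non-optional Int.
let strict = Regex {
    TryCapture(OneOrMore(.digit)) { Int($0) }
}

// Capture's transform return type is used as-is, so Int($0) (an Int?)
// makes this capture's output type Optional<Int>.
let lenient = Regex {
    Capture(OneOrMore(.digit)) { Int($0) }
}

if let match = "pid 42".firstMatch(of: strict) {
    let value: Int = match.output.1
    print(value) // 42
}

if let match = "pid 42".firstMatch(of: lenient) {
    let value: Int? = match.output.1
    print(value ?? -1) // 42
}
```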

Conclusion

Swift Regex is a welcome and powerful addition to Swift. Regex Builder is a go-to solution for all but the simplest of regex needs, and mastering it will be time well spent. The declarative approach of Regex Builder coupled with compile time regex support giving us compile time errors, syntax highlighting, and strongly typed captures, makes for a potent combination. A lot of thought has gone into the design of Swift Regex, and it shows. Swift Regex will make a worthy addition to your development toolbox, and taking the time to learn it will pay dividends.

Resources

  1. Meet Swift Regex – WWDC 2022
  2. Swift Regex: Beyond the basics – WWDC 2022
  3. Swift Evolution Proposal 0351 – Regex builder DSL
  4. Swift Evolution Proposal 0354 – Regex Literals
  5. Swift Evolution Proposal 0355 – Regex Syntax and Run-time Construction
  6. Swift Evolution Proposal 0357 – Regex-powered string processing algorithms
  7. Swift Regex DSL Builder

Custom Operators in Swift Combine

Big Nerd Ranch iOS blog, Fri, 09 Sep 2022

The Combine framework in Swift is a powerful declarative API for the asynchronous processing of values over time. It takes full advantage of Swift features such as Generics to provide type-safe algorithms that can be composed into processing pipelines. These pipelines can be manipulated and transformed by components called operators. Combine ships with a large assortment of built-in operators that can be chained together to form impressive conduits through which values can be transformed, filtered, buffered, scheduled, and more.

Despite the usefulness of Combine’s built-in operators, there are times when they fall short. This is when constructing your own custom operators adds needed flexibility to perform often complex tasks in a concise and performant manner of your choosing.

Big Nerd Note: The world of Swift is constantly changing. If it’s worth learning, it’s worth learning right. Try out Swift Essentials or our introductory bootcamp to Swift UI.

Combine Lifecycle

In order to create our own operators, it is necessary to understand the basic lifecycle and structure of a Combine pipeline. In Combine, there are three main abstractions: Publishers, Subscribers, and Operators.

Publishers are value types, or Structs, that describe how values and errors are produced. They allow the registration of subscribers who will receive values over time. In addition to receiving values, a Subscriber can potentially receive a completion, as a success or error, from a Publisher. Subscribers can mutate state, and as such, they are typically implemented as a reference type or Class.

Subscribers are created and then attached to a Publisher by subscribing to it. The Publisher will then send a subscription back to the Subscriber. This subscription is used by the Subscriber to request values from the Publisher. Finally, the Publisher can start sending the requested values back to the Subscriber as requested. Depending on the Publisher type, it can send values that it has indefinitely, or it can complete with a success or failure. This is the basic structure and lifecycle used in Combine.
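That handshake can be sketched with a minimal hand-rolled Subscriber (IntSink is an illustrative name, not a Combine type):

```swift
import Combine

final class IntSink: Subscriber {
    typealias Input = Int
    typealias Failure = Never

    func receive(subscription: Subscription) {
        // The Publisher sends back a subscription; use it to request values.
        subscription.request(.unlimited)
    }

    func receive(_ input: Int) -> Subscribers.Demand {
        // Requested values arrive over time.
        print("value: \(input)")
        return .none
    }

    func receive(completion: Subscribers.Completion<Never>) {
        // Finally, the Publisher completes with success or failure.
        print("completion: \(completion)")
    }
}

// Attach the Subscriber to a Publisher by subscribing to it.
[1, 2, 3].publisher.subscribe(IntSink())
```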

Operators sit in between Publishers and Subscribers where they transform values received from a Publisher, called the upstream, and send them on to Subscribers, the downstream. In fact, operators act as both a Publisher and as a Subscriber.

Creating a Custom Operator

Let’s cover two different strategies for creating a custom Combine operator. In the first approach, we’ll use the composition of an existing chain of operators to create a reusable component. The second strategy is more involved but provides the ultimate in flexibility.

Composing a Combine Operator

In our first example, we’ll be creating a histogram from a random array of integer values. A histogram tells us the frequency at which each value in the sample data set appears. For example, if our sample data set has two occurrences of the number one, then our histogram will show a count of two as the number of occurrences of the number one.

// random sample of Int
let sample = [1, 3, 2, 1, 4, 2, 3, 2]    

// Histogram
//     key: a unique Int from the sample
//     value: the count of this unique Int in the sample
let histogram = [1: 2, 2: 3, 3: 2, 4: 1]

We can use Combine to calculate the histogram from a sample of random Int.

// random sample of Int

// 1
let sample = [1, 3, 2, 1, 4, 2, 3, 2]

// 2
sample.publisher
      // 3
      .reduce([Int:Int](), { accum, value in
        var next = accum

        if let current = next[value] {
            next[value] = current + 1
        } else {
            next[value] = 1
        }

        return next
    })
    // 4
    .map({ dictionary in
        dictionary.map { $0 }
    })
    // 5
    .map({ item in
        item.sorted { element1, element2 in
            element1.key < element2.key
        }
    })
    .sink { printHistogram(histogram: $0) }
    .store(in: &cancellables)

Which gives us the following output.

histogram standard operators:
1: 2
2: 3
3: 2
4: 1

Here is a breakdown of what is happening with the code:

  1. Define our sample data set
  2. Get a Publisher of our sample data
  3. Bin each unique value in the data set and increase a counter for each occurrence.
  4. Convert our Dictionary of binned values into an Array of key/value tuples. eg [(key: Int, value: Int)]
  5. Sort the array in ascending order by key

As you can see, we have created a series of chained Combine operators that calculates a histogram for a published data set of Int. But what if we use this sequence of code in more than one location? It would be really nice if we could use a single operator to perform this entire operator chain. This reuse not only makes our code more concise and easier to understand but easier to debug and maintain as well. So let’s do just that by composing a new operator based on what we’ve already done.

// 1
extension Publisher where Output == Int, Failure == Never {
    // 2
    func histogramComposed() -> AnyPublisher<[(key:Int, value:Int)], Never>{
        // 3
        self.reduce([Int:Int](), { accum, value in
            var next = accum

            if let current = next[value] {
                next[value] = current + 1
            } else {
                next[value] = 1
            }

            return next
        })
        .map({ dictionary in
            dictionary.map { $0 }
        })
        .map({ item in
            item.sorted { element1, element2 in
                element1.key < element2.key
            }
        })
        // 4
        .eraseToAnyPublisher()
    }
}

What is this code doing:

  1. Create an extension on Publisher and constrain its output to type Int
  2. Define a new function on Publisher that returns an AnyPublisher of our histogram output
  3. Perform the histogram chain of operators as in the previous example but this time on self. We use self here since we are executing on the current Publisher instance
  4. Type erase our publisher to be an AnyPublisher

Now let’s use our new Combine operator.

// 1
let sample = [1, 3, 2, 1, 4, 2, 3, 2]

// 2
sample.publisher
    .histogramComposed()
    .sink { printHistogram(histogram: $0) }
    .store(in: &cancellables)

Which gives us the following output.

histogram composed: 
1: 2
2: 3
3: 2
4: 1

Using the new composed histogram operator:

  1. Define our sample data set
  2. Directly use our new composed Combine histogram operator

From the example usage of our new histogram operator, you can see that the code at the point of usage is quite simple and reusable. This is a fantastic technique for creating a toolbox of reusable Combine operators.

The Complete Combine Operator

Creating a Combine operator through composition, as we have seen, is a great way to refactor existing code for reuse. However, composition does have its limitations, and that is where creating a native Combine operator becomes important.

A natively implemented Combine operator utilizes the Combine Publisher, Subscriber, and Subscription interfaces and relationships in order to provide its functionality. A native Combine operator acts as both a Subscriber of upstream data and a Publisher to downstream subscribers.

For this example, we’ll create a modulus operator implemented natively in Combine. The modulus is a mathematical operator which gives the remainder of a division as an absolute value and is represented by the percent sign, %. So, for example, 10 % 3 = 1, or 10 modulo 3 is 1 (10 ➗ 3 = 3 Remainder 1).
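One wrinkle worth noting: Swift's % is a remainder operation whose sign follows the dividend, which is why the implementation wraps it in abs to get the always-positive modulus described here:

```swift
print(10 % 3)        // 1
print(-10 % 3)       // -1: the remainder takes the dividend's sign
print(abs(-10 % 3))  // 1: abs restores the positive modulus
```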

Let’s look at the complete code for this native Combine operator, how to use it, and then discuss how it works.

// 1
struct ModulusOperator<Upstream: Publisher>: Publisher where Upstream.Output: SignedInteger {
    typealias Output = Upstream.Output // 2
    typealias Failure = Upstream.Failure

    let modulo: Upstream.Output
    let upstream: Upstream

    // 3
    func receive<S>(subscriber: S) where S : Subscriber, Self.Failure == S.Failure, Self.Output == S.Input {
        let bridge = ModulusOperatorBridge(modulo: modulo, downstream: subscriber)
        upstream.subscribe(bridge)
    }
}

extension ModulusOperator {
    // 4
    struct ModulusOperatorBridge<S>: Subscriber where S: Subscriber, S.Input == Output, S.Failure == Failure {
        typealias Input = S.Input
        typealias Failure = S.Failure

        // 5
        let modulo: S.Input
        
        // 6
        let downstream: S

        //7
        let combineIdentifier = CombineIdentifier()

        // 8
        func receive(subscription: Subscription) {
            downstream.receive(subscription: subscription)
        }

        // 9
        func receive(_ input: S.Input) -> Subscribers.Demand {
            downstream.receive(abs(input % modulo))
        }

        func receive(completion: Subscribers.Completion<S.Failure>) {
            downstream.receive(completion: completion)
        }
    }
}

// Note: `where Output == Int` here limits the `modulus` operator to 
// only being available on publishers of Ints.
extension Publisher where Output == Int {
    // 10
    func modulus(_ modulo: Int) -> ModulusOperator<Self> {
        return ModulusOperator(modulo: modulo, upstream: self)
    }
}

As the implementation shows, the modulus is always positive, and when the input is evenly divisible it is equal to 0.

How does the code work?

Now we can discuss how the native Combine operator code works.

  1. We define our new Combine operator as a Publisher with a constraint on some upstream Publishers output of type SignedInteger. Remember, our operator will be acting as both a Publisher and a Subscriber. Thus our input, the upstream, must be SignedIntegers.
  2. Our ModulusOperator output, acting as a Publisher, will be the same as our input (i.e. SignedIntegers).
  3. Required function implementation for Publisher. Creates a Subscription which acts as a bridge between the operators upstream Publisher and the downstream Subscriber.
  4. The ModulusOperatorBridge can act as both a Subscription and a Subscriber. However, simple operators like this one can be a Subscriber without the need of being a Subscription. This is due to the upstream handling lifecycle necessities like Demand. The upstream behavior is acceptable for our operator, so there is no need to implement Subscription. The ModulusOperatorBridge also performs the primary tasks of the modulus operator.
  5. Input parameter to the operator for the modulus that will be calculated.
  6. References to the downstream Subscriber and the upstream Publisher.
  7. CombineIdentifier for CustomCombineIdentifierConvertible conformance when a Subscription or Subject is implemented as a structure.
  8. Required function implementations for Subscriber. Passes the upstream Subscription straight through to the downstream Subscriber, which then uses it to request demand and manage the subscription lifecycle.
  9. Receives input as a Subscriber, performs the modulus operation on this input, and then passes it along to the downstream Subscriber. The new demand for data, if any, from the downstream is relayed to the upstream.
  10. Finally, an extension on Publisher makes our custom Combine operator available for use. The extension is limited to those upstream Publishers whose output is of type Int.

Putting this new modulus operator into action on a Publisher of Int would look like:

[-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10].publisher
    .modulus(3)
    .sink { modulus in
        print("modulus: \(modulus)")
    }
    .store(in: &cancellables)

Which gives us the following output.

modulus: 1
modulus: 0
modulus: 2
modulus: 1
modulus: 0
modulus: 2
modulus: 1
modulus: 0
modulus: 2
modulus: 1
modulus: 0
modulus: 1
modulus: 2
modulus: 0
modulus: 1
modulus: 2
modulus: 0
modulus: 1
modulus: 2
modulus: 0
modulus: 1

As you can see, the modulus operator will act upon a Publisher of Int. In this example, we’re taking the modulus of 3 for each Int value in turn.

Conclusion

Combine is a powerful declarative framework for the asynchronous processing of values over time. Its utility can be extended and customized even further through the creation of custom operators which act as processors in a pipeline of data. These operators can be created through composition, allowing for excellent reuse of common pipelines. They can also be created through direct implementation of the Combine Publisher, Subscriber, and Subscription protocols, which allows for the ultimate in flexibility and control over the flow of data.

Whenever you find yourself working with Combine, keep these techniques in mind and look for opportunities to create custom operators when relevant. A little time and effort creating a custom Combine operator can save you hours of work down the road.

Navigation in SwiftUI 4: NavigationView, NavigationLink, NavigationStack, and NavigationSplitView

Big Nerd Ranch iOS blog, Tue, 07 Jun 2022


SwiftUI has changed a great many things about how developers create applications for iOS, and not just in the way we lay out our views. One area of significant impact is the way we navigate between scenes. 

Until recently, we used NavigationView and NavigationLink. In June of 2022, Apple introduced NavigationStack and NavigationSplitView. Developers now have multiple methods of navigating through scenes:

  • NavigationView (deprecated).
  • NavigationLink.
  • NavigationStack.
  • NavigationSplitView.
  • Programmatic navigation.

Below, we’ll cover the basics of navigation in SwiftUI 4.

Big Nerd Note: This article has been updated as of 9/22/2022 regarding the new features, NavigationStack and NavigationSplitView.

 

NavigationView and NavigationLink: Basic Navigation in SwiftUI

Before NavigationStack and NavigationSplitView, SwiftUI introduced the NavigationView and NavigationLink structs. These two pieces work together to allow navigation between views in your application. The NavigationView is used to wrap the content of your views, setting them up for subsequent navigation. The NavigationLink does the actual work of assigning what content to navigate to and providing a UI component for the user to initiate that navigation. Let’s look at a quick example.

var body: some View {
    /// 1
    NavigationView {
        Text("Primary View")
        NavigationLink(
            /// 2
            destination: Text("Secondary View"),
            /// 3
            label: {
                Text("Navigate")
            })
    }
}

Let’s break the above down:

  1. We encompass our content within a NavigationView; otherwise, our NavigationLink will not function.
  2. Within our NavigationLink we assign the destination. This is whatever content we want to navigate to.
  3. Assigns a piece of content to act as a button for the navigation.

There are a number of ways to set up your NavigationLink, so check the documentation for the right flavor for your own implementation. The above example is just one of the simplest ways to handle basic navigation.

The most notable thing about this compared to the way navigation was previously handled in UIKit is that all of our code lives in a single location, within the SwiftUI view itself. This means we’ll have cleaner code in many instances, but also that any information we want to pass to subsequent screens must be available within the view we’re navigating from. This may seem like a minor distinction, but it will be important later.

NavigationStack and NavigationSplitView: New to SwiftUI 4

NavigationStack

NavigationStack builds a list of views over a root view rather than defining each view individually. When a user interacts with a NavigationLink, a view is added to the top of the stack. The stack will always show the most recently added view that hasn’t been removed.

To create a NavigationStack:

var body: some View {
    NavigationStack {
        List(users) { user in
            NavigationLink(user.name, value: user)
        }
        .navigationDestination(for: User.self) { user in
            UserDetails(user: user)
        }
    }
}
NavigationSplitView

NavigationSplitView is a special type of view that presents in two or three columns — particularly useful for tablets. Selections in the leading column (such as a menu) control the presentation in the next columns (such as a content box).

To create a NavigationSplitView:

var body: some View {
    NavigationSplitView {
        List(model.users, selection: $userids) { user in
            Text(user.name)
        }
    } detail: {
        UserDetails(for: userids)
    }
}

A developer can embed a NavigationStack within a NavigationSplitView column. On a small-format device, the columns will collapse.

Both NavigationStack and NavigationSplitView are newly introduced, but still utilize NavigationLink. Apple provides migration directions for transitioning older navigation types to these newer navigation types.
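Because the stack is just data, it can also be driven programmatically by binding a path. Here is a minimal sketch of that idea — the User and UserDetails types are stand-ins for your own models and views, and appending to the bound array pushes a view onto the stack:

```swift
import SwiftUI

struct User: Identifiable, Hashable {
    let id: Int
    let name: String
}

struct UserDetails: View {
    let user: User
    var body: some View { Text(user.name) }
}

struct UsersView: View {
    let users: [User]
    // Appending to `path` pushes a view; removing from it pops one.
    @State private var path: [User] = []

    var body: some View {
        NavigationStack(path: $path) {
            List(users) { user in
                NavigationLink(user.name, value: user)
            }
            .navigationDestination(for: User.self) { user in
                UserDetails(user: user)
            }
            .toolbar {
                Button("Jump to first user") {
                    if let first = users.first {
                        path.append(first)
                    }
                }
            }
        }
    }
}
```

Popping back to the root is then just `path.removeAll()` — no chain of dismiss calls required.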

Programmatic Navigation in SwiftUI

What happens when I need to await a response from an API call before navigating? What about waiting on the completion of an animation? What if I need to perform validation on user fields before navigating? WHAT THEN!?!?!? Well, you can set up your NavigationLink with an isActive parameter that allows you to toggle the navigation. Let’s check out a quick example.

/// 1
@State var goWhenTrue: Bool = false

var body: some View {
    NavigationView {
        Text("Primary View")
        /// 2
        NavigationLink("Navigator",
                       destination: Text("Subsequent View"),
                       isActive: $goWhenTrue)
    }
    
    Button("Go Now") {
        /// 3
        goWhenTrue = true
    }
}
  1. We need some variable to act on, and that variable must be bindable (@Binding, @State, etc.) so that the view will re-render and navigate when it is updated.
  2. Here we set up the NavigationLink. The first parameter is a tag for the link itself and is not displayed in the UI. The isActive parameter is a boolean that controls whether the navigation executes.
  3. Here we set the state variable to true, thus triggering the navigation in (2).

This can be set up in any number of different ways, such as the variable being controlled by the view, a view model, a singleton, or any other point of reference within the application that the view can reach for the isActive binding. This means we can now wait on other criteria or completions before navigating, but does that really solve all of our problems?
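For example, to navigate only after some asynchronous work succeeds, the bound variable can be flipped from a Task. A sketch of that pattern — the validate function here is a hypothetical stand-in for your API call or field validation:

```swift
import SwiftUI

struct FormView: View {
    @State private var goWhenTrue = false
    @State private var name = ""

    var body: some View {
        NavigationView {
            VStack {
                TextField("Name", text: $name)
                NavigationLink("Navigator",
                               destination: Text("Subsequent View"),
                               isActive: $goWhenTrue)
                Button("Submit") {
                    // Flip the binding only after the async work finishes.
                    Task {
                        goWhenTrue = await validate(name)
                    }
                }
            }
        }
    }

    // Hypothetical stand-in for an API call or validation step.
    func validate(_ name: String) async -> Bool {
        !name.isEmpty
    }
}
```

The navigation fires the moment the state flips, so the same shape works for animations or any other completion you can await.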

Programmatic Navigation Outside of Views

I mentioned above that all navigation must be configured within the SwiftUI view for navigation to be possible. We can trigger that navigation from other places using the steps outlined above, but what if our navigation is prompted from outside of the view hierarchy? This can be the case for Deep Linking, responding to an asynchronous event, or any number of other reasons (use your imagination). Unfortunately, there isn’t a good answer for this, but in the interest of scholarly pursuit, let’s see what we can do! Below is a potential workaround for this issue. We’ll start with our NavigationCoordinator which stores our content before we perform navigation.

class NavigationCoordinator: ObservableObject {
    /// 1
    fileprivate var screen: AnyView = AnyView(EmptyView())
    /// 2
    @Published fileprivate var shouldNavigate: Bool = false
    
    ///3
    func show<V: View>(_ view: V) {
        let wrapped = AnyView(view)
        screen = wrapped
        shouldNavigate = true
    }
}
  1. This stores the content that we want to navigate to.
  2. Our binding variable for determining when we want to navigate.
  3. A helper method that handles the wrapping of our content and kicks off navigation automatically when assigned.

Next, we’ll look at our NavigationWrapper which is implemented in any SwiftUI view that we want to be able to navigate from.

struct NavigationWrapper<Content>: View where Content: View {
    /// 1
    @EnvironmentObject var coordinator: NavigationCoordinator
    /// 2
    private let content: Content
    
    public init(@ViewBuilder content: () -> Content) {
        self.content = content()
    }

    var body: some View {
        NavigationView {
            /// 3
            if coordinator.shouldNavigate {
                NavigationLink(
                    destination: coordinator.screen,
                    isActive: $coordinator.shouldNavigate,
                    label: {
                        content
                    })
            } else {
                content
            }
        }
        /// 4
        .onDisappear(perform: {
            coordinator.shouldNavigate = false
        })
    }
}
  1. This is where we store our coordinator information for subsequent use during navigation.
  2. This is where we store the content that will be displayed on the view we’re navigating from. This is essentially your view body.
  3. Here we check to see if the coordinator should navigate. This will be checked whenever the environment object is updated and will trigger navigation when things have been set properly.
  4. Once we have successfully navigated away from this view, we want to set shouldNavigate to false to prevent the next view in the hierarchy from attempting navigation as well on load.

This is designed with the intent that you can use the above implementations to allow navigation from anywhere within your application to a new view, even outside of a SwiftUI view. It is a bit heavy-handed, as it requires that any scene you implement wrap its content in the NavigationWrapper and pass the appropriate coordinator information to it. Beyond that, it is also by no means a “best practices” implementation as it can create additional overhead and is only necessary for specific instances.
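To make that wiring concrete, here is a sketch of how a screen might adopt the NavigationWrapper and NavigationCoordinator defined above — HomeView and the “Settings” content are placeholder names:

```swift
import SwiftUI

struct HomeView: View {
    @EnvironmentObject var coordinator: NavigationCoordinator

    var body: some View {
        NavigationWrapper {
            Button("Show Settings") {
                // This same call could come from a deep link handler or any
                // other code that holds a reference to the coordinator.
                coordinator.show(Text("Settings"))
            }
        }
    }
}

// At the root of the app, inject one shared coordinator:
// HomeView().environmentObject(NavigationCoordinator())
```

Because the coordinator is an ObservableObject, any code that can reach it — not just views — can trigger the push.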

Final Thoughts

At this time, it seems that programmatic navigation within SwiftUI is always going to come with certain caveats and additional considerations. Be sure to think through your navigation thoroughly and consider how you can centralize how your users move through your application.

The post Navigation in SwiftUI 4: NavigationView, NavigationLink, NavigationStack, and NavigationSplitView appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/the-different-forms-of-navigation-in-swiftui/feed/ 0
Mocking With Protocols in Swift https://bignerdranch.com/blog/mocking-with-protocols-in-swift/ https://bignerdranch.com/blog/mocking-with-protocols-in-swift/#respond Fri, 17 Dec 2021 13:30:00 +0000 https://nerdranchighq.wpengine.com/blog/mocking-with-protocols-in-swift/ Let's get right to it. You need to test your code, and you need to test it often. You do a lot of manual testing throughout the development process, find bugs, and fix them. While this can be very beneficial, it leaves much of the code untested. When I say untested, I mean untested by you. The code will be tested at some point, it just might be by one of your users. This is where writing automated unit tests comes in, however, it is often the last thing developers do, if at all. Where do you start? How do you make this class testable? Many of these challenges can be overcome by using protocols to mock our objects in testing.

The post Mocking With Protocols in Swift appeared first on Big Nerd Ranch.

]]>
Let’s get right to it. You need to test your code, and you need to test it often. You do a lot of manual testing throughout the development process, find bugs, and fix them. While this can be very beneficial, it leaves much of the code untested. When I say untested, I mean untested by you. The code will be tested at some point, it just might be by one of your users. This is where writing automated unit tests comes in, however, it is often the last thing developers do, if at all. Where do you start? How do you make this class testable? Many of these challenges can be overcome by using protocols.

Regardless of your testing methods, when you are writing testable code, there are important characteristics your code should adhere to:

  • You need to have control over any inputs. This includes any and all inputs that your class acts on. 
  • You need visibility into the outputs. There needs to be a way to inspect the outputs generated by your code. Your unit tests will use the outputs to validate things are working as expected. 
  • There should be no hidden state. You should avoid relying on internal system state that can affect your code’s behavior later. 

Using Swift mock protocols can help meet these characteristics while still allowing for dependencies.

Mocking with Protocols

Mocking is imitating something or someone’s behavior or actions. In automated tests, it is creating an object that conforms to the same behavior as the real object it is mocking. Many times the object you want to test has a dependency on an object that you have no control over. 

There are several ways to create mock objects for iOS unit testing. One way is to subclass the real object. With this approach you can override all the methods you use in your code for easy testing, right? Wrong. Subclassing many of these objects comes with hidden difficulties. Here are a few:

  • Unknown state: You don’t know if your object has any shared owners, which can result in one of them mutating the expected state of your mock.
  • Unexpected behavior: A change in your superclass, or its superclass, can have unknown effects on your mock.
  • Some classes cannot be subclassed, like UIApplication.

Also, Swift structs are powerful and useful value types. Structs, however, cannot be subclassed. If subclassing is not an option, then how can the code be tested?

Protocols! In Swift, protocols are full-fledged types. This allows you to set properties using a protocol as its type. Using protocols for testing overcomes many of the difficulties that come with subclassing code you don’t own and the inability to subclass structs.

Swift Mocking Example

In the example, you have a class that interacts with the file system. The class has basic interactions with the file system, such as reading and deleting files. For now, the focus will be on deleting files. The file is represented by a struct called MediaFile which looks like this.

struct MediaFile {
    var name: String
    var path: URL
}

The FileInteraction struct is a convenience wrapper around FileManager that allows easy deletion of the MediaFile.

struct FileInteraction {
    func delete(_ mediaFile: MediaFile) throws -> () {
        try FileManager.default.removeItem(at: mediaFile.path)
    }
}

All of this is managed by the MediaManager class. This class keeps track of all of the user’s media files and provides a method for deleting all of the user’s media. The deleteAll method returns true if all the files were deleted. Any files that cannot be deleted are put back in the media array.

class MediaManager {
    let fileInteraction: FileInteraction = FileInteraction()
    var media: [MediaFile] = []
   
    func deleteAll() -> Bool {
        var unsuccessful: [MediaFile] = []
        var result = true
        for item in media {
            do {
                try fileInteraction.delete(item)
            } catch {
                unsuccessful.append(item)
                result = false
            }
        }
        media = unsuccessful
        return result
    }
}

This code, as it stands, is not very testable. It is possible to copy some files to a directory, create the MediaManager with MediaFiles that point to them, and run a test. This, however, is not repeatable or fast. A protocol can be used to make the tests fast and repeatable. The goal is to mock the FileInteraction struct without disrupting MediaManager. To do this, create a protocol with the delete method signature and declare FileInteraction’s conformance to it.

protocol FileInteractionProtocol {
    func delete(_ mediaFile: MediaFile) throws -> ()
}

struct FileInteraction: FileInteractionProtocol {
    ...
}

There are two changes to MediaManager that need to be implemented. First, the type of the fileInteraction property needs to be changed to the protocol. Second, add an init method that takes a fileInteraction parameter and gives it a default value.

class MediaManager {
    let fileInteraction: FileInteractionProtocol
    var media: [MediaFile] = []
    
    init(fileInteraction: FileInteractionProtocol = FileInteraction()) {
        self.fileInteraction = fileInteraction
    }

    ...
}

Now MediaManager can be tested. To do so, a mock FileInteraction type will be needed.

struct MockFileInteraction: FileInteractionProtocol {
    func delete(_ mediaFile: MediaFile) throws {
        // Intentionally empty: every delete "succeeds".
    }
}

Now the test class can be created.

class MediaManagerTests: XCTestCase {
    var mediaManager: MediaManager!

    override func setUp() {
        mediaManager = MediaManager(fileInteraction: MockFileInteraction())

        let media = [
            MediaFile(name: "file 1", path: URL(string: "/")!),
            MediaFile(name: "file 2", path: URL(string: "/")!),
            MediaFile(name: "file 3", path: URL(string: "/")!),
            MediaFile(name: "file 4", path: URL(string: "/")!)
        ]
        
        mediaManager.media = media
    }

    func testDeleteAll() {
        XCTAssert(mediaManager.deleteAll(), "Could not delete all files")
        XCTAssert(mediaManager.media.count == 0, "Media array not cleared")
    }
}

All of this looks good, except the delete method is marked as throws, but the failure path is never tested. To cover it, create another mock that always throws an error.

struct MockFileInteractionException: FileInteractionProtocol {
    enum MockError: Error {
        case fileNotDeleted
    }

    func delete(_ mediaFile: MediaFile) throws {
        throw MockError.fileNotDeleted
    }
}

Then modify the test class.

class MediaManagerTests: XCTestCase {
    var mediaManager: MediaManager!
    var mediaManagerException: MediaManager!

    override func setUp() {
        mediaManager = MediaManager(fileInteraction: MockFileInteraction())
        mediaManagerException = MediaManager(fileInteraction: MockFileInteractionException())
        
        let media = [
            MediaFile(name: "file 1", path: URL(string: "/")!),
            MediaFile(name: "file 2", path: URL(string: "/")!),
            MediaFile(name: "file 3", path: URL(string: "/")!),
            MediaFile(name: "file 4", path: URL(string: "/")!)
        ]
        
        mediaManager.media = media
        mediaManagerException.media = media
    }
    
    func testDeleteAll() {
        XCTAssert(mediaManager.deleteAll(), "Could not delete all files")
        XCTAssert(mediaManager.media.count == 0, "Media array not cleared")
    }
    
    func testDeleteAllFailed() {
        XCTAssert(!mediaManagerException.deleteAll(), "Exception not thrown")
        XCTAssert(mediaManagerException.media.count > 0, "Media array was incorrectly cleared")
    }
}

Mocks, Dummies, Stubs, and Fakes

A Swift mock is only one type of “test double”–a test double being a replacement entity used solely for testing. There are four commonly used types of test double:

  • Mocks.
  • Dummies.
  • Stubs.
  • Fakes.

Dummy objects are empty objects used within a unit test. When you use a dummy object, you isolate the code being tested; while you can determine whether the code correctly calls the dummy, the dummy does nothing.

Stub objects are objects that always return a specific set of canned data. For example, you could set an error-handling object to always return “true” if you wanted to test a failure condition, or drop a stub object into legacy code to test and isolate behavior.

Fake objects are objects that still roughly correlate to a “real” object, but are simplified for the sake of testing. Instead of pulling data from a database, you might return a specific set of data that could have been pulled from the database. Fake objects are similar to stubs, just with more complexity.

While there are use cases for dummies, stubs, and fakes–such as UI tests–mocks frequently provide more comprehensive Swift unit test data. Mocking protocols are particularly useful for dynamic environments and dependency injection. Use mocks for complex tasks such as integration tests.

Partial vs. Complete Mocking

One final element to discuss is the practice of partial mocking vs. complete mocking.

Complete mocking refers to mocking the entirety of the protocol, while partial mocking refers to when you override a specific behavior or behaviors in the protocol. 

When unit testing, software engineers use partial mocking to drill down to specific behaviors within the protocol. Otherwise, partial and complete mocking are virtually identical–the difference lies in scope. 
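One way to approximate partial mocking in Swift is with protocol extensions: default implementations cover the full surface, and a mock overrides only the behavior under test. A minimal sketch — the FileServicing protocol and its methods are invented here for illustration:

```swift
protocol FileServicing {
    func delete(_ name: String) throws
    func read(_ name: String) throws -> String
}

// Default "do nothing / return something harmless" implementations
// give every mock a complete surface for free.
extension FileServicing {
    func delete(_ name: String) throws {}
    func read(_ name: String) throws -> String { "" }
}

// A partial mock overrides only the one behavior the test cares about;
// read(_:) still falls back to the extension default.
struct FailingDeleteMock: FileServicing {
    struct DeleteFailed: Error {}
    func delete(_ name: String) throws { throw DeleteFailed() }
}
```

A test can then exercise the failure path of delete while every other call behaves benignly.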

Summary

Initially, the MediaManager deleteAll method was not very testable. Using a protocol to mock interaction with the file system made testing this code repeatable and fast. The same principles used for testing the deleteAll method can be applied to other areas of interaction, such as reading, updating, or moving files around. Mock protocols can also be used to mock Foundation classes such as URLSession and FileManager where applicable.

In addition to mock protocols, there are also test suites, mocking libraries, and mocking frameworks, such as Mockingbird–and the automated testing provided by continuous integration/delivery before pushing production code. Nevertheless, you should still know how to hand code your mocking and develop your own tests.

Protocols are powerful tools for testing code–and testing should never be an afterthought. Learn more about high-quality, resilient code generation and test-driven development at our iOS and Swift Essentials bootcamp.

The post Mocking With Protocols in Swift appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/mocking-with-protocols-in-swift/feed/ 0
Getting started with computer vision using CoreML and CreateML https://bignerdranch.com/blog/getting-started-with-computer-vision-using-coreml-and-createml/ https://bignerdranch.com/blog/getting-started-with-computer-vision-using-coreml-and-createml/#respond Mon, 01 Nov 2021 10:00:53 +0000 https://bignerdranch.com/?p=9124 Over the past several years the barrier to entry for adopting machine learning and computer vision-related features has dropped substantially. Still, many developers are intimidated, or just don’t know where to start to take advantage of these state-of-the-art technologies. In this blog post, you will learn what computer vision is. You will then dive into […]

The post Getting started with computer vision using CoreML and CreateML appeared first on Big Nerd Ranch.

]]>
Over the past several years the barrier to entry for adopting machine learning and computer vision-related features has dropped substantially. Still, many developers are intimidated, or just don’t know where to start to take advantage of these state-of-the-art technologies. In this blog post, you will learn what computer vision is. You will then dive into how to use CreateML to train a model. Finally, you will take a look at how to use the model you created with the CoreML and Vision frameworks.

What is computer vision?

Computer vision, in its simplest terms, is the ability to give a machine an image and have the machine give back meaningful information about the contents of that image. Computer vision is mostly used in two ways today. The first is image classification, where a machine can identify what an image is as a whole but has no concept of which part of the image contains the detected items. The other is object detection. While similar to image classification, object detection is able to find individual items in an image and report their locations within it. This can be built upon further to perform object tracking and is widely used in fields like self-driving cars and Snap filters.

Collecting training data.

The first step in any machine learning project is training a model to teach the computer what it should look for. For the purposes of experimenting with your first ML project, Google Images combined with a bulk image downloader like Fatkun Batch Image Downloader can be a great resource.

You will need a minimum of 50 images for each kind of item you would like to identify, but your results will be better with more. A test data set, while not strictly required, will also go a long way toward helping validate that your model works as it should. Make sure your training data is organized in folders like in the screenshot below. You will need at least two different categories of images for image classification.

If you’re creating an object detection model, you will need to include an annotations.json file in each directory that has training or testing images. This file lists the different objects that can be found in each image as well as their locations in the image. Creating this file is something you can do by hand, but given that your data set requires at least 50 images, this can be extremely time-consuming. While it is frustrating that Apple leaves us to come up with our own way to do all this data entry, the HackerNoon article How to Label Data — Create ML for Object Detection walks you through generating an annotations file using IBM Cloud. This does a lot to make the process less painful. Once you have your data set up, you can move on to training your model.
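As a rough illustration of the shape CreateML expects (the file names and labels here are invented — verify the exact schema against Apple’s CreateML documentation), each entry names an image and lists the objects in it, where coordinates give the center point plus the width and height of the bounding box in pixels:

```json
[
    {
        "image": "ferrari_001.jpg",
        "annotations": [
            {
                "label": "ferrari",
                "coordinates": { "x": 160, "y": 120, "width": 190, "height": 80 }
            }
        ]
    }
]
```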

Training your model.

With your data collected and organized all you need to do to create a working .mlmodel file is

  1. Open CreateML and create a new project of type Image Classifier or Object Detection.
  2. Drag and drop the directories that contain your training data onto the CreateML window and make sure it looks like the screenshot below.
  3. Click train.

This phase may take several hours to complete. As training continues, you should see a graph on the screen trend toward 0. This indicates that the computer is getting better at recognizing the images provided.

When the training completes, you will have access to the Preview and Output tabs of CreateML. The Preview tab allows you to manually drag and drop photos to validate your newly created model. While doing this isn’t strictly necessary, it is a good way to quickly test your model before moving on. If you make use of this tab for additional testing, make sure you are feeding it new images that are not included in either the training or test data sets.

When you are comfortable that the model is good enough to move to the next stage of your project you will use the Output tab to export your work to a .mlmodel file.

Using your model.

Now that you have a model, it is time to explore how to actually process images and get data back about them. Luckily, the Vision framework has all the functionality you will need to execute queries against the model. The images you use can come directly from the camera using an AVCaptureSession or come from anywhere else in the form of a CGImage. At this point, processing an image is pretty straightforward. Let’s take a look at the code below.

// 1. Convert the CMSampleBuffer from a capture session preview into a CVPixelBuffer.
guard let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

// 2. Bring in the model that powers all the heavy lifting in CoreML.
// All of the code to initialize the model is generated automatically when you import the mlmodel file into the project.
guard let model = try? VNCoreMLModel(for: FerrariObjectDetector(configuration: MLModelConfiguration()).model) else { return }
let request = VNCoreMLRequest(model: model, completionHandler: requestCompletionHandler)

// 3. Where the magic happens. Passes the buffer we want Vision to analyze and the request we want Vision to perform on it.
try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    .perform([request])

This is all you need to have Vision return objects recognized in an image. One interesting thing to note is that VNImageRequestHandler can handle multiple requests on the same image at a time. The completion handler in this case is defined outside of this block of code to make reading it easier. Let’s look at it now to get an idea of what we can do with this framework and what some of the sticking points might be.

VNRequestCompletionHandler is really just a closure that takes a VNRequest and an optional error as parameters and returns void: (VNRequest, Error?) -> Void. The completion handler in the example looks like this.

{ completedRequest, error in
    // 1. Keep an eye on what type you expect your results to cast to.
    guard error == nil,
          let results = completedRequest.results as? [VNRecognizedObjectObservation] else { return }
    // 2. You may want to do more filtering here,
    // e.g. check for overlap, or changes in objects since the last frame.
    if !results.isEmpty {
        // Remember we are running on the video queue;
        // switch back to main for updating UI.
        DispatchQueue.main.async {
            self.handleResults(for: results)
        }
    } else {
        DispatchQueue.main.async {
            self.clearResults()
        }
    }
}

An easy gotcha is that the results you get back will depend on the type of model you send to your VNCoreMLRequest. If you are using an image classification model, make sure to cast to [VNClassificationObservation]. In this case we’re casting to [VNRecognizedObjectObservation] because we’re using an object detection model. Once you have your collection of recognized objects, there are two properties you will mostly be concerned with: .labels and, in the case of object detection, .boundingBox. The labels array contains each item the machine thinks a detected object might be, ranked by the model’s confidence in that classification. In most cases you will want the first item in the array. You can get the actual title string and confidence using a label’s identifier and confidence properties.

The bounding box is returned as a CGRect. It is important to note that Vision uses a different coordinate system than UIView/CALayer: its origin is at the bottom left, so the rect will need to be flipped into UIKit’s top-left coordinate system to correctly make an overlay for the object. While that falls outside of the scope of this blog post, getting these coordinates to true up should be easy enough with a little effort.
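Though the details are left as an exercise, a sketch of that flip might look like the following (the function name is ours; it assumes the bounding box is normalized to [0, 1] with a bottom-left origin, as Vision reports it):

```swift
import Foundation

// Convert a Vision-style normalized bounding box (origin at the bottom
// left) into a view's top-left coordinate space.
func viewRect(for boundingBox: CGRect, in viewSize: CGSize) -> CGRect {
    CGRect(
        x: boundingBox.origin.x * viewSize.width,
        // Flip the y axis: Vision measures y from the bottom edge.
        y: (1 - boundingBox.origin.y - boundingBox.height) * viewSize.height,
        width: boundingBox.width * viewSize.width,
        height: boundingBox.height * viewSize.height
    )
}
```

For example, a box covering the bottom-left quarter of the image maps to the top half of the left edge once flipped into a 100×200 view.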

Recap

In this post, you learned what computer vision is, where to get training data, and how to train a model with that data using CreateML. You then learned how to use the exported model. With this in mind, you should have everything you need to start experimenting with your first computer vision project on Apple devices. For more information feel free to check out Apple’s own documentation Recognizing Objects in Live Capture and an example project Understanding a Dice Roll with Vision and Object Detection. Now go out there and get started!

The post Getting started with computer vision using CoreML and CreateML appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/getting-started-with-computer-vision-using-coreml-and-createml/feed/ 0
WWDC21 Round Up https://bignerdranch.com/blog/wwdc21-round-up/ https://bignerdranch.com/blog/wwdc21-round-up/#respond Fri, 11 Jun 2021 19:31:01 +0000 https://bignerdranch.com/?p=7550 This year Apple teased a packed WWDC with over 200 sessions. Monday’s (2-hour long!) Keynote moved at breakneck speed, and the Platforms State of the Union gave us a glimpse into what we’ll be learning about for the rest of the week. Let’s survey what’s new and exciting this week. Like everyone here at BNR, […]

The post WWDC21 Round Up appeared first on Big Nerd Ranch.

]]>
This year Apple teased a packed WWDC with over 200 sessions. Monday’s (2-hour long!) Keynote moved at breakneck speed, and the Platforms State of the Union gave us a glimpse into what we’ll be learning about for the rest of the week. Let’s survey what’s new and exciting this week.

Like everyone here at BNR, I’m a nerd. So, of course, I immediately want to dive into new developer tools.

Developer Tools

The biggest updates that developers will experience daily are the new Swift Concurrency features. Async/await and Actors are two new, powerful tools that are landing in Swift 5.5 and Xcode 13. You can see Apple’s commitment to these new features in the APIs. Async versions of functions have been added to many of the frameworks you already use, such as Foundation’s URLSession. There’s also @MainActor, which is a new way to specify that an asynchronous function or property access must run on the main thread. You can now get compiler errors for wrongly multi-threaded code, like trying to update UI state off of the main thread! This is huge!
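A small sketch of how those pieces fit together — the types and function here are our own, not Apple API:

```swift
import Foundation

// An async function, e.g. one wrapping a network call.
func fetchScore() async throws -> Int {
    try await Task.sleep(nanoseconds: 10_000_000) // simulate latency
    return 42
}

// @MainActor guarantees this type's state is only touched on the main
// thread; the compiler now rejects off-main-thread mutations.
@MainActor
final class ScoreModel {
    private(set) var score = 0

    func refresh() async {
        score = (try? await fetchScore()) ?? 0
    }
}
```

From a view, kicking this off is just `Task { await model.refresh() }` — no dispatch queues, no completion handlers.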

Xcode 13 has a fresh coat of paint, with much deeper integration with source control platforms like GitHub. You can create pull requests inside Xcode and see code review comments inline and even reply without leaving Xcode.

Apple’s also launching Xcode Cloud, a continuous integration and delivery platform deeply embedded into Xcode and TestFlight. You can set up builds to deploy and tests to run on different branches of your project, all inside Xcode’s UI. I’m surely not going to miss poking at YAML files.

Swift projects get a new tool for generating documentation: DocC, the Documentation Compiler. Markdown documentation comments in your Swift code can now be compiled into richly formatted documentation pages (à la the Apple Dev Docs) that you can distribute to users of your APIs or even just within your own team.

I’ve waited so long for this one: the new Swift Playgrounds for iPad. Entire SwiftUI apps can be built, tested, and deployed to the App Store directly from your iPad. Even more than before, Swift Playgrounds will be an amazing tool for getting started with iOS development. It’s basically a mini-Xcode, and it looks so fun for those weekend side-projects I’ve been meaning to get around to.

iOS 15

Apple opened the Keynote by introducing us to iOS 15, with a focus on shared experiences. SharePlay is a new feature that allows users in a FaceTime call to have shared experiences, such as seamlessly watching videos together. Video content isn’t the only thing SharePlay supports. The GroupActivities framework allows developers to build app features that users can share live over FaceTime. Definitely check out the SharePlay and GroupActivities sessions this week.

The next focus for iOS 15 is… Focus. This is an expansion of Do Not Disturb that allows you to customize your notifications and Home Screen to hide unnecessary distractions based on contexts you create. The API change to pay attention to here is the updates to notification importance and the new Notifications Summary powered by on-device intelligence. Developers can now specify the importance level of each notification so that users are only interrupted by notifications that are relevant to them at the time. This will be a great way to keep users engaged without annoying them into disabling notifications entirely for your app.
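As a sketch of the API direction (the identifier, copy, and timing below are made up, and Time Sensitive delivery also requires the corresponding capability in your project), setting a notification’s interruption level might look like:

```swift
import UserNotifications

// Hypothetical reminder. interruptionLevel is the new iOS 15 knob that
// tells the system how urgently this notification should be delivered.
let content = UNMutableNotificationContent()
content.title = "Standup in 5 minutes"
content.body = "Join the team call."
content.interruptionLevel = .timeSensitive // may break through a Focus

let request = UNNotificationRequest(
    identifier: "standup-reminder",
    content: content,
    trigger: UNTimeIntervalNotificationTrigger(timeInterval: 300, repeats: false))

UNUserNotificationCenter.current().add(request)
```

Less urgent notifications can use `.passive` so they land quietly in the summary instead of interrupting the user.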

There’s still so much more in iOS, but I’ll move on for now.

iPadOS 15

The iPad gained some extra features in addition to those in iOS 15. Widgets can now be used on the iPad Home Screen, along with a new extra-large widget size that’s exclusive to the iPad. Adding widgets to your app is even more of a good idea this year. Your newly-added widgets can even be surfaced to users via Intent donation and on-device intelligence before the user has explicitly added your widget.

The UI for iPad multitasking got some love this year too. In addition to the gestures for splitting and rearranging app windows, there’s now a dedicated control in the status bar for splitting windows or opening an app in Slide Over. Also, tapping an app icon in the Dock will show a temporary overlay showing the user all of the windows they have open for that app, without needing to go to the App Switcher. This fall users are going to expect that your app supports Split Screen, Slide Over, and multiple windows. If you’ve been waiting, now’s the time to add that support!

macOS 12 Monterey

Lots of macOS updates are coming this year. SharePlay support also comes to macOS, AR authoring gets some new tools such as the Object Capture API, and Mac Catalyst apps get some new APIs to be better macOS citizens like menu button styles and more native cursor types.

The Shortcuts app comes to the Mac this year, part of a multi-year update to scripting and Automator. Like on iOS, Mac apps can now offer actions to the Shortcuts app to allow users to build custom workflows.

TestFlight for Mac is coming later this year, so beta testing apps will finally be unified across all of Apple’s platforms.

Here’s to an overstuffed WWDC week!

Thankfully we can all take in all these new session videos at our own pace. I recommend watching all of the “What’s New” sessions in areas you’re interested in first. Then, download the latest Xcode beta and follow along with the deeper sessions and try out the APIs as they talk about them, pausing the video as necessary. There are even some code-along sessions built around this type of live experimentation. Pace yourself, drink water, take bathroom breaks, and remember to have fun!

The post WWDC21 Round Up appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/wwdc21-round-up/feed/ 0
Learning how to think in pipelines with Combine https://bignerdranch.com/blog/learning-how-to-think-in-pipelines-with-combine/ https://bignerdranch.com/blog/learning-how-to-think-in-pipelines-with-combine/#respond Tue, 18 May 2021 13:48:02 +0000 https://bignerdranch.com/?p=7440 In this post, we’re going to follow an iterative, real-world example of how to solve a complex problem with Combine. You’ll also learn how to overcome the pitfalls of procedural thinking when designing Combine pipelines. Let’s get started. How would you solve this problem? You’re implementing a complex data loading system. You have data […]

The post Learning how to think in pipelines with Combine appeared first on Big Nerd Ranch.

]]>
In this post, we’re going to follow an iterative, real-world example of how to solve a complex problem with Combine. You’ll also learn how to overcome the pitfalls of procedural thinking when designing Combine pipelines. Let’s get started.

How would you solve this problem?

You’re implementing a complex data loading system.

  • You have data sources A, B, and C to read from
  • Each needs to be connected/initialized before reading any data from it
  • To initialize B and C, you must read a configuration object from A
  • All the data sources are synced from a cloud service automatically when initialized, which could take a variable amount of time for each
  • An auth token is required to open the data sources, which must be fetched from a web service

With each of these requirements, the complexity grows. In a real project, these requirements may have been added over months and multiple shipping versions of the app. Without the full context from the start, accounting for the final complexity becomes very difficult.

An experienced reader may have already recognized these as asynchronous problems, and knowing that, the complexity compounds further. We have to manage callbacks and dispatch queues to avoid blocking the main thread, which is tricky but nothing too painful. You may even reach for operation queues, which would also help with the dependency management for this data.

You can download the full Swift Playground and follow along. There are multiple pages, each corresponding to one of the steps below, and a Common.swift file that contains some of the convenience functions and type definitions used in these examples.

Simplicity is Key, Right?

In a naive, single-threaded case (or our glorious async/await future, but that’s another blog post), your code may look something like this:

// From page "01 - Sequential BG Queue"
func getAllDataSources(userName: String) -> MyDataSourceFacade {
    let token = getTokenFromServer()

    let A = getDataSourceA(token)

    let userData = A.getData(for: userName)

    let B = getDataSourceB(userData, token)
    let C = getDataSourceC(userData, token)

    return MyDataSourceFacade(userData, A, B, C)
}

You may notice one big thing that’s missing from this example: error handling. So it would be a bit more complex in reality but roughly the same structure.

To get this off the main thread, we’d need something like the following:

// From page "01 - Sequential BG Queue"
DispatchQueue.global(qos: .userInitiated).async {
    let facade = getAllDataSources(userName: "Jim")

    DispatchQueue.main.async {
        print("done!")
        // do something with facade
    }
}

It’s a familiar pattern, but it’s very brittle and is prone to simple errors when adding functionality. It’s also very static. What if someone refactors the code and forgets to dispatch the code off and on the main thread properly? What if the auth token expires and we need to start the process over?

A First Try with Combine

Thankfully these things are much easier in a pipeline-oriented paradigm like Combine. A very natural way to update this for Combine is to replace the variables with Subjects or @Published properties, then fuse them all together like this:

class FacadeProvider {

    @Published private var token: String
    @Published private var A: MyDataSource
    @Published private var B: MyDataSource
    @Published private var C: MyDataSource
    @Published private var userData: MyUserData

    private var cancellables: [AnyCancellable] = []

    func getAllDataSources(userName: String) -> AnyPublisher<MyDataSourceFacade, Never> {

        cancellables = []

        getTokenPublisher()
            .logError()
            .subscribe(on: backgroundQueue)
            .assign(to: \.token, on: self)
            .store(in: &cancellables)

        $token
            .tryMap { getDataSourceA($0) }
            .logError()
            .subscribe(on: backgroundQueue)
            .assign(to: \.A, on: self)
            .store(in: &cancellables)

        $A
            .tryMap { $0.getData(for: userName) }
            .logError()
            .subscribe(on: backgroundQueue)
            .assign(to: \.userData, on: self)
            .store(in: &cancellables)

        let userAndTokenPub = $userData.combineLatest($token)

        userAndTokenPub
            .tryMap { getDataSourceB($0.0, $0.1) }
            .logError()
            .subscribe(on: backgroundQueue)
            .assign(to: \.B, on: self)
            .store(in: &cancellables)

        userAndTokenPub
            .tryMap { getDataSourceC($0.0, $0.1) }
            .logError()
            .subscribe(on: backgroundQueue)
            .assign(to: \.C, on: self)
            .store(in: &cancellables)

        return $userData.combineLatest($A, $B, $C)
            .map { (userData, A, B, C) -> MyDataSourceFacade? in
                return MyDataSourceFacade(userData, A, B, C)
            }
            .subscribe(on: backgroundQueue)
            .receive(on: DispatchQueue.main)
            .eraseToAnyPublisher()
    }
}

This is a pretty direct translation from our naive example, and it’s easy to figure out what’s happening. I purposely chose this because it’s what those new to Combine will likely think to do when hearing about @Published, myself included. It’s a bit more verbose, but we constructed valid pipelines, logged errors (albeit with a helper function) and guaranteed the threading behavior we wanted.

Better? Or Worse…

However, I’ve glossed over a pretty big problem with this implementation: it doesn’t actually work. We’ve defined our properties as non-optional, so when we create this type, each property must contain a value. However, we don’t have initial values for these complex data types.

So let’s change this to actually work, using optional properties where needed:

// From page "02 - Combine First Try"
class FacadeProvider {

    @Published private var token: String?
    @Published private var A: MyDataSource?
    @Published private var B: MyDataSource?
    @Published private var C: MyDataSource?
    @Published private var userData: MyUserData?

    private var cancellables: [AnyCancellable] = []

    func getAllDataSources(userName: String) -> AnyPublisher<MyDataSourceFacade, Never> {

        cancellables = []

        getTokenPublisher()
            .logError()
            .subscribe(on: backgroundQueue)
            .assign(to: \.token, on: self)
            .store(in: &cancellables)

        $token
            .ignoreNil()
            .tryMap { getDataSourceA($0) }
            .logError()
            .subscribe(on: backgroundQueue)
            .assign(to: \.A, on: self)
            .store(in: &cancellables)

        $A
            .ignoreNil()
            .tryMap { $0.getData(for: userName) }
            .logError()
            .subscribe(on: backgroundQueue)
            .assign(to: \.userData, on: self)
            .store(in: &cancellables)

        let userAndTokenPub = $userData.ignoreNil().combineLatest($token.ignoreNil())

        userAndTokenPub
            .tryMap { getDataSourceB($0.0, $0.1) }
            .logError()
            .subscribe(on: backgroundQueue)
            .assign(to: \.B, on: self)
            .store(in: &cancellables)

        userAndTokenPub
            .tryMap { getDataSourceC($0.0, $0.1) }
            .logError()
            .subscribe(on: backgroundQueue)
            .assign(to: \.C, on: self)
            .store(in: &cancellables)

        return $userData.combineLatest($A, $B, $C)
            .compactMap { (userData, A, B, C) -> MyDataSourceFacade? in
                guard let userData = userData,
                      let A = A,
                      let B = B,
                      let C = C else {
                    return nil
                }

                return MyDataSourceFacade(userData, A, B, C)
            }
            .subscribe(on: backgroundQueue)
            .receive(on: DispatchQueue.main)
            .eraseToAnyPublisher()
    }
}

This is starting to get messy. Not to mention that our error handling could use some improvement. In this implementation, the caller of this function will never receive an Error, because the Publisher they’re returned is only connected to the @Published properties (whose Failure types are Never). This is a problem because if any setup goes awry and the process needs to start over, the caller will just wait quietly for a value/error that will never come. That’s obviously not ideal.

Wield the Pipeline(s)

The problem here is with how we’ve decided to model the problem with Combine. We did something that seemed natural to a developer who has worked almost exclusively with procedural code, which I’d bet is most of us in the iOS/Mac developer community. But that’s not what Combine is made for. We need to model this as a reactive stream: multiple signals that come together to give a complex output value.

Here’s a more “Combine-flavored” solution:

// From page "03 - Combine Flavored"
func getAllDataSources(userName: String) -> AnyPublisher<MyDataSourceFacade, Error> {

    let tokenPub = getTokenPublisher()

    let APub = tokenPub
        .tryMap { getDataSourceA($0) }

    let userDataPub = APub
        .tryMap { $0.getData(for: userName) }

    let userAndTokenPub = userDataPub.combineLatest(tokenPub)

    let BPub = userAndTokenPub
        .tryMap { getDataSourceB($0.0, $0.1) }

    let CPub = userAndTokenPub
        .tryMap { getDataSourceC($0.0, $0.1) }

    return userDataPub.combineLatest(APub, BPub, CPub)
        .compactMap { (userData, A, B, C) -> MyDataSourceFacade? in
            print("Returning facade")
            return MyDataSourceFacade(userData, A, B, C)
        }
        .subscribe(on: backgroundQueue)
        .receive(on: DispatchQueue.main)
        .eraseToAnyPublisher()
}

This is so much better! No more managing subscriptions with AnyCancellable! No more assigning to properties! We’re returning errors properly to the caller! But there is one wrinkle in this code that can trip you up. When we run this, we notice in our server logs that we’re contacting the auth server six times for the token every time we create a facade. Huh, that’s weird… Let’s take a look at why this is happening.

Above is a diagram of what we expected our above code to do, and what actually happened. On the left, we see the simple data flow we intended to define in the previous code sample, where each value is only created once. On the right, we see the actual outcome, where every intermediate value is being duplicated for every receiver. This is because the publishers we defined are only “recipes” for creating Subscriptions.

Subscriptions are the actual data-flow connections that are created when a subscriber connects to the pipeline. The subscription process happens in reverse, against the flow of data. By default, publishers don’t know about their existing subscriptions; they must create a new subscription to their upstream source each time they receive a downstream connection. That’s what you want in most cases, where stateless, value-type semantics offer safety and convenience, but in our case we only need these intermediate publishers to load their data a single time.
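Here’s a minimal, self-contained sketch of that behavior (using Apple’s Combine framework, so it runs on Apple platforms): without .share(), each subscriber triggers its own upstream work; with it, one upstream subscription feeds everyone.

```swift
import Combine

// Publishers are recipes: every new subscription re-runs the upstream work.
var upstreamRuns = 0
let subject = PassthroughSubject<Int, Never>()
let expensive = subject.map { value -> Int in
    upstreamRuns += 1 // stands in for an expensive fetch or transform
    return value * 2
}

var cancellables = Set<AnyCancellable>()

// Two subscribers to the bare pipeline: the map runs once per subscriber.
expensive.sink { _ in }.store(in: &cancellables)
expensive.sink { _ in }.store(in: &cancellables)
subject.send(1)
print(upstreamRuns) // 2: the work was duplicated

// Two subscribers to a shared pipeline: one upstream run, multicast to both.
cancellables = [] // drop the earlier subscriptions
upstreamRuns = 0
let shared = expensive.share()
shared.sink { _ in }.store(in: &cancellables)
shared.sink { _ in }.store(in: &cancellables)
subject.send(2)
print(upstreamRuns) // 1
```

One caveat with share(): late subscribers only see values emitted after they attach, which is why it matters that we wire up the whole pipeline before any values start flowing.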

Spread the Word with Reference-type Publishers

Luckily Combine has a solution for this: class-type publishers like Share, Multicast, and Autoconnect. Share is the simplest to use since (per Apple’s documentation) it’s “effectively a combination of Multicast and PassthroughSubject, with an implicit .autoconnect().” We’ll update our re-used publishers to use .share() so they can publish to multiple downstreams.

// From page "04 - Shared Publishers"
func getAllDataSources(userName: String) -> AnyPublisher<MyDataSourceFacade, Error> {

    let tokenPub = getTokenPublisher()
        .share()

    let APub = tokenPub
        .tryMap { getDataSourceA($0) }
        .share()

    let userDataPub = APub
        .tryMap { $0.getData(for: userName) }
        .share()

    let userAndTokenPub = userDataPub.combineLatest(tokenPub)
        .share()

    let BPub = userAndTokenPub
        .tryMap { getDataSourceB($0.0, $0.1) }

    let CPub = userAndTokenPub
        .tryMap { getDataSourceC($0.0, $0.1) }

    return userDataPub.combineLatest(APub, BPub, CPub)
        .compactMap { (userData, A, B, C) -> MyDataSourceFacade? in
            print("Returning facade on \(Thread.current.description)")
            return MyDataSourceFacade(userData, A, B, C)
        }
        .subscribe(on: backgroundQueue)
        .receive(on: DispatchQueue.main)
        .eraseToAnyPublisher()
}

 

And that’s it! For real this time. Sorry for the deception, but I wanted to present this in a realistic, iterative-problem-solving way, so you could directly see what sort of issues you may run into when using Combine in the real world.

In fact, this blog post is almost exactly the path I took in a recent project (minus a lot of frustration and soul-searching along the way). But once I had a breakthrough on how to use Combine “The Right Way,” I was honestly giddy. And I never use that word (it sounds gross to me). So I felt the need to share and hopefully help anyone else out there struggling to take their first steps into the reactive world.

The post Learning how to think in pipelines with Combine appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/learning-how-to-think-in-pipelines-with-combine/feed/ 0
Be a square – create custom shapes with SwiftUI https://bignerdranch.com/blog/be-a-square-create-custom-shapes-with-swiftui/ https://bignerdranch.com/blog/be-a-square-create-custom-shapes-with-swiftui/#respond Wed, 28 Apr 2021 14:32:27 +0000 https://bignerdranch.com/blog/be-a-square-create-custom-shapes-with-swiftui/ Shapes out of the box This Swift UI tutorial will introduce you to the techniques and code needed to add custom shapes in Swift UI’s framework. We can create our own shape by drawing the Path ourselves, and these custom shapes are used to display a task that is running or to show feedback to […]

The post Be a square – create custom shapes with SwiftUI appeared first on Big Nerd Ranch.

]]>
Shapes out of the box

This SwiftUI tutorial will introduce you to the techniques and code needed to add custom shapes with SwiftUI’s framework. We can create our own shape by drawing the Path ourselves; custom shapes like these are often used to show that a task is running or to give the user feedback when they interact with an element on the screen.

SwiftUI gives us some powerful tools out of the box, shapes being one of them. Apple provides shapes like Capsule, Circle, Ellipse, Rectangle, and RoundedRectangle. Shape is a protocol that conforms to the Animatable and View protocols, which means we can configure a shape’s appearance and behavior. But we can also create our own shape with the power of the Path struct! A Path is simply an outline of a 2D shape that we will draw ourselves. If you’re thinking, “OK, but how is this practical?”: custom shapes and animations are used to display a task that is running or to show feedback to the user when interacting with an element on the screen. Here’s where we’re going, and we’ll get there by building the vehicle body, adding some animation and styling, then adding the sunset behind it. Let’s get started!

Plotting out points

Since we’re working on an iOS application, the origin of CGRect will be in the upper-left, and the rectangle will extend towards the lower-right corner. To build our shape we’re going to start the origin in the bottom-left corner and work clockwise. You can read the official Apple Documentation for more details on CGRect.
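A quick sketch of what those coordinates mean in practice, using the min/max accessors we’ll lean on below (the rect size here is arbitrary):

```swift
import Foundation // brings in CGRect/CGPoint

let rect = CGRect(x: 0, y: 0, width: 150, height: 100)

// Top-left origin: y grows downward, so maxY is the *bottom* edge.
let topLeft     = CGPoint(x: rect.minX, y: rect.minY)
let bottomLeft  = CGPoint(x: rect.minX, y: rect.maxY) // where our path will start
let bottomRight = CGPoint(x: rect.maxX, y: rect.maxY)

print(bottomLeft.x, bottomLeft.y) // 0.0 100.0
```

Keeping points expressed in terms of rect.minX/maxX/minY/maxY (rather than hard-coded numbers) lets the shape scale with whatever frame SwiftUI hands it.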

Based on this we can plan our shapes before fumbling around with numbers and CGPoint values. For this example, we’ll build a vehicle and animate it to look like it’s moving. I’ve drawn out the frame of the vehicle using Path, and we’ll use the Circle shape to make the wheels and hubcaps. Again, here is what it will look like:

A simple car shape of body and two wheels viewed from the side. Its body is a curved wedge tapering to the right. Each wheel is made of an outer tire circle and an inner hubcap circle.

Create a struct that conforms to the Shape protocol. In it we need to add the func path(in rect: CGRect) -> Path method. This is what allows us to draw our shape.

struct VehicleBody: Shape {
    // 1.
    func path(in rect: CGRect) -> Path {
        // 2.
        var path = Path()
        // 3.
        let bottomLeftCorner = CGPoint(x: rect.minX, y: rect.maxY)
        path.move(to: bottomLeftCorner)
        // 4.
        path.addCurve(to: CGPoint(x: rect.maxX, y: rect.maxY * 0.7),
                      control1: CGPoint(x: rect.maxX * 0.1, y: rect.maxY * 0.1),
                      control2: CGPoint(x: rect.maxX * 0.1, y: rect.maxY * 0.4))

        path.addCurve(to: CGPoint(x: rect.maxX * 0.8, y: rect.maxY),
                      control1: CGPoint(x: rect.maxX * 0.9, y: rect.maxY),
                      control2: CGPoint(x: rect.maxX, y: rect.maxY))

        // 5.
        path.closeSubpath()
        // 6.
        return path
    }
}

Code breakdown

Let’s go through what’s happening in the code.

  1. Within our struct, we need to define the function path(in:), which is required by the Shape protocol. This returns a Path which we will create. It takes a CGRect parameter that will help us lay out our shape.
  2. Add a local variable called path that is a Path. Remember a Path is the outline of a 2D shape.
  3. Tell the path where our starting point will be using the move(to: CGPoint) function. Here is where our parameter CGRect will help us find our starting point. Thinking in terms of a grid or coordinates, we want our shape to start at the bottom-left corner. A CGRect is a structure that contains the location and dimensions of a rectangle, and a CGPoint is a structure that contains a point in a two-dimensional coordinate system. For iOS the bottom-left corner of a CGRect is the minX or 0, and maxY or the largest value of y on the coordinate system.
  4. Let’s add two curves that will serve as the back and front of our vehicle. path has a function called addCurve, and it does exactly what the name says: it adds a cubic Bézier curve to the path with specified end and control points. The endpoint is where you want the curve to end, and the curve starts from our move(to:) point, (rect.minX, rect.maxY). control1 and control2 determine the curvature of the segment. addCurve must be called after move(to:) or after a previously created segment; if the path is empty, this method does nothing. This method can seem overwhelming at first, so I’d suggest reading Apple's official documentation. If you’re wondering how I ended up with these control points, I simply changed each point until I was happy with the shape. Feel free to modify these points in your own shape. This is what the curves should look like:
     A basic outline of a car.
  5. We can then close off our shape’s path by calling closeSubpath(). This will create a straight-line segment from the last to the first point of our shape.
  6. Finally, return our completed path.

The hard part is over

Now that we have our frame, let’s add some wheels using a shape we get for free. If you haven’t guessed it already, we’re going to use the Circle shape for our wheels. In order to line things up correctly, we need to layout our view with a few ZStacks. Let’s create a new struct that we’ll build our vehicle parts in.

struct Vehicle: View {
    var body: some View {
        // 1.
        ZStack {
            // 2.
            VStack(spacing: -15) {
                // 3.
                VehicleBody()
                // 4.
                HStack(spacing: 30) {
                    // Back wheel
                    ZStack {
                        Circle()
                            .frame(width: 30, height: 30)
                        Circle()
                            .fill(Color.gray)
                            .frame(width: 20, height: 20)
                    }
                    // Front wheel
                    ZStack {
                        Circle()
                            .frame(width: 30, height: 30)
                        Circle()
                            .fill(Color.gray)
                            .frame(width: 20, height: 20)
                    }
                }            
            }
            // 5.
            .frame(width: 150, height: 100)
        }
    }
}

Code breakdown

  1. We want our shapes to overlap some so our wheels aren’t floating beneath the vehicle frame. Using a ZStack allows us to overlap views.
  2. Now a ZStack isn’t enough to put our parts in the correct placement. Adding a VStack will stack our frame and wheels, vertically. We can then adjust the spacing to line our wheels up so half their height aligns with the bottom of the frame.
  3. Add the VehicleBody()
  4. Let’s create our wheels. Each wheel will have a tire and hubcap appearance. First, we know that they will be horizontally aligned, so wrap them in an HStack and give it a spacing of 30. Next, wrap each wheel in a ZStack so we can place the hubcap on top of the tire. First add the tire with Circle() and give it a frame with a width and height of 30. Then, add the hubcap with a width and height of 20, and give it a gray fill color so we can see it over the tire. Repeat this for the second wheel.
  5. Set a fixed-size frame for the Vehicle view.

Lights, camera, animation!

Now that we have the frame and wheels of our vehicle we’re going to add some animations and ride off into the sunset.

Let’s animate!

Since we’ve just built a sweet vehicle that looks like it can handle some off-roading, I think our suspension should animate to show that. We don’t need a lot of code to make this happen, but we need to take care to animate the right elements. For this our animation will be on the parent VStack of the VehicleBody. We need to add a @State property to tell our view to animate, and two modifiers after the frame modifier of the VStack placing the wheels relative to the body:

struct Vehicle: View {
    // 1.
    @State var isPlayingAnimation: Bool = false

    var body: some View {
        ZStack {            
            VStack(spacing: -15) {
                ...VehicleBody()
                ...HStack(spacing: 30)
            }
            // 2.
            .offset(y: isPlayingAnimation ? -3 : 0)
            // 3.
            .animation(Animation.linear(duration: 0.5).repeatForever(autoreverses: true))
        }
    }
}

Code breakdown

  1. Add a @State property to manage our animation offset y position just above var body: some View.
  2. Add the offset modifier to change the y position of our VStack. Place this just after the .frame modifier. We want the vehicle to move up and down like a bouncing effect.
  3. Call the animation modifier with a linear type. Finally, add the .repeatForever(autoreverses: true) function so our vehicle will appear to bounce…forever.

We’re going to add the same functions to the HStack that contains our Circle shapes, but we’ll change the y position and animation duration slightly. This will give us a nice suspension effect.

.offset(y: isPlayingAnimation ? -2 : 0)
.animation(Animation.linear(duration: 0.4).repeatForever(autoreverses: true))

Ah, the sunset

We’ll add one more shape to create our sunset, and then we’ll style our vehicle a bit. Our sunset will be in the shape of a Circle. Let’s add it directly inside our top ZStack.

Circle()
    .fill(LinearGradient(gradient: Gradient(colors: [.yellow, .red, .clear, .clear]), startPoint: .top, endPoint: .bottom))
    .frame(width: 130, height: 130)
    .shadow(color: .orange, radius: 30, x: 0, y: 0)

I’ve added some style to my vehicle, but feel free to style yours however you’d like. Here’s mine:

.fill(LinearGradient(gradient: Gradient(colors: [.purple, .red, .orange]), startPoint: .topTrailing, endPoint: .bottomLeading))

Lastly, in order to see our animation work, we need to add the onTapGesture function to our top ZStack and inside the closure toggle the isPlayingAnimation bool. Now we can interact with our animation simply by tapping it.

.onTapGesture {
    self.isPlayingAnimation.toggle()
}

You can see the animation right inside the canvas preview of Xcode by pressing the play button above the preview device, or build and run on a simulator.

Conclusion

Our example shows just how easy it is to create a custom shape in SwiftUI. We barely scratched the surface of what we can do here, so I encourage you to explore some of the other functions in Path such as addArc or addQuadCurve. For example, try using quad curves to build a vehicle with more rounded corners.
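As a starting point for that experiment, here’s a hedged sketch (the struct name is mine, not from the article) of a body built from a single quadratic curve instead of two cubic ones:

```swift
import SwiftUI

struct RoundedVehicleBody: Shape {
    func path(in rect: CGRect) -> Path {
        var path = Path()
        path.move(to: CGPoint(x: rect.minX, y: rect.maxY))
        // One control point above the midpoint arcs the roof from back to front.
        path.addQuadCurve(to: CGPoint(x: rect.maxX, y: rect.maxY),
                          control: CGPoint(x: rect.midX, y: rect.minY))
        path.closeSubpath()
        return path
    }
}
```

A quadratic curve takes a single control point, so it’s a gentler way to experiment before juggling the two control points that addCurve requires.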

The post Be a square – create custom shapes with SwiftUI appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/be-a-square-create-custom-shapes-with-swiftui/feed/ 0
Auditing Your App for Accessibility https://bignerdranch.com/blog/auditing-your-app-for-accessibility/ Tue, 09 Mar 2021 16:20:19 +0000 https://www.bignerdranch.com/?p=4737 Getting started with accessibility in your app may be intimidating at first, but there are some handy tools out there to get you started.

The post Auditing Your App for Accessibility appeared first on Big Nerd Ranch.

]]>
Adding accessibility to your app may be an intimidating task, but there are some handy tools out there to get you started quickly. There are a variety of ways to approach your accessibility implementation, but the first step is to perform an audit on your app. By auditing your app, you can identify all of the improvement areas and outline the items needed to make your app fully accessible.

When it comes to auditing your app, there are a variety of tools and methods available to help. Generally, you’ll want to spend time getting familiar with accessibility features and navigating through your app as one of your users. Xcode also provides the Accessibility Inspector, which is a handy way to see what accessibility settings might be missing on User Interface (UI) elements.

Manually Auditing

It’s best to explore the different accessibility options on your iOS device before anything else. Getting familiar with how users navigate apps using assistive technologies is essential to understanding where you should improve your app.

VoiceOver

When thinking about accessibility, the first thing that comes to mind is VoiceOver. iOS does a decent job at making labels, buttons, and sometimes images accessible by default, but there is still room for improvement. To tell how accessible your app truly is, enable VoiceOver on your device and run through it as a user. It’s important to note both which elements are focusable (they’ll appear with a box around them when focused and be read aloud) and the overall flow of the focus. To get a full example of using the app as a user, it’s helpful to enable the Screen Curtain feature, which will black out your screen while allowing you to use VoiceOver gestures. Once you have VoiceOver on, the easiest way to toggle Screen Curtain is the three-finger triple-tap gesture.

Try experimenting with different VoiceOver rotor options to see how elements are focused. If you’re unfamiliar with the rotor, “Getting Familiar with iOS VoiceOver’s Rotor” will get you started.

Display & Text Size

VoiceOver isn’t the only accessibility feature you should explore. “Display & Text Size” is another prominent feature and might be even more widely used than VoiceOver. Some users may be able to see but have a difficult time with small fonts on their devices. In the iOS Accessibility settings, there is an option to increase text size significantly. However, UILabels do not handle this out of the box. To test different font sizes:

  1. Navigate to Settings
  2. Select “Display & Text Size”
  3. Select “Larger Text”

Here you can change the text size with the slider at the bottom of the screen. To use even larger sizes, enable the “Larger Accessibility Sizes” toggle. Now the slider should show an even larger range of sizes.

Returning to your app, take note of which labels, if any, respond to this setting. Check whether text gets cut off, truncated, or renders incorrectly. Buttons should also scale their text size per the settings. It may be an edge case, but be sure to check the largest size together with the “Bold Text” option (also on the “Display & Text Size” screen) to cover the most demanding combination. If your app looks good and functions at the largest size, the sizes in between are likely to work as well.
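To support Dynamic Type in code, a UILabel needs a text-style font and the adjustsFontForContentSizeCategory flag. Here is a minimal sketch, assuming a hypothetical label; apply the same settings to your own labels:

```swift
import UIKit

// A hypothetical label opted into Dynamic Type.
let label = UILabel()
label.text = "Hello, accessibility!"
// Use a text style font so the system can scale it to the user's preferred size.
label.font = UIFont.preferredFont(forTextStyle: .body)
// Automatically update the font when the user changes their text size setting.
label.adjustsFontForContentSizeCategory = true
// Let the label wrap instead of truncating at large sizes.
label.numberOfLines = 0
```

With these settings in place, the label should grow and shrink as you move the slider described above.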

Tip: Try using “Smart Invert” to find items that aren’t inverting but should and vice versa. The more you track down and fix, the more useful your app will be for accessibility users!

Voice Control

Voice Control is a handy feature that can also aid you in assessing accessibility. It assumes that users have sight and voice and can use their voice to navigate their device using commands like “Swipe left,” “Tap,” etc. There’s also an “Overlay” option where you can enable “Item Names” which provides a nice at-a-glance view of what accessibility sees.
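Voice Control matches what users say against each element’s accessibility names. If a visible title is long or awkward to speak, you can supply alternative spoken names with `accessibilityUserInputLabels` (available since iOS 13). A sketch, using a hypothetical button:

```swift
import UIKit

// A hypothetical button a Voice Control user might want to activate.
let submitButton = UIButton(type: .system)
submitButton.setTitle("Submit Order", for: .normal)
// Alternative names the user can speak, e.g. "Tap Submit" or "Tap Send".
// The first entry is what the "Item Names" overlay displays.
submitButton.accessibilityUserInputLabels = ["Submit Order", "Submit", "Send"]
```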

Using Accessibility Inspector

Manually testing your app to see what is accessible is essential, but there are also tools available that can speed up your assessment by identifying the low-hanging fruit. The Accessibility Inspector can help you step through your app and scan all of the accessibility settings on UI elements for issues to fix. You can find the Accessibility Inspector in Developer Tools:

  1. Open the “Xcode” menu in the menu bar
  2. Select “Open Developer Tool”
  3. Select “Accessibility Inspector”

In order to use the Accessibility Inspector to check your app, you’ll have to connect your device to your Mac or launch your app in a simulator. Once it’s connected, it will show as an option in the device dropdown.

Once you’ve chosen the device or simulator, the Accessibility Inspector will show all of the basic settings, actions, element details, and UI element hierarchy. For the purposes of accessibility, we’re going to focus on the “Basic” section. In there, you can see the values set for the accessibilityLabel, accessibilityValue, accessibilityTraits, and the accessibilityIdentifier. By pressing the “Audio” button, you can also hear the label read aloud. Keep in mind that this isn’t always exactly what VoiceOver speaks on a device! For one thing, it does not voice or show the accessibilityHint.
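The properties shown in the “Basic” section are the same ones you set in code. Here is a sketch of configuring them on a hypothetical image view (the asset name and identifier are illustrative):

```swift
import UIKit

// A hypothetical image view on a detail screen.
let hatImageView = UIImageView(image: UIImage(named: "cowboy-hat"))
// Mark the image as an accessibility element so VoiceOver can focus it.
hatImageView.isAccessibilityElement = true
// What VoiceOver speaks when the element is focused.
hatImageView.accessibilityLabel = "Cowboy hat"
// Describes the element's role to assistive technologies.
hatImageView.accessibilityTraits = .image
// Used by UI tests and the Accessibility Inspector; not spoken to users.
hatImageView.accessibilityIdentifier = "detail.hatImageView"
```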

There are multiple options available for navigating through the focusable elements in your app:

  • Enable the Inspection Pointer, which allows you to hover over elements in the simulator.
  • Use the Next/Previous element buttons to manually move through elements in focus order.
  • Press the Play button to automatically move through elements as VoiceOver finishes voicing each one.
  • Toggle the audio button off to navigate through the elements without any audio.

By going through your app with the Accessibility Inspector, you can verify accessibility settings and perform a more in-depth audit of what is missing.

Inspector Audit

Last but not least, the Accessibility Inspector also has an audit feature! This goes through all of the UI elements on the screen and lets you know which ones may be inaccessible with VoiceOver. It also lets you know which elements don’t support Dynamic Text.

Simply switching to the audit screen and pressing “Run Audit” gives you a list of issues to fix. Above, you can see that the audited screen is missing accessibility settings on multiple UI elements, and its labels don’t allow dynamic sizing. The eye button next to each item shows you which UI element the warning refers to. Next to that is a button with a question mark in a circle that provides suggestions on how to fix the issue.

Fixing all of these warnings is a great start to making your app more accessible! As you fix issues, you can re-run the audit periodically to check what is left to fix. These tools are a great starting point, but it’s critical to also manually test your app as a user to verify things work as expected when using accessibility options.

Inspector Settings

You might have noticed that there’s a settings button available in the Accessibility Inspector as well. These settings allow you to make real-time adjustments to your app to test accessibility. For example, if you enable Dynamic Text on a label, using the font size slider will change the size of text on the simulator immediately.

Conclusion

Making your app accessible isn’t as daunting as it seems. With all of the tools available and some time to audit your app, you can identify significant usability improvements. Implementing these improvements is a huge win for your users and makes your app more user-friendly for everyone!

Ready to start implementing VoiceOver? “Implementing VoiceOver with a Custom Rotor” provides an example of how to integrate your app with VoiceOver’s rotor.

The post Auditing Your App for Accessibility appeared first on Big Nerd Ranch.

]]>
SwiftUI or UIKit: Which is Right For You? https://bignerdranch.com/blog/learning-apples-swiftui-or-uikit-which-one-is-right-for-you-right-now/ Tue, 02 Mar 2021 17:54:36 +0000 https://www.bignerdranch.com/?p=4729 SwiftUI, Apple's new declarative programming framework, was introduced along with iOS 13 in September 2019. As a student rolling into 2021 you may be wondering if you should start your iOS development journey with SwiftUI or pick the tried and true UIKit? Spoiler alert: it depends.

The post SwiftUI or UIKit: Which is Right For You? appeared first on Big Nerd Ranch.

]]>
SwiftUI, Apple’s new declarative programming framework, was introduced along with iOS 13 in September 2019. As a student rolling into 2021 you may be wondering if you should start your iOS development journey with SwiftUI or pick the tried and true UIKit? Spoiler alert: it depends.

UIKit

UIKit provides a variety of objects which you can use to develop apps for iOS. These objects, such as UIView and its subclasses, allow for the display and interaction of content within your app. UIKit apps generally make use of the Model-View-Controller (MVC) design pattern.

UIKit has been the backbone of UI development on iOS for over a decade. It is a mature platform that sees use in just about every iOS application in existence. Since it is well established, there is an abundance of resources available in case you get stuck or have questions.

UIKit apps can be built in a few different ways:

  • Leveraging Interface Builder in order to design a UI without writing any code. Interface Builder is integrated into Xcode and allows for editing of .storyboard and .xib files, both of which describe layouts using XML.
  • A code focused approach where views and layout constraints are defined in Swift (or Objective-C).
  • A mix of the above two approaches.

SwiftUI

SwiftUI is Apple’s new declarative programming framework used to develop apps for iOS and Mac using Swift. The declarative approach is the key differentiator when comparing SwiftUI to UIKit. In UIKit you mediate the relationship between events and changes to presentation. With SwiftUI the need to mediate that relationship disappears since that is handled by the framework itself.

As far as building apps with SwiftUI, things are a bit more streamlined when compared to UIKit:

  • Xcode displays the visual editor alongside any file that contains a SwiftUI view, displaying a live representation of the view you are building. You can still interactively design on the canvas, just like in Interface Builder.
  • .storyboard and .xib files are not used in SwiftUI. The Swift code itself describes the layout rather than these opaque XML files.
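For example, a simple screen can be described entirely in Swift (the view and its contents here are illustrative):

```swift
import SwiftUI

// A layout declared directly in code — no .storyboard or .xib involved.
struct GreetingView: View {
    var body: some View {
        VStack(spacing: 8) {
            Text("Hello, SwiftUI!")
                .font(.headline)
            Button("Tap me") {
                print("Button tapped")
            }
        }
    }
}
```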

Here are some additional notes and caveats:

  • Widgets built using WidgetKit are required to use SwiftUI. This is the only case as of now (iOS 14), but given Apple’s technology and API history, we anticipate a time will come when you will need to use SwiftUI in order to leverage the latest Apple features.
  • Swift developers can take advantage of Apple’s native libraries and design elements across all of its platforms. For example, you can’t do a native UI on watchOS if you don’t use SwiftUI.
  • SwiftUI requires iOS 13 or later. It might not be right for apps where backwards compatibility is important.
  • It wasn’t until iOS 14 and the introduction of the App and Scene protocols that SwiftUI became viable for building whole apps.

Using SwiftUI and UIKit Together

It is important to keep in mind that UIKit and SwiftUI are not mutually exclusive. It is possible to use UIKit code in a SwiftUI view and vice versa. Let’s take a look at some examples!

Using UIKit in SwiftUI

UIViewRepresentable is a protocol provided by the SwiftUI framework. Using this protocol, it is possible to wrap an instance of a UIKit view so that it can be displayed with SwiftUI.

Defining a UIKit view wrapper with UIViewRepresentable:

@available(iOS 13, *)
struct MyUIKitView: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        // Create, configure, and return the UIKit view here.
        let view = UIView()
        view.backgroundColor = .blue
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        // Update the state of the view here.
    }
}

Using the wrapped UIKit view in SwiftUI:

struct ContentView: View {
   var body: some View {
      VStack {
         Text("Hello from UIKit!")
         MyUIKitView()
      }
   }
}

Using SwiftUI in UIKit

UIHostingController is a view controller which allows for the display of SwiftUI views within a UIKit view hierarchy. It can be used just like any other view controller in UIKit:

let hostingController = UIHostingController(rootView: Text("Hello from SwiftUI!"))
present(hostingController, animated: true)
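Instead of presenting it, you can also embed a hosting controller as a child so the SwiftUI view becomes part of an existing UIKit layout. A sketch using standard view controller containment (the surrounding view controller and the full-bounds frame are assumptions):

```swift
import SwiftUI
import UIKit

// Inside a UIViewController subclass, e.g. in viewDidLoad():
let hostingController = UIHostingController(rootView: Text("Hello from SwiftUI!"))
addChild(hostingController)
hostingController.view.frame = view.bounds
view.addSubview(hostingController.view)
hostingController.didMove(toParent: self)
```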

See Apple’s Documentation for more information.

The Bottom Line

Here at Big Nerd Ranch, we’ve had ample experience working with both UIKit and SwiftUI. We believe that UIKit is foundational for any iOS developer building apps on Apple platforms now and into the future. If you’re unsure which to choose, we recommend starting there.

SwiftUI is a great choice assuming the minimum iOS version for the project is set to iOS 14. However, be aware that you may run into issues and limitations as you build your app. Having a foundation in UIKit would help you navigate these problems.

TL;DR

Learn UIKit. It’s a mature platform that can handle a wide range of user interfaces, from the simple to the complex. If you want to explore new frontiers, check out SwiftUI. Since they are not mutually exclusive, it’s even worth learning both!

The post SwiftUI or UIKit: Which is Right For You? appeared first on Big Nerd Ranch.

]]>