Core Graphics, Part 3: Lines

Update 7/02/18 (The Quartz API has undergone some radical changes over the years. We’re updating our popular Core Graphics series to work well with the current version of Swift, so here is an update to the third installment.)

In medias res? Check out Part 1 and Part 2 of our posts on Core Graphics.

Consider the humble line: just a straight sequence of pixels connecting two points. There are well-known algorithms you can use to do your own drawing, but these days, we have toolkits to do the hard work. In Core Graphics, a line is just a kind of path. Paths are central to many Core Graphics features, and next time you’ll get a lot of path information. But for now, think of lines as sequences of line segments that are stroked (not filled). There are a bunch of general GState parameters that affect lines (color, line width, shadows, transforms) as well as GState values dedicated to drawing lines.

All of the line images you see here were created by GrafDemo. You can find the source over on GitHub.

This is what the Lines window looks like, with an Objective-C NSView on the left and a Swift implementation on the right:

GrafDemo Lines window.

Recall that CG paths are just descriptions of a shape. They don’t actually contain any pixels. The GState controls how that path actually gets rendered, whether it’s filled or stroked in a view, an image or a PDF. The white line down the center of the stroked line is the ideal shape, and the blue line is the result of stroking it using the context’s settings.

There are four GState properties peculiar to stroked lines that control how they look: Join, Miter Limit, End Cap and Dash.

Join the Dark Side

The line join property controls what happens when a line turns a
corner, and is described by one of these enum values:

public enum CGLineJoin : Int32 {
    case miter  // default
    case round
    case bevel
}

You set it with this call:

context.setLineJoin(.miter)

The miter join has a point on it that sticks out. The round join puts a semicircle around the “knee” of the join, while the bevel has a flattened spot.

Demonstration of Mitered, Round, and Bevel join styles.

This figure has two places where the lines join. All line intersections in a single path use the same line join value, so if you want to mix and match join styles, you’ll need to set the line join in the context, draw one set of lines, set the line join to another value, then draw the other set of lines. You can’t mix and match within one drawing operation.

The Cheat is to the Limit

The round and bevel line joins are kind of boring: just a semicircle around the corner, or the corner gets chopped off. The miter, though, is cool. The miter join draws that pointy bit, with the length of the point changing depending on the angle the two lines make:

Animation showing the miter arrow growing very long.

One problem though—the pointy end can get pretty long if the angle between the two lines is very acute. There is another GState parameter that can control this: the Miter Limit. This is a CGFloat value that tells CG when to draw the pointy miter thing, or to turn the join into a bevel:

Animation showing the miter arrow turning into a bevel once the miter limit is exceeded.

The miter limit API is simple, assuming you know what the value passed in means:

context.setMiterLimit(5.0)

When deciding whether to miter or bevel, Quartz divides the length of the miter it is planning on drawing by the GState’s line width. Exceeding the miter limit means CG will use a bevel join for this intersection instead. Because the length of the miter is proportional to the line width (wider lines mean longer miters), the miter limit actually ends up being independent of line width—the terms cancel out. Once you have your drawing code tweaked such that it has good mitering/beveling behavior, you don’t have to worry about the line width changing.
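To make the arithmetic concrete: for a given join, the miter length works out to the line width divided by the sine of half the angle between the segments, so the ratio Quartz compares against the limit is 1 / sin(angle / 2). Here’s a quick sketch of that decision – not a CG API, just the math, with a made-up function name for illustration:

import Foundation

// A rough sketch of the miter-vs-bevel decision, not an actual Core Graphics call.
// joinAngle is the angle between the two segments at the corner, in radians.
func wouldBevel(joinAngle: Double, miterLimit: Double) -> Bool {
    // Miter length is lineWidth / sin(angle / 2); dividing by the line width
    // leaves 1 / sin(angle / 2), which is what gets compared against the limit.
    let miterRatio = 1.0 / sin(joinAngle / 2.0)
    return miterRatio > miterLimit
}

// With Quartz's default miter limit of 10, joins sharper than roughly 11.5 degrees bevel.
wouldBevel(joinAngle: .pi / 2, miterLimit: 10.0)   // false – a right angle stays mitered
wouldBevel(joinAngle: .pi / 32, miterLimit: 10.0)  // true – a very sharp angle gets beveled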

Heh Heh, he said “Butt”

Not only can you control what happens at the join, you can also control what happens when the lines begin and end. There are three Line Cap Styles:

public enum CGLineCap : Int32 {
    case butt   // default
    case round
    case square
}

There is one call to change the cap style:

context.setLineCap(.butt)

The butt cap does no extra drawing at the ends of lines. The round cap attaches a half-circle, and the square has a half-square at the end. The size of this extra stuff is proportional to the width of the line.

Demonstration of Butt, Round, and Square cap styles.

Like with line join styles, you can’t mix and match cap styles on a single line.

Dashing Through The Snow

The line join and cap concepts were inherited from PostScript, as is another cool property: the line dash.

A line dash is a repeating pattern specified by an array of floating-point “mark-space” values. Element zero is the length of the first part of the dash. Element one is the amount of blank space to leave. Element two is another length of line, and element three is another space, and so on. The pattern is repeated once CG (or PostScript) runs out of elements of this array.

Here’s a set of line segment lengths:

let pattern: [CGFloat] = [12.0, 8.0, 6.0, 14.0, 16.0, 7.0]

And the corresponding line pattern:

An illustration of the line dash pattern described by the lengths[] array.

Here’s a line drawn with this pattern:

A line drawn with the line dash pattern described by the lengths[] array.

The miter line-join style is being used here, so both of the angles are miter joins. The missing lower join is due to the dash pattern having a blank region where the join should be.

The dash pattern is anchored at the first point of the line:

An animation dragging one line intersection around, showing subsequent line segments changing their line pattern.

Each individual pattern section has the end cap property honored, so a dashed line with round or square end caps could lead to caps overlapping each other and forming one solid line.

Set Phasers to Stun

Here’s how you set the line phase:

context.setLineDash(phase: 0, lengths: pattern)

You pass it the segment-lengths pattern array along with a phase value. The phase tells Quartz where into the pattern to start interpreting the pattern. You can animate the line dash by calling setLineDash(phase:lengths:) with different phases.

This is the same line, but only the phase is being changed:

Animation showing the line dash starting at different points in the pattern.
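Here’s a minimal sketch of how that animation might be driven on iOS – the view class, its phase property, and the timer hookup are made up for illustration, not taken from GrafDemo:

import UIKit

// A sketch of the "marching ants" effect: redraw with an ever-increasing phase.
class MarchingAntsView: UIView {
    let pattern: [CGFloat] = [12.0, 8.0, 6.0, 14.0, 16.0, 7.0]
    var phase: CGFloat = 0

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.setLineWidth(3.0)
        context.setLineDash(phase: phase, lengths: pattern)
        context.stroke(bounds.insetBy(dx: 10.0, dy: 10.0))
    }

    // Call this from a Timer or CADisplayLink to march the dashes along.
    func advancePhase() {
        phase += 1
        setNeedsDisplay()
    }
}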

Construction Zone

Core Graphics provides a number of calls for creating line paths.

Even though I haven’t talked about path API yet, if you’ve ever used NSBezierPath or UIBezierPath, this first form should be somewhat familiar: Move to a point, and then add a new point indicating the end of a new line segment, forming a continuous path.

let points: [CGPoint] = [CGPoint(x: 0, y: 0), CGPoint(x: 23.5, y: 42.17), 
                         CGPoint(x: 33.333, y: 12.0)]

let path = CGMutablePath()
path.move(to: points[0])

for i in 1 ..< points.count {
    path.addLine(to: points[i])
}

currentContext.addPath(path)
currentContext.strokePath()

The next form takes an array of CGPoints, and internally performs the same kind of loop as you just saw. This also results in a single path.

let path = CGMutablePath()

path.addLines(between: points)

currentContext.addPath(path)
currentContext.strokePath()

A third way to draw the line is by stroking each individual line segment. Each segment will get its own end-cap, and have any line dash applied to it. There will be no mitering happening at line junctions because none of the lines are connected as far as CG is concerned.

for i in 0 ..< points.count - 1 {
    let path = CGMutablePath()

    path.move(to: points[i])
    path.addLine(to: points[i+1])

    currentContext.addPath(path)
    currentContext.strokePath()
}

The last form also draws individual segments.
CGContext.strokeLineSegments(between:) takes an array of pairs of points and draws a line segment starting at each even-indexed point X and ending at point X + 1. So, for three line segments it strokes lines from 0->1, 2->3, and 4->5.

GrafDemo’s data isn’t in a convenient form (being of the form 0->1->2->3), so some data shuffling needs to be done to turn this into an array like 0->1, 1->2, 2->3, and so on:

var segments: [CGPoint] = []

for i in 0 ..< points.count - 1 {
    segments += [points[i]]
    segments += [points[i + 1]]
}

// Strokes points 0->1 2->3 4->5
context.strokeLineSegments(between: segments)
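As an aside, the same shuffling can be written with zip, pairing each point with its successor. This is just an alternative spelling of the loop above, not what GrafDemo itself does:

// Pairs each point with the next one: 0->1, 1->2, 2->3, ...
let segments = zip(points, points.dropFirst()).flatMap { pair in [pair.0, pair.1] }
context.strokeLineSegments(between: segments)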

Performance

One last bit before wrapping up. Core Graphics can be pretty fast, but one issue it has is that overlapping lines in a single path can be computationally expensive. When Quartz renders a path, it can’t just say, “Ok, draw this segment. Now draw this segment.” without any other processing. Imagine you were stroking the line with a semi-transparent green color. If you blindly drew segments on top of each other, you would get darker colors as several layers of transparent green “paint” are overlaid. Before stroking a line segment, Quartz needs to figure out where the intersections are and not do any double drawing.

Here’s the effect of drawing a set of lines as one path or as multiple segments:

Side-by-side illustration showing line crossing drawing in a darker color due to transparency.

Keep an eye on your performance if you’ve got a bunch of overlapping lines—the intersection calculations (amongst all the other work Quartz does) are more than O(N) and can get pretty expensive with a large number of lines.

Next Time

All about paths. A Path! A Path!

Core Graphics, Part 2: Contextually Speaking

Update 7/02/18 (The Quartz API has undergone some radical changes over the years. We’re updating our popular Core Graphics series to work well with the current version of Swift, so here is an update to the second installment.)
In medias res? Check out part one of our posts on Core Graphics.

The context lies at the heart of Quartz: you need to interact with the current Core Graphics context in some manner to actually draw stuff, so it’s good to get comfortable with it, what it does, and why it’s there.

One fundamental operation in Core Graphics is creating a path. A path is a mathematical description of a shape. A path can be a rectangle, or a circle, a cowboy hat, or even the Taj Mahal. This path can be filled with a color—that is, all points within the path are set to a particular color. The path can also be outlined, a.k.a. stroked. This is like taking a calligraphy pen and drawing around the path leaving an outline. Here’s a hat that’s been stroked, filled, and then both filled with yellow and stroked in blue:

Hat path

As you can see, the actual outline can get pretty complex. It can be drawn in a particular color. The line can have a dash pattern. It could be stroked with a wide line or a narrow line. The ends of lines can have square or round ends, and on and on and on. That’s a lot of attributes.

If you peruse the Core Graphics API, you won’t see a call that takes all the settings:

CoreGraphics.stroke(path: path, color: green, lineWidth: 2.0,
                    dashPattern: dots, bloodType: .oPositive, endCap: .miter)

Instead, you have this call:

context.strokePath()

Where do all those extra values come from, then? They come from the context.
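So a typical drawing sequence configures the context first and then issues a drawing call. A minimal sketch (the colors, dash pattern, and rectangle are arbitrary examples):

// context is a CGContext obtained from the UI toolkit (see "Getting the context" below).
context.setStrokeColor(red: 0.0, green: 0.5, blue: 0.0, alpha: 1.0)
context.setLineWidth(2.0)
context.setLineDash(phase: 0, lengths: [4.0, 2.0])

context.addRect(CGRect(x: 20, y: 20, width: 100, height: 60))
context.strokePath()   // uses whatever color, width, and dash the context holds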

Bucket of Bits

The context holds a pile of global state about drawing, a bunch of independent values:

  • current fill and stroke colors
  • line width and pattern
  • line cap and join (miter) styles
  • alpha (transparency), antialiasing and blend mode
  • shadows
  • transformation matrix
  • text attributes (font, size, matrix)
  • esoteric things like line flatness and interpolation quality
  • and more

That’s a lot of state. The entire set of state that Core Graphics maintains is undocumented, so there may be even more settings lurking under the hood. Different kinds of contexts (an image vs. a PDF, for example) may contain additional settings.

Whenever Core Graphics is told to draw something, such as “fill this rectangle,” it looks to the current context for the necessary bits of drawing info. The same sequence of code can have different results depending on what’s in the context. On one hand, this is very powerful: a generic bit of drawing code can be manipulated via the context into dramatically different results. On the other hand, the context is a big pile of global state, and global state is easy to mess up unintentionally.

Say you have code like this:

draw orange square:
    set color to orange in the current context
    fill a rectangle

You’ll end up with an orange square. Now assume you’re drawing a valentine too:

draw red valentine:
    set color to red in the current context
    fill a valentine

Yay! A red heart. Now say you add the valentine drawing code in your first function:

draw orange square:
    set color to orange in the current context
    draw red valentine
    fill a rectangle

Your rectangle will come out red instead of orange. Why? The valentine drawing code has clobbered the current drawing color. The color used to be orange by the time you filled the rectangle, but now it’s red. How can you avoid bugs like this?

There are two approaches. One way is to save off state before you change it—if you’re changing the global color, save off the current color, change it, do your drawing, and then restore it. That’s ok with one or two parameters, but doesn’t scale if you’re changing a dozen of them. There are also some context values that can get changed as side effects, so you’d have to account for those. Oh, and it’s actually impossible to do in Core Graphics because there are no getters for the context’s drawing settings. Sorry about that.

A stack of buckets

The other approach is to save the entire context before you change anything. Save the context, make your adjustments to the color or line width, do your drawing, and then restore the entire context. The Core Graphics API provides calls to save and restore the settings of the current context. These settings are known as the graphics state, or GState. A Core Graphics context keeps a stack of GStates behind the scenes.

Saving a context’s settings means you are pushing a copy of the settings on to the context’s stack. When you restore the graphics state, the previously saved GState gets popped off the stack and becomes the context’s current set of values, undoing any changes you may have made.

Changing the valentine drawing code like this fixes the “orange rectangle is red” bug:

draw red valentine:
    save graphics state
      set color to red in the current context
      fill a valentine
    restore graphics state

Then, the entire sequence of drawing calls will look like this:

    set color to orange in the current context
    save graphics state
      set color to red in the current context
      fill a valentine
    restore graphics state
    fill a rectangle

Here are the GState manipulations for this sequence of drawing calls. Time moves from left to right:

GState manipulations

Core Graphics API

There are a couple of flavors of the CG API. One is the Core Foundation / C-based version used in C, C++, and Objective-C. It uses pseudo-object opaque types such as CGContextRef or CGColorRef. It’s pretty old-school with a lot of C functions that take their “objects” as the first parameter, and then a pile of arguments afterwards. Swift has overlays that provide a Swifty API on top of the C API. The Swift API is what I’ll be talking about here and in future postings.

Getting the context

CGContext is the Swift type for a core graphics context. Usually you’ll get this context from your UI toolkit. In desktop Cocoa you ask NSGraphicsContext:

let context = NSGraphicsContext.current?.cgContext  // type is CGContext?

and UIKit:

let context = UIGraphicsGetCurrentContext()  // type is CGContext?

You can also get contexts that render into a bitmap image (check out UIGraphicsBeginImageContext and friends).
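For instance, a quick iOS sketch using UIGraphicsImageRenderer, a newer cousin of UIGraphicsBeginImageContext (the size and the drawing here are placeholder values):

import UIKit

// Draws into an off-screen bitmap; the renderer's context wraps a CGContext.
let renderer = UIGraphicsImageRenderer(size: CGSize(width: 100, height: 100))
let image = renderer.image { rendererContext in
    let context = rendererContext.cgContext
    context.setFillColor(red: 0.0, green: 1.0, blue: 0.0, alpha: 1.0)
    context.fillEllipse(in: CGRect(x: 10, y: 10, width: 80, height: 80))
}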

Once you have a context, you can do things like change the current color, change the line width, or tell it to stroke/fill specific shapes.

For example, this outlines a rectangle:

let context = ...  // CGContext
let bounds = someThing.bounds // CGRect
context.stroke(bounds)

We’ve got more information about rectangles (part 1, part 2) and paths for the more curious.

CGContext.stroke(_:) outlines a given rectangle. If the context is an image context, or the context used to render graphics on the screen, a rectangle-border’s worth of pixels will be laid down using the context’s current settings. If you’re drawing into a PDF context, then a couple of bytes of instructions are recorded to ultimately outline a rectangle when the PDF is rendered at some future time.

Context Hygiene

GrafDemo is a Cocoa desktop app that demonstrates various parts of Core Graphics for this series of postings. You can poke around that for examples of CG code.

GrafDemo’s Simple demo contains an NSView that draws a green circle, surrounded by a thick blue line, on a white background, with a thin black border around the entire view.

Good vs. sloppy drawing

There are two versions of the code: one that has good GState hygiene and one that doesn’t. Notice that in the sloppy version, the thick blue line leaks out and is contaminating the border. (When you run the program you’ll actually see two views side-by-side: one implemented in Objective-C and the other in Swift.)

There’s a convenience property for getting the current context from inside of an
NSView’s draw(_:) method:

extension NSView {
    var currentContext : CGContext {
        let context = NSGraphicsContext.current
        return context!.cgContext
    }
}

You can make a similar extension on UIView to unify accessing the current context.
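A sketch of what that might look like, assuming it’s only ever called from inside draw(_:) where UIKit has already set up a context:

import UIKit

extension UIView {
    var currentContext : CGContext {
        // Only valid during draw(_:); force-unwrapping mirrors the NSView version above.
        return UIGraphicsGetCurrentContext()!
    }
}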

Here’s the sloppy drawing method:

    func drawSloppily () {
        let context = currentContext
        context.setStrokeColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0) // Black
        context.setFillColor(red: 1.0, green: 1.0, blue: 1.0, alpha: 1.0) // White
        
        context.setLineWidth(3.0)
        
        drawSloppyBackground()
        drawSloppyContents()
        drawSloppyBorder()
    }

The background and border methods are pretty straightforward:

    func drawSloppyBackground() {
        currentContext.fill(bounds)
    }

    func drawSloppyBorder() {
        currentContext.stroke(bounds)
    }

They both assume the context is configured the same way that draw(_:) set it up. But! There is a problem:

    func drawSloppyContents() {
        let innerRect = bounds.insetBy(dx: 20.0, dy: 20.0)
        
        let context = currentContext
        context.setFillColor(red: 0.0, green: 1.0, blue: 0.0, alpha: 1.0) // Green
        context.fillEllipse(in: innerRect)
        
        context.setStrokeColor(red: 0.0, green: 0.0, blue: 1.0, alpha: 1.0) // Blue
        context.setLineWidth(6.0)
        context.strokeEllipse(in: innerRect)
    }

Notice the changes to the color and line width. The current context holds a pile of global state, so the existing fill and stroke color, and the existing line width, totally get clobbered.

Push-me Pull-you

The way to fix this problem is to save the graphics state before drawing the contents. CGContext.saveGState() pushes a copy of the context’s current graphics state onto a stack. CGContext.restoreGState() pops the top of the stack and makes it the context’s current settings.

Here’s a nicer version of the content drawing that saves the graphics state:

    func drawNiceContents() {
        let innerRect = bounds.insetBy(dx: 20.0, dy: 20.0)

        let context = currentContext
        context.saveGState()  // Push the current context settings

        context.setFillColor(red: 0.0, green: 1.0, blue: 0.0, alpha: 1.0) // Green
        context.fillEllipse(in: innerRect)
        
        context.setStrokeColor(red: 0.0, green: 0.0, blue: 1.0, alpha: 1.0) // Blue
        context.setLineWidth(6.0)
        context.strokeEllipse(in: innerRect)

        context.restoreGState()  // Pop them off and undo
    }

Wrapping a save/restoreGState around the drawing prevents this method from polluting other methods.

Scoping it out

Because this drawing happens inside of a “scope” defined by GState saving and restoring, I like to make that scope explicit in my code – this code is unambiguously protected by saving the GState, without having to scan for save/restoreGState calls. You can even see me making a scope via indentation in the orange-rectangle / red-heart example earlier.

I have another extension that wraps a GState push/pop in a closure:

import CoreGraphics

extension CGContext {
    func protectGState(_ drawStuff: () -> Void) {
        saveGState()
        drawStuff()
        restoreGState()
    }
}

Which makes the more hygienic drawing look like this:

    func drawNiceContents() {
        let innerRect = bounds.insetBy(dx: 20.0, dy: 20.0)
        let context = currentContext

        context.protectGState {
            context.setFillColor(red: 0.0, green: 1.0, blue: 0.0, alpha: 1.0) // Green
            context.fillEllipse(in: innerRect)
            
            context.setStrokeColor(red: 0.0, green: 0.0, blue: 1.0, alpha: 1.0) // Blue
            context.setLineWidth(6.0)
            context.strokeEllipse(in: innerRect)
        }
    }

Objective-C

OK, so what about Objective-C? GrafDemo has parallel Swift and Objective-C implementations, so feel free to peruse the sample code of your choice.

Back in the old days, Objective-C and Swift use of Core Graphics was nearly identical. That made sharing source code easy (copy and paste, search and replace semicolons, tweak your variable declarations). Swift 3 converted the old C-based API into a nice object-oriented API.

The actual operations are identical – save a gstate, set a color, make a path, stroke or fill it, restore a gstate. They’re just spelled differently.

Other Platforms

OK, so what about other Apple platforms? Core Graphics is a lower-level framework that lives below AppKit, UIKit (iOS and tvOS), and WatchKit. This means that your Core Graphics code can be pretty much identical on macOS and iOS. The main differences are how you initially get a context to draw into (which is easily hidden behind extensions), and some of the more esoteric functions are only available on macOS. You also don’t have easy cross-platform access to the higher-level abstractions (e.g. UIBezierPath / NSBezierPath and UIImage / NSImage).

The higher-level APIs do mix and match well with the lower-level ones. For example, you can push a GState, then use UIColor.purple.set() to change the drawing color, and then fill/stroke a path.
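A quick sketch of that mix-and-match (the rectangle is arbitrary, and this assumes context is the current UIKit context inside draw(_:)):

context.saveGState()

UIColor.purple.set()   // sets both the fill and stroke colors of the current context
context.fill(CGRect(x: 0, y: 0, width: 50, height: 50))

context.restoreGState()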

GState of the Union

This time, you met Core Graphics contexts, which are buckets of various drawing attributes. A context is an opaque structure, so you have no real idea what is really lurking inside. Because of this opaque state, and given the fact that some Core Graphics calls come with side effects, it’s impossible to save individual drawing attributes before changing them.

Core Graphics has the concept of a graphics state stack, which comes from PostScript. You can push a copy of the current graphics state onto a stack with CGContext.saveGState() and can undo any changes made to the context by popping the saved state with CGContext.restoreGState(). Got code that’s polluting the context so that subsequent drawing is wrong? Wrap it in a Save/Restore.

Core Graphics code is nearly identical across Apple’s platforms, so Core Graphics code can be pretty portable amongst the different parts of the Apple ecosystem.

Coming up next time: Lines! (as in, Lines-excitement-yay-happy-fun-times, not Lines-implicitly-unwrapped-optionals.)

Core Graphics, Part 1: In the Beginning

Update 03/06/18 (The Quartz API has undergone some radical changes over the years. We’re updating our popular Core Graphics series to work well with the current version of Swift, so here is an update to the first installment.)

Mac and iOS developers have a number of different programming interfaces to get stuff to appear on the screen. UIKit and AppKit have various image, color and path classes. Core Animation lets you move layers of stuff around. OpenGL lets you render stuff in 3-space. SpriteKit lets you animate. AVFoundation lets you play video.

Core Graphics, also known by its marketing name “Quartz,” is one of the oldest graphics-related APIs on the platforms. Quartz forms the foundation of most things 2-D. Want to draw shapes, fill them with gradients and give them shadows? That’s Core Graphics. Compositing images on the screen? Those go through Core Graphics. Creating a PDF? Core Graphics again.

CG (as it is called by its friends) is a fairly big API, covering the gamut from basic geometrical data structures (such as points, sizes, vectors and rectangles) and the calls to manipulate them, stuff that renders pixels into images or onto the screen, all the way to event handling. You can use CG to create “event taps” that let you listen in on and manipulate the stream of events (mouse clicks, screen taps, random keyboard mashing) coming in to the application.

OK. That last one is weird. Why is a graphics API dealing with user events? Like everything else, it has to do with History. And knowing a bit of history can explain why parts of CG behave like they do.

Just a PostScript In History

Back in the mists of time (the 1980s, when Duran Duran was ascendant), graphics APIs were pretty primitive compared to what we have today. You could pick from a limited palette of colors, plot individual pixels, lay down lines and draw some basic shapes like rectangles and ellipses. You could set up clipping regions that told the world, “Hey, don’t draw here,” and sometimes you had some wild features like controlling how wide lines could be. Frequently there were “bit-blitting” features for copying blocks of pixels around. QuickDraw on the Mac had a cool feature called regions that let you create arbitrarily-shaped areas and use them to paint through, clip, outline or hit-test. But in general, APIs of the time were very pixel oriented.

In 1985, Apple introduced the LaserWriter, a printer that contained a microprocessor that was more powerful than the computer it was hooked up to, had 12 times the RAM, and cost twice as much. This printer produced (for the time) incredibly beautiful output, due to a technology called PostScript.

PostScript is a stack-based computer language from Adobe that is similar to FORTH. PostScript, as a technology, was geared for creating vector graphics (mathematical descriptions of art) rather than being pixel based. An interpreter for the PostScript language was embedded in the LaserWriter so when a program on the Mac wanted to print something, the program (or a printer driver) would generate program code that was downloaded into the printer and executed.

Here’s an example of some PostScript code and the resulting image:

PostScript code and what it renders

You can find this project over on GitHub.

Representing the page as a program was a very important design decision. This allowed the program to represent the contents of the page algorithmically, so the device that executed the program would be able to draw the page at its highest possible resolution. For most printers at the time, this was 300dpi. For others, 1200dpi. All from the same generated program.

In addition to rendering pages, PostScript is Turing-complete, and can be treated as a general-purpose programming language. You could even write a web server.

Companion CuBEs

When the NeXT engineers were designing their system, they chose PostScript as their rendering model. Display PostScript, a.k.a. DPS, extended the PostScript model so that it would work for a windowed computer display. Deep in the heart of it, though, was a PostScript interpreter. NeXT applications could implement their screen drawing in PostScript code, and use the same code for printing. You could also wrap PostScript in C functions (using a program called pswrap) to call from application code.

Display PostScript was the foundation of user interaction. Events (mouse, keyboard, update, etc.) went through the DPS system and then were dispatched to applications.

NeXT wasn’t the only windowing system to use PostScript at the time. Sun’s NeWS (capitalization aside, no relation to NeXT) had an embedded PostScript interpreter that drove the user’s interaction with the system.

Gallons of Quartz

Why don’t OS X and iOS use Display PostScript? Money, basically. Adobe charged a license fee for Display PostScript. Also, Apple is well known for wanting to own as much of their technology stack as possible. By implementing the PostScript drawing model, but not actually using PostScript, they could avoid paying the license fees and also own the core graphics code.

It’s commonly said that Quartz is “based on” PDF, and in a sense that’s true. PDF (Adobe’s Portable Document Format) is the PostScript drawing model without the arbitrary programmability. Quartz was designed so that the typical use of the API would map very closely to what PDF supports, making the creation of PDFs nearly trivial on the platform.

Even though Display PostScript was replaced by Quartz, the same basic mechanisms were kept, including the event handling. Check out frame 18 from this Cocoa stack trace. DPS Lives!

Stack trace including DPSNextEvent

Basic Architecture

I’ll be covering more aspects of Quartz in detail in the coming weeks, but one of the big take-aways is that the code you call to “draw stuff” is abstracted away from the actual rendering of the graphics. “Render” here could be “make stuff appear in an NSView,” or “make stuff appear in a UIImage,” or even “make stuff appear in a PDF.”

All your CG drawing calls are executed in a “context,” which is a collection of data structures and function pointers that controls how the rendering is done.

App, context and rendered output

There are a number of different contexts, such as (on the Mac) NSWindowGraphicsContext. This particular context takes the drawing commands issued by your code and then lays down pixels in a chunk of shared memory in your application’s address space. This memory is also shared with the window server. The window server takes all of the window surfaces from all the running applications and layers them together onscreen.

Another CG context is an image context. Any drawing code you run will lay down pixels in a bitmap image. You can use this image to draw into other contexts or save to the file system as a PNG or JPEG. There is a PDF context as well. The drawing code you run doesn’t turn into pixels; instead it turns into PDF commands and is saved to a file. Later on, a PDF viewer (such as Adobe Acrobat or Mac Preview) can take those PDF commands and render them into something viewable.

Drawing in different contexts yields different results
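As an illustration of the bitmap case, here’s roughly what creating an image context by hand can look like (the size, color space, and drawing here are arbitrary):

import CoreGraphics

// A 256x256 RGBA bitmap context; drawing calls lay down pixels in this buffer.
let bitmapContext = CGContext(data: nil,
                              width: 256,
                              height: 256,
                              bitsPerComponent: 8,
                              bytesPerRow: 0,    // let CG pick a row stride
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)

bitmapContext?.setFillColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0)
bitmapContext?.fill(CGRect(x: 0, y: 0, width: 256, height: 256))
let image = bitmapContext?.makeImage()   // a CGImage you can display or save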

Coming up

Next time, a closer look at contexts, and some of the convenience APIs layered over Core Graphics.

Digging into the Swift compiler: Nerdcamp is the Shovel

One of the perks of working at Big Nerd Ranch is that we can attend one of our bootcamps. Not only do you learn a lot of stuff, they’re a lot of fun. But what can you do if you’ve already taken All The Things? A sleep-away Nerdcamp! A Nerdcamp is an immersive learning experience like our bootcamps, but self-directed on a topic of our choosing.

In January 2018, I participated in a nerdcamp with my colleague, Step Christopher. The topic was “Digging into the Swift compiler”. Step and I were having a chat one day and I mentioned that I wanted to attain some more Swift proficiency, and mentioned the compiler. One thing led to another, so we carved out a week in January. I definitely wanted a week away from my remote office (Böb the cat is very cute, but can be demanding at times) rather than, say, staying at our respective home bases and doing the telepresence thing.

The Plan

Not only is Swift open source, there is a public bug tracker (https://bugs.swift.org). Our plan was to grab some starter bugs and go from there.

This was attractive to me because I’ve had zero formal exposure to compiler innards. The small college I attended didn’t have a compiler design course. I’ve skimmed through the first half of the Dragon Book a couple of times. That’s about it. So, a big win for me would be navigating around the Swift compiler’s code base enough to locate where the bug is, understand what’s going on, and perhaps fix it.

The Venue

Step found us an Airbnb cabin up near Ellijay, GA, north of Atlanta. We knew we’d have necessities at hand such as a Walmart, a Waffle House, a bbq joint, and a big farmer’s market that sold fried pies. The cabin was down near the Coosawattee River along a scenically twisty / windey / no-guard-raily / scary / somewhat-gravely road. There was a kitchen, sleeping areas and a large living room. Most of the work occurred around the kitchen table, or sprawled on the gigantic sofa.

Starter Bug

I asked Jordan Rose (@UINT_MIN on Twitter) from the Swift compiler team at Apple if there were any interesting starter bugs. He recommended two – Step picked one regarding NSProxy subclassing, and I grabbed SR-1557, “Unused function-typed return values result in a hard error.”

The bug: given a function that returns a function and the return value is unused, it should be a warning (like other unused function values) rather than an error:

thingie() // yields error "expression resolves to an unused function"

After reproducing the bug with the current official release (Swift 4), I started hacking on a 4.1 branch, expecting it’d be somewhat more stable than master. I figured out all the important stuff like how to build all the things, how to rebuild all the things, how to run all the tests, how to run individual tests, how to run my custom compiler, and how to learn to accept my laptop fans running 24/7. After a fair amount of code spelunking and breakpointing in lldb, I came to the place in the code where that error was being generated. This was my favorite part of the process – the detective work.

It was with great satisfaction that I ended up at the same place a commenter on the bug had suggested looking. Removing an explicit error diagnostic generation removed the error and provided decent enough warnings.

Then came unit test whack-a-mole, a cycle of “run the test suite, watch it fail, fix the failing tests (usually by changing the expectation of errors/warnings), then repeat.” I found this process frustrating. After getting to the point of an error in a test file I couldn’t find, I declared victory-enough: I could find things in the code if I wanted, understand what was going on around it, and even kibbitz on Step’s bug. I kind of stopped learning new things at this point. And learning was the point of the week. Time to pivot.

LibSyntax

I’d been curious about some of Swift’s auxiliary tools such as SourceKit. They recently opened up a new thing: libSyntax. Harlan Haskins has a great introductory video, Improving Swift Tools with libSyntax.

I was hoping this was a tool for taking Swift code and extracting All The Information from it, but it’s primarily a whitespace-preserving lexer with a nice API. You can feed it Swift code and it’ll generate a parse tree from it: “Here is a struct-keyword-token with three spaces after it. Here’s a curly brace token with a newline in front of it. Here is an attribute-start token.” Given one of these parse trees, you can regenerate byte-for-byte the original source text. Goals of the project include using it for the compiler’s internal parsing, to help support editor tooling, and also to make a swift-format tool.

Once you’ve got one of these libsyntax trees, you can walk it looking for things like function or protocol declarations. You can also do surgery like renaming functions, cleaning up whitespace, or inserting other tokens.

In the course of exploring libsyntax, I learned about installing custom toolchains and getting Xcode and its UI affordances (like documentation lookup) to work with said custom toolchains. Safety tip: always try restarting Xcode a couple of times before you start reverse-engineering the swiftdoc file format to get at the library’s documentation.

To start out learning the API, I ported the C++ sample code from the libsyntax README to Swift. It’s really easy to use, if a bit tedious.

For example, getting libsyntax to serialize out this statement:

@greeble(bork) typealias Element = Int

involves making a syntax tree piece by piece:

import SwiftSyntax

let typeAliasKeyword = SyntaxFactory.makeTypealiasKeyword(leadingTrivia: .spaces(1),
                                                                 trailingTrivia: .spaces(1))
let elementID = SyntaxFactory.makeIdentifier("Element", leadingTrivia: .zero, trailingTrivia: .spaces(1))
let equal = SyntaxFactory.makeEqualToken(leadingTrivia: Trivia.zero, trailingTrivia: .spaces(1))
let intType = SyntaxFactory.makeTypeIdentifier("Int", leadingTrivia: .zero, trailingTrivia: .zero)
let initializer = SyntaxFactory.makeTypeInitializerClause(equal: equal, value: intType)

let openParen = SyntaxFactory.makeLeftParenToken()
let closeParen = SyntaxFactory.makeRightParenToken()

let balancedTokens = [openParen, SyntaxFactory.makeIdentifier("bork", leadingTrivia: .zero,
                                                              trailingTrivia: .zero), closeParen]
let balancedTokenSyntax = SyntaxFactory.makeTokenList(balancedTokens)

let atsign = SyntaxFactory.makeAtSignToken(leadingTrivia: .zero, trailingTrivia: .zero)
let attributeName = SyntaxFactory.makeIdentifier("greeble", leadingTrivia: .zero, trailingTrivia: .zero)

let attribute = SyntaxFactory.makeAttribute(atSignToken: atsign,
                                            attributeName: attributeName,
                                            balancedTokens: balancedTokenSyntax)
let attributes = SyntaxFactory.makeAttributeList([attribute])

let typeAlias = SyntaxFactory.makeTypealiasDecl(attributes: attributes,
                                                accessLevelModifier: nil,
                                                typealiasKeyword: typeAliasKeyword,
                                                identifier: elementID,
                                                genericParameterClause: nil,
                                                initializer: initializer)

This isn’t stuff you’d write a lot of by hand. But, it’s a great candidate for being driven by something else.  For example, invent a “generate your borkerplate” domain-specific language, and then make an interpreter that calls the libsyntax SyntaxFactory methods to blort out generated code.

You can also process existing code with libSyntax, discussed in Harlan’s libsyntax video.

This syntax rewriter:

class Renamer: SyntaxRewriter {
    static let nospaceTrivia = Trivia.spaces(0)
    static let spaceTrivia = Trivia.spaces(1)

    let bork = SyntaxFactory.makeIdentifier("bork",
                                            leadingTrivia: nospaceTrivia,
                                            trailingTrivia: spaceTrivia)

    override func visit(_ node: StructDeclSyntax) -> DeclSyntax {
        return super.visit(node.withIdentifier(bork))
    }

    override func visit(_ node: ClassDeclSyntax) -> DeclSyntax {
        return super.visit(node.withIdentifier(bork))
    }

    override func visit(_ node: FunctionDeclSyntax) -> DeclSyntax {
        return super.visit(node.withIdentifier(bork))
    }
}

This rewriter will rename struct, class, and function names to bork. You can see I stick to very practical examples when exploring new tools.

I had a ball with this stuff, even given my general distaste for tools that generate reams of code. I have some ideas for some tools that might be fun, such as one that could help with our course material development pipeline for marking up source code, or perhaps something that’ll look for adoptions of particular classes or protocols of a given name. For that I usually reach for a find command or a search regex in Xcode.

Decompression

Knowing the cabin was kind of out of the way (and that snow was coming), we went to Walmart and got provisions for the week. We even made healthy choices – cheese sticks and goldfish crackers were as naughty as I got, avoiding the giant tubs of cheesypoofs. Figuring we’d be snowed in, we got meal makings for the week.

We also brought some board games, so in the evenings after our brains were exhausted we played Forbidden Island, Settlers of Catan (two-player variant), and Small World. Step brought a Nintendo Switch (which I hadn’t seen before), so I got to experience Rocket League, Splatoon, and the new Mario and Zelda.

Fin

I had a good time – it was a nice time-away-from-everything similar to Big Nerd Ranch bootcamps. My brain was generally hurting by the end of the day, just like our bootcamps.  The bootcamp time dilation effect was in full force as well.  This is where the first day seems to go on for-ev-er, and by the end of the week time is just flying by.  I learned some things: compiler development isn’t my cup of tea – I’m definitely happier in app-land.  I can navigate effectively in a big foreign code base.  C++ still produces gigantic error messages. LibSyntax is pretty neat.  I’d totally do something like this again.

Core Graphics, Part 4: A Path! A Path!

In medias res? Check out Part 1, Part 2, Part 3, and Part 3.5 for all our posts on Core Graphics.

In Core Graphics, a path is a step-by-step description of some kind of shape. It could be a circle, a square, a valentine heart, a word frequency histogram or maybe a happy face. It doesn’t include any information such as pixel color, line width or gradients. Paths are primarily used for drawing – fill them with a color, or stroke them, that is, outline them with a color. The various GState parameters you saw earlier control how the path gets drawn, including all the different line attributes such as line joins and dash patterns.

This time around you see what makes up a path. Next time you’ll see some cool stuff you can do with paths beyond simple drawing.

Even though a path represents a recipe for an ideal shape, it needs to be rendered so that someone can actually see it. Each Core Graphics context renders the path the best it can. When drawing to a bitmap, any curves and diagonal lines are anti-aliased. This means using shading to fool the eye into thinking the shape is smooth even though it’s made out of square-shaped pixels. When drawing to a printer, the same thing happens, but with extremely small pixels. When drawing to a PDF, paths mostly just get dropped in place, because the Core Graphics drawing model is basically the same as the PDF drawing model. A PDF engine (such as Preview or Adobe Acrobat) gets to render those PDF paths rather than the Core Graphics engine.

You can play with paths in GrafDemo. Most of the screenshots here come from GrafDemo’s Path Parts, Arcs, and All The Parts windows.

Path Elements

A path is a sequence of points joined by a small number of primitive shapes (curves, arcs, and straight lines), called elements. You can imagine each element being a command to a specialized robot holding a pencil. You tell the robot to lift the pencil and move to a point in the Cartesian plane, but don’t leave any markings. You can tell the robot to put the pencil down and draw something from the current point to a new point. There are five basic path elements:

Move To Point – Move the current point to a new location without drawing anything. The robot lifts the pencil and moves its arm.

Add Line To Point – Add a line from the current point to a new point. The robot has put the pencil down and has drawn a straight line. Here is a single move to point (the bottom left) followed by two lines to points:

path.move(to: startPoint)
path.addLine(to: nextPoint)
path.addLine(to: endPoint)

Add Quad Curve To Point – Add a quadratic curve from the current point to a new point, using a single control point. The robot has the pencil down and is drawing a curved line. The line isn’t drawn directly to the control point – instead the control point influences the shape. The shape gets more extreme the farther away the control point is from the curve.

path.move(to: firstPoint)
path.addQuadCurve(to: endPoint, control: controlPoint)

Add Curve To Point – Add a cubic Bezier curve from the current point to a new point, using two control points. Like the quad curve, the control points affect how the line is drawn. A quad curve can’t make a loop with itself, but the bezier curve can. If you’ve ever used the Pen tool in Photoshop or Illustrator, you’ve worked with Bezier curves.

path.move(to: firstPoint)
path.addCurve(to: endPoint,
              control1: firstControl,
              control2: secondControl)

Close Subpath – Add a straight line segment from the current point to the first point of the path. More precisely, the most recent move-to-point. You’ll want to close a path rather than adding a line to the start position. Depending on how you’re calculating points, accumulated floating point round off might make the calculated end point different from the start point. This makes a triangle:

path.move(to: startPoint)
path.addLine(to: nextPoint)
path.addLine(to: endPoint)
path.closeSubpath()

Notice the name is Close Subpath. By performing a move-to operation, you can create a path with separated parts such as this bar chart from a new exercise in our Advanced iOS bootcamp. The bars are drawn using a single path. This path is used to color them in, and then it is stroked to clearly separate the individual bars.
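Here’s a minimal sketch of that idea – two disconnected “bars” living in a single path, filled and then stroked. The coordinates are made up, and context is assumed to be the current CGContext:

let path = CGMutablePath()

// First bar.
path.move(to: CGPoint(x: 20, y: 20))
path.addLine(to: CGPoint(x: 20, y: 100))
path.addLine(to: CGPoint(x: 50, y: 100))
path.addLine(to: CGPoint(x: 50, y: 20))
path.closeSubpath()

// Second, disconnected bar – the move(to:) starts a fresh subpath.
path.move(to: CGPoint(x: 60, y: 20))
path.addLine(to: CGPoint(x: 60, y: 70))
path.addLine(to: CGPoint(x: 90, y: 70))
path.addLine(to: CGPoint(x: 90, y: 20))
path.closeSubpath()

context.addPath(path)
context.fillPath()      // filling consumes the context's path...

context.addPath(path)   // ...so add it again before stroking
context.strokePath()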

Isn’t that convenient?

Simple shapes can be tedious to make with just the five basic path elements. Core Graphics (a.k.a. CG) provides some convenience functions to add common shapes, such as a rectangle, oval, or a rounded rectangle:

let squarePath = CGPath(rect: rect1, transform: nil)
let ovalpath = CGPath(ellipseIn: rect2, transform: nil)
let roundedRectanglePath = CGPath(roundedRect: rect3,
                                  cornerWidth: 10.0,
                                  cornerHeight: 10.0,
                                  transform: nil)

The calls take a transform object as their last parameter. You will see more about transforms in a future posting, so just pass nil for now. The above calls (CGPath(rect:transform:), CGPath(ellipseIn:transform:) and CGPath(roundedRect:cornerWidth:cornerHeight:transform:)) produce these shapes:

There are also functions that let you make more complex paths in a single call, such as multiple rectangles or ellipses, multiple line segments, or an entire other path.
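A quick sketch, reusing the rects from above and assuming points and someOtherPath already exist; each call keeps accumulating into the same mutable path:

let path = CGMutablePath()

path.addRects([rect1, rect2])      // several rectangles at once
path.addEllipse(in: rect3)         // an oval
path.addLines(between: points)     // a run of connected line segments
path.addPath(someOtherPath)        // splice in an entire other path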

Noah’s ARCtangent

You can also add one of three flavors of arc, which are sections of a circle’s edge. Which one you choose to use depends on what values you have handy.

Arc – Give it the center of the circle, its radius, and the starting and ending angles (in radians) of the arc segment you want. The section of the circle between the start and ending angles (going clockwise or counter-clockwise) is what will be drawn. The end of the arc becomes the current point. This code draws the left-hand line, plus the circle:

path.move(to: startPoint)
path.addLine(to: firstSegmentPoint)
path.addArc(center: centerPoint,
            radius: radius,
            startAngle: startAngle,
            endAngle: endAngle,
            clockwise: clockwise)

Relative Arc – This is similar to the regular arc. Give it the center of the circle, the radius, and the start angle. But rather than giving it an end angle, you tell it how many radians to sweep forward or backward from the start angle:

path.move(to: startPoint)
path.addLine(to: firstSegmentPoint)
path.addRelativeArc(center: centerPoint,
                    radius: radius,
                    startAngle: startAngle,
                    delta: deltaAngle)

Arc to Point – This one’s kind of weird. You give it the circle’s radius, and two control points. Under the hood, the current point is connected to the first control point, and then to the second control point forming an angle. These lines are then used to construct a circle tangent to those lines with the given radius. I call this flavor of arc “Arc to Point” because the underlying C API is named CGContextAddArcToPoint.

path.move(to: startPoint)
path.addLine(to: firstSegmentPoint)
path.addArc(tangent1End: tangent1Point,
            tangent2End: tangent2Point,
            radius: radius)

While I was trying to come up with a good use for this function, fellow Nerd Jeremy W. Sherman had a cool application: it sounds handy if you wanted to do something like cross-hatching a curved surface – think “shading an ink drawing of the tip of a sword” – where you can repeat with the same tangents and vary the radius to draw the arcs further and further away from the tip.

You may have noticed that these arc calls can introduce straight line segments to connect to the circle’s arc. Beginning a new path with the first two arc calls won’t create the connecting line segment. Arc to point could include that initial segment.

Path vs Context operations.

There are two ways to create a path in your code. The first way is by telling the context: “Hey, begin a new path” and start accumulating path elements. The path is consumed when you stroke or fill the path. Gone. Bye-bye. The path is also not saved or restored when you save/restore the GState – it’s actually not part of the GState. Each context only has a single path in use.

Here’s the current context being used to construct and stroke a path:

let context = UIGraphicsGetCurrentContext()!
context.beginPath()
context.move(to: controlPoints[0])
context.addQuadCurve(to: controlPoints[1], control: controlPoints[2])
context.strokePath()

These are great for one-off paths that are made once, used, and forgotten.

You can also make a new CGMutablePath path object (a mutable subclass of the CGPath type, similar to the NSArray / NSMutableArray relationship) and accumulate the path components into that. This is an instance you can hang on to and reuse. To draw with a path object, you add the path to the context and then perform the stroke and/or fill operation:

let path = CGMutablePath()
path.move(to: controlPoints[0])
path.addQuadCurve(to: controlPoints[1], control: controlPoints[2])

context.addPath(path)
context.strokePath()

For shapes that you use often (say the suit symbols in a card game), you would want to make a heart path and club path once and use them to draw over and over.
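A sketch of that pattern – build the path once and reuse it every time you draw. The diamond is a stand-in for a real suit symbol, and context is assumed to be the current CGContext:

// Built once, reused for every draw.
let diamondPath: CGPath = {
    let path = CGMutablePath()
    path.move(to: CGPoint(x: 20, y: 0))
    path.addLine(to: CGPoint(x: 40, y: 30))
    path.addLine(to: CGPoint(x: 20, y: 60))
    path.addLine(to: CGPoint(x: 0, y: 30))
    path.closeSubpath()
    return path
}()

// Later, inside drawing code:
context.addPath(diamondPath)
context.fillPath()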

How to Make?

So how do you actually make useful and interesting paths, such as heart-shapes or smiley faces? One approach is to do the math (and trial and error) and calculate where points, lines, curves, and arcs need to go.

Another approach is with software tools. There are applications that let you draw your shapes, and then emit a pile of CG code you can paste into your application. There are also libraries that can take data in another representation (such as from Illustrator, PDF, or SVG) and turn it into paths. I used SVG for the clickable map of the world demo app for Protocols part 2: Delegation.

Path, Deconstructed

Core Graphics paths are opaque data structures. You accumulate path elements and then you render it in a context. To peer inside, use CGPath’s apply(info:function:) method to iterate through the path components. You supply a function (in Swift you can use a closure) that gets called repeatedly for each path element. (You can ignore the info parameter by passing nil. It’s a holdover from the C API that underlies Swift’s Core Graphics API. In C you would have to supply a function and pass in any objects you might want to use inside the function. With closures you can just capture what you need.)

Also due to its C heritage, the function / closure is passed an UnsafePointer<CGPathElement>. This is a pointer to a CGPathElement in memory. You have to dereference that pointer via pointee to get at the actual CGPathElement. The path element has an enum value that represents the kind of element, and an UnsafeMutablePointer<CGPoint> that points to the first CGPoint in an array of points. It’s up to you to figure out how many points you can read safely from that array.

Here is a CGPath extension that lets a path dump out its contents. You can also grab it from this gist:

import CoreGraphics

extension CGPath {

    func dump() {
        self.apply(info: nil) { info, unsafeElement in
            let element = unsafeElement.pointee

            switch element.type {
            case .moveToPoint:
                let point = element.points[0]
                print("moveto - \(point)")
            case .addLineToPoint:
                let point = element.points[0]
                print("lineto - \(point)")
            case .addQuadCurveToPoint:
                let control = element.points[0]
                let point = element.points[1]
                print("quadCurveTo - \(point) - \(control)")
            case .addCurveToPoint:
                let control1 = element.points[0]
                let control2 = element.points[1]
                let point = element.points[2]
                print("curveTo - \(point) - \(control1) - \(control2)")
            case .closeSubpath:
                print("close")
            }
        }
    }

}

Printing out the path that created the arc to point image earlier shows that the arc becomes a sequence of curveTo operations and connecting straight lines:

path.move(to: startPoint)
path.addLine(to: firstSegmentPoint)
path.addArc(tangent1End: tangent1Point,
            tangent2End: tangent2Point,
            radius: radius)
path.addLine(to: secondSegmentPoint)
path.addLine(to: endPoint)
moveto - (5.0, 91.0)      // explicit code
lineto - (72.3, 91.0)     // explicit code
lineto - (71.6904767391754, 104.885702433811)   // added by addArc
curveTo - (95.5075588575432, 131.015122621923)
        - (71.0519422129889, 118.678048199439)
        - (81.7152130919145, 130.376588095736)
curveTo - (113.012569145714, 124.955236840146)
        - (101.903264013406, 131.311220082842)
        - (108.168814214539, 129.14221144167)
lineto - (129.666666666667, 91.0) // explicit code
lineto - (197.0, 91.0)   // explicit code

Even a “simple” oval created with CGPath(ellipseIn:transform:) is somewhat complicated:

curveTo - (62.5, 107.0) - (110.0, 86.4050984922165) - (88.7335256169627, 107.0)
curveTo - (15.0, 61.0) - (36.2664743830373, 107.0) - (15.0, 86.4050984922165)
curveTo - (62.5, 15.0) - (15.0, 35.5949015077835) - (36.2664743830373, 15.0)
curveTo - (110.0, 61.0) - (88.7335256169627, 15.0) - (110.0, 35.5949015077835)
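
For reference, output like that comes from dumping an ellipse path along these lines (the rect is inferred from the numbers above, so treat it as an approximation):

let ovalPath = CGPath(ellipseIn: CGRect(x: 15, y: 15, width: 95, height: 92),
                      transform: nil)
ovalPath.dump()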

Up Next

This time you saw what went into making a path, drawing it, and peering inside. There’s a lot more you can do with paths, coming up next time.

The post Core Graphics, Part 4: A Path! A Path! appeared first on Big Nerd Ranch.

Articulation Accents (Sun, 20 Nov 2016)
https://bignerdranch.com/blog/articulation-accents/

Is it possible to ask a question that leads you directly to fixing a bug? If so, how do you find those questions to ask them?

I’ve got a bug I’m trying to fix. We’ve all got bugs we’re trying to
fix. I’ve got a bunch of bug-fixing tools at my disposal that I’ve written about
in my thoughts on debugging
and elsewhere on this blog. One of my favorite tools is asking
questions.

A recent experience made me think: is
it possible to ask a question that leads you directly to a solution?
If so, how do you find those questions to ask them? It’s a meta-question.

How do you do that?

At the Ranch we use Slack for a lot of our internal
communication. Sometimes it’s work-related channels dedicated to a
project, and we have for-fun channels related to hobbies such as yoga or
Pokemon. We also have a channel for iOS/macOS/tvOS/watchOS programming.
A common occurrence is “I’m having trouble with this API—any ideas?”

Sometimes it leads to a pairing session if it’s something weird or
nasty. Sometimes there’s a history lesson if someone knows why an API
is behaving oddly. Usually there’s some back and forth to get more
details about the problem.

One day last summer, one of the nerds was having a problem with images on iOS:

Joseph: why would UIGraphicsGetImageFromCurrentImageContext()
return nil if I have just created a new bitmap graphics context
via UIGraphicsBeginImageContext(size)? I’m playing with Xcode 8b2

Joseph: okay, wrong question. why would UIGraphicsBeginImageContext(size) fail? UIGraphicsGetCurrentContext() is returning nil

Joseph: nvm, I’m changing professions.

MarkD: heh. how big is the size?

MarkD: (and if it’s iOS 10 only, there’s a new UIGraphicsRenderer pile of classes)

Joseph: shakes his head. How do you do that?

MarkD: do what?

Joseph: size is 0,0. how did you know the right question to ask?

MarkD: it’s the only articulation point in the call, so verify it’s sane

Joseph: I got the size from my view. assumed it was valid and mentally moved on.

MarkD: “everything you know is wrong” 🙂

Articulation Points

So, how did I know what was the right question to ask? I sometimes
think of code streams in terms of articulation points—places where interesting
things happen. An articulation point is a concept from graph theory: it
is a node in a graph that, when removed, would split the graph into
two or more smaller graphs. Given this graph:

Articulation points

The nodes A, B, and C are articulation points. Remove any one of
those and the graph is split into two parts. Removing any of the
other nodes will leave a connected graph.
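
Here’s a brute-force Swift sketch of that definition (the dictionary-of-sets graph representation is my own, and real implementations use a smarter DFS-based algorithm): a node is an articulation point if removing it increases the number of connected components.

func connectedComponents(_ adjacency: [String: Set<String>],
                         excluding removed: String? = nil) -> Int {
    var unvisited = Set(adjacency.keys)
    if let removed = removed { unvisited.remove(removed) }

    var components = 0
    while let start = unvisited.first {
        components += 1
        var stack = [start]
        while let node = stack.popLast() {
            guard unvisited.remove(node) != nil else { continue }
            for neighbor in adjacency[node, default: []] where unvisited.contains(neighbor) {
                stack.append(neighbor)
            }
        }
    }
    return components
}

func articulationPoints(in adjacency: [String: Set<String>]) -> [String] {
    let baseline = connectedComponents(adjacency)
    return adjacency.keys.filter { connectedComponents(adjacency, excluding: $0) > baseline }
}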

When I’m looking at code (or imagining code in my head), I try to
focus my attention on the articulation points. For software,
articulation points are places that change the behavior of the
system under study. It might be code flow decisions or
changes in some object’s state. In the UIGraphics case, the only
articulation point was the size being passed in.

How to get to that conclusion?

When Joseph asked his question, I was reminded of code I’d written
before that draws into image contexts:

UIGraphicsBeginImageContextWithOptions(pageRect.size, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(context, percent, percent);
    CGContextTranslateCTM(context, -pageRect.origin.x, -pageRect.origin.y);
    [self.view.layer renderInContext: context];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

The first two lines of code are the only interesting ones here. They set up some
state. And if the second line of code returns nil, there’s nothing useful
that can be done anyway. So I mentally focused on them:

UIGraphicsBeginImageContextWithOptions(pageRect.size, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext(); // this is returning nil

UIGraphicsGetCurrentContext takes no arguments and just returns a Core
Graphics
context. It
only reacts to whatever ambient state has been set up prior to being called.
It is fundamentally uninteresting.

Therefore the only interesting code is

UIGraphicsBeginImageContextWithOptions(pageRect.size, YES, 0.0);

The last two parameters are “should the context be opaque” and “what
is the scale factor to use”. Passing 0.0 for the scale parameter is a
magic value to use the scale factor of the device’s main screen.

In fact, there’s a convenience version that just takes the size:

UIGraphicsBeginImageContext(pageRect.size);

The only thing possible that could affect subsequent calls is the size parameter.

Going Halvsies

The reason why I look for articulation points is that they’re usually
nice places that can split your problem space into multiple pieces.
Run an experiment
that checks the size being passed to UIGraphicsBeginImageContextWithOptions.
If the size is bad, the next step is to
track down exactly why the size is bad. If the size is good, then
you’re going to have a bad day trying to figure out what’s going wrong inside
of UIKit’s graphics machinery. Luckily
problems are pretty much all my fault (I have a Hierarchy of Blame),
so I assume that I’ll be looking at where the size came from rather
than having to debug something deep in UIKit.
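
Here’s a hedged Swift sketch of that experiment, folded into the kind of snapshot helper the Objective-C code above is doing (the function name is mine, not from the original thread):

import UIKit

func snapshotImage(of view: UIView) -> UIImage? {
    let size = view.bounds.size

    // The articulation point: a degenerate size is the only input that can make
    // the calls below fall over, so verify it before going any further.
    guard size.width > 0, size.height > 0 else { return nil }

    UIGraphicsBeginImageContextWithOptions(size, true, 0.0)
    defer { UIGraphicsEndImageContext() }

    view.layer.render(in: UIGraphicsGetCurrentContext()!)
    return UIGraphicsGetImageFromCurrentImageContext()
}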

What I consider an articulation point changes depending on the behavior of the
bug and the system. Returning to my originally imagined chunk of
code:

UIGraphicsBeginImageContextWithOptions(pageRect.size, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextScaleCTM(context, percent, percent);
    CGContextTranslateCTM(context, -pageRect.origin.x, -pageRect.origin.y);
    [self.view.layer renderInContext: context];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

What if my bug was that the contents of the image were drawn in the wrong
place? Creating the image context is no longer interesting to
me. It works. The articulation points that control the drawing are the two
transformation matrix calls, so those would be the place where I’d
start asking the questions. Questions like “do I have the order of matrix
operations correct?” and “are the scale and translation values
reasonable?”

To answer Joseph’s question from earlier, I knew to ask that
question because, after boiling everything down, it was the only
question I could ask. It’s kind of nice not really having any choices.
Does that mean Joseph is a dummy for not
realizing it? Of course not. I happened to have the luxury of coming
into a bug at the latest possible step after a lot of legwork had
been done. He had already waded through tons of other code and
possibilities, and got sidetracked with unrelated details, like we all do.

The post Articulation Accents appeared first on Big Nerd Ranch.

Dude, Where’s my Call? (Tue, 16 Aug 2016)
https://bignerdranch.com/blog/dude-wheres-my-call/

Imagine one day you’re feeding some innocuous looking code to a Swift compiler:

// xcrun -sdk macosx swiftc -emit-executable cg.swift

import CoreGraphics

let path = CGPathCreateMutable()
CGPathMoveToPoint(path, nil, 0.0, 23.0)

And then you get a smackdown:

cg.swift:7:12: error: 'CGPathCreateMutable()' has been replaced by 'CGMutablePath.init()'
<unknown>:0: note: 'CGPathCreateMutable()' has been explicitly marked unavailable here
cg.swift:8:1: error: 'CGPathMoveToPoint' has been replaced by instance method 'CGMutablePath.moveTo(_:x:y:)'
<unknown>:0: note: 'CGPathMoveToPoint' has been explicitly marked unavailable here

Where’d it go? It got renamed.

One of Swift 3’s big features is “The Grand Renaming”, brought about
via Swift-Evolution proposal SE-0005 (Better Translation of
Objective-C APIs Into
Swift)
and SE-0006 (Apply API Guidelines to the Standard
Library).
The Grand Renaming renames operations in C and Objective-C
APIs giving them a Swiftier feel. There’s a migrator in Xcode that
will massage your Swift 2 code into the new style. It’ll perform a lot
of the mechanical changes, leaving you with some mop-up due to other language
changes, such as removal of the C for loop.

Some of the renamings are pretty mild, such as this one in NSView:

// Swift 2
let localPoint = someView.convertPoint(event.locationInWindow, fromView: nil)

// Swift 3
let localPoint = someView.convert(event.locationInWindow, from: nil)

Here Point was removed from the method’s base name. You know you’re
dealing with a point, so there’s no need to repeat that fact. fromView was
renamed to just from because the word View was only providing redundant type
information, not making the callsite any clearer.

Other changes are much bigger, such as from Core Graphics:

// Swift 2 / (Objective-C)
let path = CGPathCreateMutable()
CGPathMoveToPoint (path, nil, points[i].x, points[i].y)
CGPathAddLineToPoint (path, nil, points[i + 1].x, points[i + 1].y)
CGContextAddPath (context, path)
CGContextStrokePath (context)

// Swift 3
let path = CGMutablePath()
path.move (to: points[i])
path.addLine (to: points[i + 1])

context.addPath (path)
context.strokePath ()

Whoa. That’s pretty
huge. The API now looks like a pleasant Swift-style API rather than a
somewhat old-school C API. Apple totally changed the Core Graphics
(and GCD) APIs in Swift to make them nicer to use. You cannot use the
old-school CG C API in Swift 3, so you will need to become accustomed
to the new style. I ran GrafDemo, the demo program for my Core
Graphics
postings

through the auto-translator (twice). You can see the before and
after for the first version of Swift3
in this pull request, and for Xcode8b6’s version of Swift3 in this pull request.

What Did They Do?

The Core Graphics API is fundamentally a bunch of global variables and
global free functions. That is, functions that aren’t directly tied
to some other entity like a class or a struct. It’s pure convention
that CGContextAddArcToPoint operates on CGContexts, but there’s
nothing stopping you from passing in a CGColor. Outside of blowing up at
runtime, that is. It’s C-style object-orientation where you have an opaque
type that’s passed as the first argument, as a kind of magic cookie.
CGContext* functions take a CGContextRef. CGColor* functions take a
CGColorRef.

Through some compiler magic, Apple has transformed these opaque
references into classes, and has added methods to these classes that
map to the C API. When the compiler sees something like

let path = CGMutablePath()
path.addLines(between: self.points)
context.addPath(path)
context.strokePath()

It is actually, under the hood, emitting this sequence of calls:

let path = CGPathCreateMutable()
CGPathAddLines(path, nil, self.points, self.points.count)
CGContextAddPath(context, path)
CGContextStrokePath(context)

“New” Classes

These are the common opaque types that have gotten the Swift 3.0 treatment (omitting some of the esoteric types like CGDisplayMode or CGEvent), as well as a representative method or two:

  • CGAffineTransform – translateBy(x: 30, y: 50), rotate(by: CGFloat.pi / 2.0)
  • CGPath / CGMutablePath – contains(point, using: .evenOdd), addRelativeArc(center: x, radius: r, startAngle: sa, delta: deltaAngle)
  • CGContext – context.addPath(path), context.clip(to: cgrectArray)
  • CGBitmapContext (folded into CGContext) – let c = CGContext(data: bytes, width: 30, height: 30, bitsPerComponent: 8, bytesPerRow: 120, space: colorspace, bitmapInfo: 0)
  • CGColor – let color = CGColor(red: 1.0, green: 0.5, blue: 0.333, alpha: 1.0)
  • CGFont – let font = CGFont("Helvetica" as CFString), font.fullName
  • CGImage – image.masking(imageMask), image.cropping(to: rect)
  • CGLayer – let layer = CGLayer(context, size: size, auxiliaryInfo: aux), layer.size
  • CGPDFContext (folded into CGContext) / CGPDFDocument – context.beginPDFPage(pageInfo)

CGRect and CGPoint already had a set of nice extensions added prior to Swift 3.
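
A couple of those conveniences, for reference (the rect values here are made up):

import CoreGraphics

let rect = CGRect(x: 0, y: 0, width: 100, height: 50)
let inset = rect.insetBy(dx: 10, dy: 5)
let (slice, remainder) = rect.divided(atDistance: 30, from: .minXEdge)
print(inset, slice, remainder)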

How Did They Do It?

The compiler has built-in linguistic transforms that turn
Objective-C’s naming style into a more Swifty form. It drops
duplicate words and words that just repeat type information. It also
takes some words that were before the opening parenthesis in function
calls and moves them into the parens as argument labels. This
automatically cleans up a great number of calls.

Humans, of course, like to make verbal languages subtle and
complicated, so there are mechanisms in the Swift compiler that allow
for manual overrides of what the automated translator comes up with.
These are implementation details (so don’t depend on them in shipping
products), but they offer insight into the work that’s been done to
make existing API be available in Swift.

One mechanism involved are “overlays”, which are secondary libraries
that the compiler imports when you bring in a framework or a C
library. The Swift
Lexicon
describes overlays as “augmenting and extending a library on the
system when the library on the system cannot be modified.” Those
really nice extensions on CGRect and CGPoint that have been there
forever, such as `someRect.divide(30.0, fromEdge: .MinXEdge)`? They
came from overlays. The toolchain thinks “Oh, I see you’re linking
against Core Graphics. Let me also include this set of convenience
functions.”

There’s another mechanism,
apinotes,
especially
CoreGraphics.apinotes,
which controls naming and visibility on a symbol-by-symbol basis from
Core Graphics.

For example, in Swift there is no use for calls like CGRectMake
to initialize fundamental structures because there are initializers for them.
So make these calls unavailable:

# The below are inline functions that are irrelevant due to memberwise inits
- Name: CGPointMake
  Availability: nonswift
- Name: CGSizeMake
  Availability: nonswift
- Name: CGVectorMake
  Availability: nonswift
- Name: CGRectMake
  Availability: nonswift

And then other mappings – if you see this in Swift, call that function:

# The below are fixups that inference didn't quite do what we wanted, and are
# pulled over from what used to be in the overlays
- Name: CGRectIsNull
  SwiftName: "getter:CGRect.isNull(self:)"
- Name: CGRectIsEmpty
  SwiftName: "getter:CGRect.isEmpty(self:)"

If the compiler sees something like rect.isEmpty(), it will
emit a call to CGRectIsEmpty.

There are also method and function renames:

# The below are attempts at providing better names than inference
- Name: CGPointApplyAffineTransform
  SwiftName: CGPoint.applying(self:_:)
- Name: CGSizeApplyAffineTransform
  SwiftName: CGSize.applying(self:_:)
- Name: CGRectApplyAffineTransform
  SwiftName: CGRect.applying(self:_:)

When the compiler sees rect.applying(transform), it knows to emit
CGRectApplyAffineTransform.
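
At a call site, that mapping looks like this (a small sketch; the values are arbitrary):

import CoreGraphics

let quarterTurn = CGAffineTransform(rotationAngle: .pi / 2)
let point = CGPoint(x: 10, y: 0).applying(quarterTurn)                       // emits CGPointApplyAffineTransform
let rect = CGRect(x: 0, y: 0, width: 10, height: 20).applying(quarterTurn)   // emits CGRectApplyAffineTransform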

The compiler only automatically renames Objective-C APIs because of the well-defined
nomenclature. C APIs (like Core Graphics) have to be done
via overlays and apinotes.

What You Can Do

You can do things similar to the apinotes mechanism
via NS_SWIFT_NAME. You use this macro to annotate your C/Objective-C
headers, indicating what name to use in Swift-land. The compiler
will make the same kind of substitutions (“If I see X, I’ll emit Y”)
for your NS_SWIFT_NAMEs.

For example, here’s a call from the Intents (Siri) framework:

- (void)resolveWorkoutNameForEndWorkout:(INEndWorkoutIntent *)intent
                         withCompletion:(void (^)(INSpeakableStringResolutionResult *resolutionResult))completion
     NS_SWIFT_NAME(resolveWorkoutName(forEndWorkout:with:));

Calling it from Objective-C looks like this:

NSObject<INEndWorkoutIntentHandling> *workout = ...;

[workout resolveWorkoutNameForEndWorkout: intent  withCompletion: ^(INSpeakableStringResolutionResult *resolutionResult) {
     ...
}];

while in Swift it would be

let workout: INEndWorkoutIntentHandling = ...
workout.resolveWorkoutName(forEndWorkout: intent) {
    response in
    ...
}

NS_SWIFT_NAME, coupled with Objective-C’s lightweight
generics, nullability annotations, and the Swift compiler’s automatic
renaming of Objective-C API, gets you interfaces that feel right at
home in Swift.

It is possible to make your own overlays and apinotes, but those are
intended to be used when Swift is shipped with Apple’s SDKs.
You can distribute apinotes with your own frameworks, but overlays need
to be built from within a Swift compiler tree.

For making Swiftier APIs yourself, you should do as much as you can
with header audits (such as adding nullability annotations and NS_SWIFT_NAME),
and then toss in a few Swift files in your project as a fake overlay to
cover any additional cases.
These “overlay” files need to be shipped as source until there’s ABI stability.

By grazing through the iOS 10 headers, it looks like newer APIs tend
to use NS_SWIFT_NAME, while older, more established API use
apinotes. This makes sense because the headers are shared amongst
different Swift versions and adding new NS_SWIFT_NAMEs to older,
established headers might break existing code without a compiler
change. Also, apinotes can be added by the compiler team or community
members while changes to header files require attention from the team
that owns the headers. That team might already be at capacity
getting ready to ship their own functionality.

Is It Good?

The Swift 3 version of Core Graphics is definitely much nicer and
more Swifty. To be honest, I’d prefer to work with something like
this on the Objective-C side as well. You do lose some googlability,
and have to do more mental translation when you see existing CG code
in Stack Overflow postings or in online tutorials. But it’s no worse
than the mental gymnastics needed for general Swift code these days.

There are some API incongruities due to the quasi OO nature of CG and
how it comes into Swift. From the CoreGraphics.apinotes:

- Name: CGBitmapContextGetWidth
  SwiftName: getter:CGContext.width(self:)
- Name: CGPDFContextBeginPage
  SwiftName: CGContext.beginPDFPage(self:_:)

CGBitmapContext and CGPDFContext calls are glommed on to
CGContext. This means that you can walk up to any CGContext and ask
for its width, or tell it to begin a PDF page. If you ask a non-bitmap context
for its width, you’ll get this runtime
error:

<Error>: CGBitmapContextGetWidth: invalid context 0x100e6c3c0.
If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.

So even though this API is a whole lot Swiftier, the compiler can’t
catch some kinds of API misuse. Xcode will happily offer completions
for calls that aren’t actually applicable. In a sense the C API was a
bit safer, because CGBitmapContextGetWidth sticks it in your
face that it expects a bitmap context even though the first
argument is technically just a plain old CGContextRef. I’m hoping
this is just a bug
(rdar://27626070).

If you want to see more about things like the Grand Renaming and tools
like NS_SWIFT_NAME, check out WWDC 2016 Session 403 – iOS
API Design Guidelines.

(Thanks to UltraNerd Zachary
Waldowski
for insight into the dark corners of Swift and how the compiler works.)

The post Dude, Where’s my Call? appeared first on Big Nerd Ranch.

Hannibal #selector (Wed, 06 Jul 2016)
https://bignerdranch.com/blog/hannibal-selector/

Just want a reminder of the syntax? Jump to the TL;DR.

The selector is key to Objective-C’s dynamic runtime nature. It’s
just a name that’s used, at runtime, as a key into a dictionary of function
pointers. Whenever you send a message to an Objective-C object,
you’re actually using a selector to look up a function to
call. Sometimes selectors bubble up in Cocoa/CocoaTouch API, and you
will need to deal with them in Swift.
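
Here’s a rough Swift sketch of that lookup-by-name dance, using current Swift spellings (the Greeter class is made up):

import Foundation

class Greeter: NSObject {
    @objc func sayHello() { print("hello!") }
}

let greeter = Greeter()
let selector = #selector(Greeter.sayHello)
if greeter.responds(to: selector) {     // is the selector in the lookup table?
    _ = greeter.perform(selector)       // dynamic dispatch through the selector
}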

In Cocoa/CocoaTouch, selectors are a way of telling an object like a
UIButton “When someone taps you, I want you to send this message to
this particular object.” The name of the method to invoke,
and the object to send the message to, are potentially not
known until runtime. Most of the time these connections are made in a
xib file or a storyboard, but you can do it in code too:

UIButton *button = [UIButton buttonWithType: UIButtonTypeInfoLight];
[button addTarget: self
        action: @selector(showAboutBox:)
        forControlEvents: UIControlEventTouchUpInside];

Tapping the button causes self to be sent the message
showAboutBox:. If there is no showAboutBox: method, you’ll die
at runtime with an unrecognized selector exception.

@selector(message:name:argument:names:too:) is an Objective-C
compiler feature that turns a sequence of characters that happens to
look like an Objective-C method name into some key useful for looking
up code at runtime. There is no type information encoded in the
selector, outside of the number of colons indicating the number of
arguments expected by the method.

This kind of API is common with Cocoa/CocoaTouch’s “target/action”
architecture, used in most of the UI controls, as well as with
NSNotificationCenter. This is a fairly old design—APIs
that take blocks/closures are more common these days.

Why this kind of convoluted machinery of storing a name and then later
looking it up at runtime? It lets you use very descriptive names for
handler functions. You can easily figure out what openAboutBox:
would do. If you had to subclass a Button class and override
tapped() all the time, it would be harder to determine at a glance
what the button handler does. It also allows tooling, such as
Interface Builder, to set up these connections without having to
generate code. It just saves the selector name in a resource file.

Selectors in Swift

You use Cocoa/CocoaTouch from Swift, and so you want button taps to
invoke Swift code. Swift needs to be able to use selectors. Selectors
aren’t part of the default Swift runtime behavior, because sending
arbitrary messages to objects can be unsafe. The compiler can’t guarantee
that the receiving object actually responds to that selector.

To have your Swift objects participate in the Objective-C runtime, you
will need to opt-in by having your class inherit from NSObject, or
decorate individual methods with @objc. This makes a Swift class
participate in Objective-C’s method dispatch mechanism. If you’re
curious about how Objective-C’s runtime does its thing, check out the
Inside the
Bracket
extravaganza.
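
Here’s a rough sketch of that opt-in, using the GroovyViewController and showAboutBox names from the snippet below (the method body is made up). UIViewController already descends from NSObject; the explicit @objc is what current Swift requires on methods referenced by #selector:

import UIKit

class GroovyViewController: UIViewController {
    // Exposed to the Objective-C runtime so target/action can find it by selector.
    @objc func showAboutBox() {
        print("showing the about box")
    }
}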

Here’s the equivalent code in Swift for adding a new callback to a UIButton:

button.addTarget(self,
    action: #selector(GroovyViewController.showAboutBox),
    forControlEvents: .TouchUpInside)

#selector is Swift’s equivalent of @selector. The # sigil serves
the same purpose as @ in Objective-C: here comes some compiler
magic, such as how
#available
guards code to protect it from running on too-old systems. In this
case, #selector is used to construct an Objective-C selector given a
description of a Swift function. You can see all the details about
#selector in swift-evolution proposal SE-0022 – Referencing the
Objective-C selector of a
method.

Why not just use a string, like “showAboutBox,” similar to how
@selector works? Method names get automatically renamed going
between Objective-C and Swift—sometimes an argument name goes before
the opening paren, sometimes after. Maybe there’s an NSError**
involved. Remembering all the rules is tedious and error-prone. If
you spend all your time in Swift, you don’t need to juggle a bunch of
Objective-C details in your head. Sounds like a great job for a
compiler, though.

When Swift sees #selector it examines the
method being referenced, and then derives a selector from it. Any
necessary name rewriting is done automatically. This is why you usually
supply the class name to #selector—the compiler can unambiguously
determine the information it needs about this particular method, such as, “Does
this thing actually exist?” or “Has the selector name been explicitly renamed?”

If the method you’re getting a #selector for is defined in the same class where
you’re referencing it (say in a view controller’s viewDidAppear referencing
methods in that same view controller), you can leave off the explicit class
name. Be aware that Xcode won’t properly autocomplete it for you, instead
only offering a function call-site rather than just a reference to the function.

Xcode completing a `#selector` without a class name component, suggesting a function call

For the rest of this post, I’ll be using the fully-qualified form.

Selector Syntax

Objective-C’s selector syntax is pretty simple – it just uses
@selector(methodName:arguments:). Swift’s is a bit more complicated.

In its simplest form, you need to provide a class name that defines
(or inherits) the message, and then the method name without any extra
decoration:

#selector(UIView.setNeedsDisplay)
#selector(GroovyViewSubclass.setNeedsDisplay)

It doesn’t matter how many arguments these methods take—the compiler
will synthesize the correct name with the correct number of colons in
the selector.

Swift has function overloading (same name, different arguments), while
Objective-C does not. You may need to disambiguate which Swift
function you want the selector to refer to.

Here’s an overloaded doStuff method:

class Thing {
    ...
    func doStuff(stuff: Int) {
        print("do Stuff \(stuff)")
    }

    func doStuff(stuff: Double, fluffy: String) {
        print("do Stuff \(stuff) - \(fluffy)")
    }
}

When you call doStuff directly, Swift can figure out which one of
these to use based on the arguments you pass. Indirect calling by
selector doesn’t have the luxury of knowing the argument’s types. If
you try to make a selector for doStuff, you will get an error about
ambiguous use of doStuff:

Error: Ambiguous use of 'doStuff'

The way you fix this is to include argument labels:

#selector(Thing.doStuff(_:fluffy:))

This says to build a selector that references the two-argument form of
doStuff, rather than the one-argument form. You can get more
details on this naming convention in swift-evolution proposal
SE-0021 – Naming Functions with Argument
Labels.

The Rename Game

Swift classes that participate in the Objective-C runtime have
selectors automatically built by the compiler. The Swift compiler
bases the selectors on the method’s name. Sometimes you may want to
expose a different name to Objective-C, a
name that’s more in line with what an Objective-C developer is
expecting.

You might also have overloaded functions that don’t differ by number
of arguments (just types). The selector then becomes ambiguous.
Here’s a pure-Swift class:

class Blorf {
    func takesAnArgument(fluffy: String) {
        print("in takes an argument: \(fluffy)")
    }

    func takesAnArgument(fluffy: Double) {
        print("taking a double \(fluffy)")
    }
}

This works fine in your app. Then you decide to inherit from NSObject
so a Blorf object can receive UIButton taps:

Error: Method takesAnArgument with ObjC selector conflicts with previous declaration with the same ObjC selector

Objective-C demands that all methods have unique selectors. Objective-C selectors don’t carry type information, so the selectors
for both takesAnArgument methods will be the same: @selector(takesAnArgument:). You either have
to rename one of them, or tell the compiler to keep the Swift name and
use a different name for the Objective-C selector:

@objc(takesADoubleArgument:)
func takesAnArgument(fluffy: Double) {
    print("taking a double \(fluffy)")
}

This fixes the compile error.

All The Things

There’s still one problem, though. Take a look at what you get when
you autocomplete takesAnArgument while trying to make a #selector:

Xcode autocomplete shows two identical takesAnArgument(_:) suggestions

Autocomplete doesn’t show type information. And we’re right back to
ambiguousness-land:

Error: Ambiguous use of 'takesAnArgument'

So the options now are to rename the method, or use the full
#selector to refer to a method unambiguously:

let selector = #selector(((Blorf.takesAnArgument(_:)) as (Blorf) -> (Double) -> Void))

That syntax is kind of intense. Here it is broken down:

  • #selector(...) – here comes a selector

  • (Blorf.takesAnArgument(_:)) – This is the Swift method name
    that you want a selector for.

  • (... as (Blorf) -> – this is the curried self, also known as the
    instance self. Just think of this as the type of a hidden
    parameter which ends up being self inside of the method.

  • ... (Double) -> Void) – this is the signature of the method
    without the self parameter. In this case, this selector
    references the version of takesAnArgument that takes a Double and
    returns nothing. This is the one that was decorated with
    @objc(takesADoubleArgument:), so the selector ultimately generated
    by this expression would be an Objective-C
    @selector(takesADoubleArgument:)

Swift 3 Additions

Swift 3 extends the #selector syntax with getter: and setter: arguments. When determining a selector for an Objective-C property, you need to specify
whether you want the setter or getter. Use it like this:

let sel = #selector(getter: UIViewController.preferredContentSize)

Older Syntax

#selector was introduced in Swift 2.2. Prior versions of Swift used a
String mechanism to automatically convert selector-looking strings
into selectors. If you use this syntax today, you will get a
deprecation warning.

You can also use Selector("someSelectorName") as an alternative syntax,
but the compiler will give you a warning suggesting you use #selector,
assuming it has seen this selector anywhere before. You can use this syntax to
generate a selector that the compiler hasn’t seen yet, perhaps as a
selector to methods that are dynamically loaded by plugins at
runtime.

Danger!

Remember that selectors in Objective-C don’t carry along type
information. The #selector syntax is just used to derive the proper
Objective-C selector to reference a particular Swift method. There is
no safety checking performed. This means you’re welcome to create a
Swift method that takes 17 generic enum parameters and use it for a
UIButton action. You’re also welcome to crash hard at runtime. Swift doesn’t make using selector-based API foolproof.

TL;DR

Here’s the syntax, as of Swift 2.2, for selectors. Unindented lines
indicate what particular kinds of methods are being #selectored, and
indented lines are using actual types. The first form includes the class name.

If you’re getting a selector for something in the current class (or a superclass), you can use the second, more compact form. Be aware that Xcode (at least as of 7.3) won’t properly autocomplete that form.

Basic selector:

#selector(AnotherClassName.unambiguousFunction)
    #selector(ViewController.showAboutBox)

#selector(unambiguousFunctionInThisClass)
    #selector(showAboutBox)

Property getters and setters:

#selector(getter|setter: AnotherClassName.propertyName)
    #selector(getter: ViewController.isEditing)
    #selector(setter: ViewController.isEditing)

#selector(getter|setter: propertyInThisClass)
    #selector(getter: isEditing)
    #selector(setter: isEditing)

Choosing between overloads with different argument lists:

#selector(AnotherClassName.overloadedFunction(_:label:label:))
    #selector(BadgerViewController.logStuff(_:args:))

#selector(overloadedFunctionInThisClass(_:label:label:))
    #selector(logStuff(_:args:))

Choosing between overloads that have the same argument list:

#selector(((AnotherClassName.ambiguousOverload(_:)) as (AnotherClassName) -> (ArgumentList) -> ReturnType))
    #selector(((Blorf.takesAnArgument(_:)) as (Blorf) -> (Double) -> Void))

#selector(((ambiguousOverload(_:)) as (ArgumentList) -> ReturnType))
    #selector(((takesAnArgument(_:)) as (Double) -> Void))

Want to learn more about Swift and Objective-C Interoperability?
Join our next Advanced iOS
Bootcamp!

The post Hannibal #selector appeared first on Big Nerd Ranch.

WWDC 2016: A Quick Look at APFS (Wed, 15 Jun 2016)
https://bignerdranch.com/blog/wwdc-2016-a-quick-look-at-apfs/

WWDC 2016. New Notification Center features? Eh. Bouncy sticker graphics and giant emoji in Messages? Blah. New file system? OMG. Apple File System. APFS. NEW FILE SYSTEM! (https://www.youtube.com/watch?v=-rTcfKfXwqo) That wasn’t on any Rumor Radar that I had seen.

So, Why?

Why a new file system? Apple’s documentation and session do a good job of describing their motivation: the old HFS+ design was introduced back in 1998, and was based in part on the original HFS, which was introduced back in 1985. (Historical note: the prior Mac file system, MFS, was a flat file system. It did not have directories. Folders were a Finder-only display mechanism. The H in the acronym stands for Hierarchical.) The iMac of 1998 had a 233 MHz G3 processor, 32 megs of RAM, and an incredibly huge 4 gigs of disk space. We’ve progressed a long way since then in terms of processor and memory capability.

Apple has done amazing things with HFS+ since its introduction, making it work for much larger disk volumes and holding much larger files than we used to manipulate. They also added journaling to give resilience in the face of sudden power loss and the occasional system crash.

But now it’s time to move on to something a bit more modern and flexible.

Dominic Giampaolo

When I first heard of APFS, I wondered if Dominic Giampaolo was involved. And sure enough, there he was, presenting at the APFS session.

Who is Dominic? He wrote the BeOS file system back in the 90s, which at the time was incredible, doing things unheard of in a PC operating system: high performance, journaling, and a clever file system metadata system that could be queried like a database. He later went to Apple, and soon thereafter HFS+ got extended attributes and journaling. And then after that we got Spotlight.

He also wrote a book on the BeOS file system, which is available as a PDF as well. It’s a fun read, even if file systems aren’t your primary technical love.

Obviously APFS is the work of a big team of talented engineers, but it’s nice knowing some of the humans behind the software.


Playing with APFS

The first version of APFS is available in the macOS Sierra WWDC beta. There are some limitations on what you can do with it—mainly you can’t boot from it. Also, Apple is definitely saying that they do not guarantee that an Apple File System volume created today will be readable in future releases. To play with it now, their guidance is to create a disk image and mount it.

Creating a volume

It’s very easy to create an empty volume. I figure if you’re excited enough about a new file system to have read this far, you’re comfortable in the terminal.

Use the hdiutil command to create and manipulate disk images:

% hdiutil create -size 100m -fs APFS -volname "APFS" new-filesystem.dmg
WARNING: You are using a pre-release version of the Apple File System called
APFS which is meant for evaluation and development purposes only.  Files
stored on this volume may not be accessible in future releases of OS X.

You should back up all of your data before using APFS and regularly back up
data while using APFS, including before upgrading to future releases of OS X.

Continue? [y/N]

You can tell by the wall of text that Apple is really serious that this is sharp-edged pre-release stuff.

The hdiutil command-line arguments are pretty self-explanatory. This creates a 100 meg disk image with the APFS file system, with the volume name also being APFS. new-filesystem.dmg is ready for mounting. You can double-click it in the Finder, or use hdiutil mount:

% hdiutil mount new-filesystem.dmg
/dev/disk3              GUID_partition_scheme
/dev/disk3s1            Apple_APFS
/dev/disk3s1s1          41504653-0000-11AA-AA11-0030654 /Volumes/APFS

Unfortunately, you can’t use the srcfolder argument to hdiutil to pre-populate a disk image from a directory. You get an “Operation not permitted” error – rdar://26822248. Also, be aware that creating a very large disk image could take a fair amount of time and seem to lock up your machine.

COWabunga

Copy On Write (a.k.a. COW) seems to be making a resurgence these days. It’s been used forever by the virtual memory system when you fork a new process. Rather than duplicating all the memory pages for the new process, they’re just linked. A page is only copied when you write to it.

COW is also used when implementing Swift value types. They’re often a structure that has a reference to a heavier, dynamically allocated structure. This structure is only copied when someone attempts to modify it.
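
Here’s a minimal sketch of that pattern for a hand-rolled Swift value type (the names are made up); the standard library collections do something similar under the hood:

final class Storage {
    var values: [Int]
    init(values: [Int]) { self.values = values }
}

struct Stack {
    private var storage = Storage(values: [])

    var top: Int? { return storage.values.last }

    mutating func push(_ value: Int) {
        // Copy the heap storage only if somebody else is sharing it.
        if !isKnownUniquelyReferenced(&storage) {
            storage = Storage(values: storage.values)
        }
        storage.values.append(value)
    }
}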

APFS’s “Cloning” feature is COW for the file system. When you duplicate files in the Finder (and via NSFileManager), the data isn’t duplicated, just a bunch of references to the same blocks on disk. These blocks are only copied if the file gets modified. There’s more detail in the APFS WWDC session.

You can prove to yourself that APFS is doing COW by using the diskutil utility. The diskutil manpage has a section describing the different moving parts of APFS:

DiskUtil

What I did was create an empty disk image and add a small text file and a cat photo.

diskutil APFS list will show you some interesting information:

% diskutil APFS list
WARNING:  You are using a pre-release version of the Apple File System called
          APFS which is meant for evaluation and development purposes only.
          Files stored on APFS volumes may not be accessible in future releases
          of OS X. You should back up all of your data before using APFS and
          regularly back up data while using APFS, including before upgrading
          to future releases of OS X.
You can pass the "-IHaveBeenWarnedThatAPFSIsPreReleaseAndThatIMayLoseData"
option between the "APFS" verb and the APFS sub-verb to bypass this message.

I love the sense of humor the low-level engineers have, as demonstrated by the -IHaveBeenWarnedThatAPFSIsPreReleaseAndThatIMayLoseData option.

Running this command gives you this screen of knowledge:

======================================================================================================
ENUMERATION OF ALL CURRENT APFS OBJECTS

APFS CONTAINER: REFERENCE:     disk3s1     Total Container Size = 104.8 MB (104816640 Bytes)
|                                          Container Free Space = 102.6 MB (102572032 Bytes)
|
|--<  APFS PHYSICAL STORE:     disk3s1
|
|-->  APFS VOLUME:             disk3s1s1   Volume Name = APFS (/Volumes/APFS)
|                                          Space-Sharing Current Volume Size = 1.5 MB (1548288 Bytes)
|
======================================================================================================

---------------------------------------------------------------------
APFS OBJECTS BY ITERATING ALL CURRENT DISKS WHILE CHECKING APFS ROLES

APFS PHYSICAL STORE = disk3s1   -> APFS CONTAINER REFERENCE = disk3s1
APFS VOLUME =         disk3s1s1 -> APFS CONTAINER REFERENCE = disk3s1
---------------------------------------------------------------------

The interesting part is the container volume storage part:

Space-Sharing Current Volume Size = 1.5 MB (1548288 Bytes)

The cat picture and the text take 1.5 megs of space. I then duplicated both files about 30 times. And ran the command again:


% diskutil APFS -IHaveBeenWarnedThatAPFSIsPreReleaseAndThatIMayLoseData list

And the interesting line is unchanged:


Space-Sharing Current Volume Size = 1.5 MB (1548288 Bytes)

Under HFS+ that would be upwards of sixty megs.

Just to prove that it’s not miscalculating, I edited several of the cat pictures, re-ran the command and got:

Space-Sharing Current Volume Size = 9.0 MB (8986624 Bytes)

So only the edited files consume additional space.

What’s In It For Us

For the most part, this new file system is just an interesting aside. It won’t have that much of an impact for almost anyone’s day-to-day work. Once it’s deployed, we’ll notice our disk space being used more efficiently and many I/O operations being more efficient. If you program for the Mac, the sparse file support will make using core dump files more tractable. Some kinds of operations, such as calculating the on-disk footprint of a directory, will need different implementations on APFS, because a simple “walk the files and sum them up” will over-report disk usage in the face of cloning.

But me, I like low-level things like file systems. It’s interesting to see the kinds of thought involved and what some of the different tradeoffs are (such as APFS optimizing for latency vs. raw throughput), and it’s good nerdy fun.

The post WWDC 2016: A Quick Look at APFS appeared first on Big Nerd Ranch.

Hi! I’m #available! (Mon, 18 Jan 2016)
https://bignerdranch.com/blog/hi-im-available/

Swift 2.0 introduced a new language construct, #available, that helps solve
the problems that crop up when your app needs to support multiple versions of
iOS or OS X.

Using some API that’s available only in iOS 9? The Swift availability
features prevent you from trying to run that code when the app is running on iOS 8.

What it is Not

But first: there’s a common misconception that #available is used for including or excluding code at compile time. Given the name, it’s reasonable to think, “This call is available only on watchOS, so this extension I’m writing for iOS
shouldn’t include that code at all because it’s not available.”

#available is the
wrong tool for this. Code in #available clauses always compiles. You’ll want to
use the #if build configuration statement instead.
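
For the compile-time job, #if is the tool (a minimal sketch; the constant is made up):

#if os(watchOS)
    // Compiled only when building for watchOS; other platforms never see this code.
    let platformGreeting = "hello from the watch"
#else
    let platformGreeting = "hello from everywhere else"
#endif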

The Problem

Apple likes to update their operating systems on a regular basis—new features
get added, new APIs are introduced and older crufty APIs are occasionally
deprecated. Except for a few specific deviations in the past (like ARC),
Apple never back-ports new stuff to older OSes.

As app developers, we don’t have the luxury of shipping our software exclusively on the
latest-and-greatest OS version. We want to use the new shiny toys, but we also
need to be able to work on older versions of the OS that don’t have these
features. When you ship an app, you have a chunk of executable code for each
chip architecture you’re supporting, such as armv7 or arm64. You don’t have
separate chunks of executable code for different platform versions. The same
code will run on iOS 9, iOS 8 and as far back in the OS catalog that
you want to support.

Stay on Target

There are two OS version numbers that are involved when building software in the
Apple ecosystem.

The first is the Target SDK version. SDK stands for “Software Development Kit,”
which is the set of libraries and headers for a particular OS version. This is
the version of Apple’s APIs that you compile and link against. The SDK describes
the set of
API available to you. Linking against the iOS 9 SDK means you can use
any API that comes with iOS 9. You won’t be able to directly use stuff introduced in
iOS 10. Modern Xcodes are tightly coupled to the SDKs for the latest OS
versions, so if you upgrade your Xcode, you will be linking against a newer version
of the SDK.

The other version number is the Deployment Target. This declares
the oldest OS version your app will support. How far back you decide to support
is a business decision based on how much work you are willing to do for customers on
older versions of the OS.

So, a modern App might use iOS 9 as the Target SDK, and iOS 7 as the deployment
target. This means that you can run on iOS 7, iOS 8 and iOS 9, and that you
have available to you any iOS 9 calls when actually running on iOS 9.

There’s a problem, though. What happens if you use a new iOS 9 class, such as
NSDataAsset, but your user is running iOS 8? There is no NSDataAsset in the libraries that shipped with their iPad, so your app won’t work. Depending on
language and circumstances, the behavior of using newer API on older systems
could range from silently not doing what you want all the way to immediate
process termination.

The Solution

The solution to the problem is to never enter code paths intended for newer
versions of the OS.

Well, that was easy. Time for lunch!

While the solution is easy to describe, it is harder to implement. Back in the
old days, we’d do a lot of work explicitly checking that a particular feature
was there before using it. Even then, improper API use would still slip through
to shipping apps.

Why is the solution harder to implement? It turns out there are actually two
problems that need to be solved. The first happens at compile time: I want to
know if I’m accidentally using API that may be too fresh for my deployment target. It’s much
better to catch errors at compile time.

The other problem happens at run time: you need to construct your program logic so
that you decide whether or not to enter the code paths that use newer API.

Swift 2’s availability system addresses both of these problems. It’s a
conspiracy between the compiler, which detects the use of API that’s “too new” for the
code’s current context, and the runtime, which queries the OS version and
conditionally skips over (or enters) chunks of code based on the availability of API.

SDK Levels and Areas of Capability

You can think of a sequence of code as having an “SDK Level” (not an official
term, but it’s how I envision this stuff). This is the version of the SDK that
the programmer was using as their baseline for what API is available.

There is no problem if you use calls that were introduced in, or are older than, this level.
There is no need to guard their usage. If you use calls that were introduced in a newer
version, however, you will need to tell the compiler, “Hey, this chunk of code uses iOS 9
goodies.”

By default, code has an ambient SDK level dictated by the deployment target.
API older than this SDK is ok. API newer than this SDK could cause problems.
The compiler will reject any API usage that’s too new, unless you tell the
compiler to stop complaining.

You do that by using the availability condition, affectionately known as
#available. You specify an OS version in a conditional statement and the
compiler will then raise its idea of the current SDK level until that scope is
exited.

Here’s some new code just written for a project that targets iOS 7.
NSDataAsset is not available in iOS 7:

func fetchKittyDescriptions() -> [String]? {
    let dataAsset = NSDataAsset(name: "Hoover")

    return nil
}

The compiler knows that NSDataAsset was introduced in iOS 9, so it gives you an error:

SDK level error

Notice that this is an error. You must address this. This is default
behavior, and there is no way to opt-out. Otherwise the
compiler refuses to build your app, because it would lead to terrible horrible
death at runtime on iOS 7 targets. Remember that one of Swift’s stated goals is
to make things as safe as possible at compile time.

Xcode’s editor offers a fix-it:

Fix-it menu

Choosing the first option yields code like this:

    func fetchKittyDescriptions() -> [String]? {
        if #available(iOS 9.0, *) {
            let dataAsset = NSDataAsset(name: "Hoover")
        } else {
            // Fallback on earlier versions
        }

        return nil
    }

The #available statement does two things. Firstly, at compile time, it raises the SDK
level for the true portion of the if to iOS 9.0 (from the ambient iOS 7 deployment
target). Any API calls from iOS 9 are legal inside of that set of braces. The
else portion still has the ambient SDK level, which is where you’d put in an
implementation that uses only older API, or punts on the feature entirely.

Secondly, at run time, #available in that conditional is querying the OS version. If
the currently running iOS version is 9 or beyond, control enters the first part of the if statement. If it’s 8
or lower, it jumps to the else part.

By combining this compile-time and run-time check, we’re guaranteed to always
safely use API.

Where You Can Use It

#available works with if statements, while statements, and it also works with guard:

    func fetchKittyDescriptions() -> [String]? {
        guard #available(iOS 9.0, *) else { return nil }
        ...
    }

This raises the SDK level of the function after the guard statement.

Being Proactive

The code above uses #available in a reactive way: “This one time in this
function, I want to temporarily raise my SDK level so I can get my code compiling
again”. This can become obnoxious if you use latest-version API all over the place
in a particular function or class. Use the @available attribute to decorate
individual functions or entire classes to say, “OK world, all this stuff
is based on iOS 9 functionality.”:

@available(iOS 9.0, *)
func fetchKittyDescriptions() -> [String]? {
    let dataAsset = NSDataAsset(name: "Hoover")
    print ("(dataAsset)")
    return nil
}

This marks the entire function as having an SDK level of iOS 9. You can call iOS
9 API in here all day and not have any complaints about using
too-new API. The compiler keeps a record of the SDK level for all the code it
encounters, so if you have some code that’s using the ambient SDK level (iOS 7
in this example) that tries to call fetchKittyDescriptions without an availability guard, you’ll get an error:

Hoist availability

You can raise the SDK level for classes, structs and enums with @available.
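
A quick sketch of raising the level for a whole type (the class is made up): everything inside can use iOS 9 API freely, and any caller still sitting at the ambient SDK level has to guard its use of KittyDataSource itself.

import UIKit

@available(iOS 9.0, *)
class KittyDataSource {
    let asset = NSDataAsset(name: "Hoover")
}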

The Syntax

So, about that syntax inside of the parens:

#available(iOS 9.0, *)

It’s simply a list of applicable platform names (iOS, OS X, watchOS, tvOS and
each of those names suffixed by ApplicationExtension), paired with a version
number.

You can declare a chunk of code that needs the latest of everything:

    func fetchKittyDescriptions() -> [String]? {
        if #available(iOS 9, OSX 10.11, watchOS 2.0, tvOS 9.0, *) {
            let dataAsset = NSDataAsset(name: "Hoover")
            print ("(dataAsset)")
        }
        return nil
    }

So what’s the deal with the star at the end? That’s required. If you leave it
off, the compiler will complain:

Must handle potential future platforms with *

You’ll see “Must handle potential future platforms with *“. That means that code will be
treated as having a particular ambient SDK level if a platform is not explicitly supplied. If fetchKittyDescriptions gets included in an app destined for Apple’s new toasterOS,
the availability of NSDataAsset will be judged against the ambient toasterOS SDK,
which will be the toaster deployment target.

Remember that availability is not #if. This language construct does not control
whether the code will get compiled for toasterOS. If this code is in a toasterOS
project, it will get compiled. What OS version does the compiler use to decide
if NSDataAsset is available? The deployment target.

The star is there to remind anyone reading the code that the list of platforms
listed in the #available conditional is not exhaustive. In the future, there
will be new platforms, and the code you’re looking at might be compiled for
them.

The Ugly

The availability features in the compiler and at run time are nice, and they
work well. Be vigilant when using Xcode’s fix-its, though. In the writing of
this, I witnessed two different cases of code corruption. The first left a copy
of the iOS 9 calls outside of the if #available check:

Xcode corruption

Aside from the bad indentation, I got another availability
error. “But… but… you just fixed that!”

The second actually deleted legitimate code. Notice that the return statement
and closing brace are deleted outright, with no undo.

Deleted code

I can’t trust tools that will destroy code like this, so I manually add
#available and @available statements when needed. The compiler error tells you
what version to put in to the availability check.

But You Told Me Earlier…

When the new availability model was announced this year, many old-timers (myself
included) immediately shouted at the video stream, “But you’ve been telling us
for FIFTEEN YEARS never to do OS version comparisons!”

And that’s true. The guidance in Objective-C land had been to explicitly test
for availability of newer features before you used them.

For a number of reasons, I never liked that model. There is no unified
mechanism for checking for feature availability. There are a bunch of
different things you had to do depending on the type of API feature you
wanted to check for, such as whether a particular class exists
(thanks to weak linking, it’ll be nil in Objective-C if it doesn’t exist),
whether an object responds to a
particular selector, or whether a particular function pointer is non-NULL.

It’s also tedious and bug-prone because Xcode doesn’t have tooling in Objective-C
to tell us we
are potentially using new API on older systems. We’d unknowingly use API that
was too new, and then blow up at run-time on older devices.

There was also the possibility of false positives. What if the class does exist
and you’re not supposed to use it yet, like what happened with UIGestureRecognizer? If a constant is pointing to an object, what might it mean for it to be NULL? What does it mean for a global float constant or enum value to not exist, given that
the compiler can inline it if it knows the value?

Wrap Up

The take-away points: #available is not #if—it is not conditional compilation
based on OS platform. Instead, at compile time, #available tells the compiler
to temporarily raise the SDK level above your deployment target so that you can
safely use newer API. At run time, the code will execute depending on the version
of the OS the app is running on.

The post Hi! I’m #available! appeared first on Big Nerd Ranch.
