Jay Hayes - Big Nerd Ranch

Integrated Testing with React Native, Part 2: Minimize Coupling
https://bignerdranch.com/blog/integrated-testing-with-react-native-part-2-minimize-coupling/
Thu, 28 Sep 2017

Existing tools for testing React Native apps are deeply coupled to a component's rendered structure. By using the object traversal generator function from the last post, you can write tests that focus on content and stand up to changes that don't affect behavior.

In the last post, I promised to tell you how breadth-first object traversal comes into play when testing React Native apps. Today is the day!

As it turns out, unit testing React Native code is really no different from testing any other JavaScript, especially with stateless, functional components. However, unit testing can only get you so far. Inevitably you need to confirm the code works in integration. Unfortunately, the options we have tend to feel a little heavy-handed: snapshot testing and computer-driven UI tests. Snapshot tests are coupled very tightly to the entire structure of a rendered component and thus tend to be brittle. UI tests are difficult to set up and run terribly slowly.

Another option is using a tool like Enzyme to make assertions about the VDOM output by the renderer. Unfortunately, Enzyme only supports shallow rendering with React Native. It would be great to have something resembling full DOM rendering for React Native!

What Is In a DOM?

Enzyme, Jest, and the like use React’s test renderer to produce an in-memory, rendered DOM as a deeply nested, often recursive JavaScript object tree. One such tree might resemble this:

{
  type: 'View',
  props: [Object],
  children: [
    { type: 'Text', props: [Object], children: ['Just content.'] },
  ]
}

Thankfully, as we found out in the last post, such a data structure is a breeze to traverse!

The Tests

Take for example a component that has loading and error states. When data is available it displays each bit as an item in a list. Start by outlining your test cases:

  • When data is loading, I see the message “Loading”.
  • When an error occurs, I see the error message.
  • When data is loaded, I see each item’s title on screen.

Using visit from the last post, translate these cases into tests:

import React from 'react'
import renderer from 'react-test-renderer'

import Widgets from './widgets'
import { visit } from './visit'

it('displays message when data is loading', () => {
  let data = { loading: true }
  let component = renderer.create(<Widgets data={data} />)
  let tree = component.toJSON()

  expect(content(tree)).toContain('Loading')
})

it('displays message when an error occurs', () => {
  let data = { error: { message: 'It broke.' } }
  let component = renderer.create(<Widgets data={data} />)
  let tree = component.toJSON()

  expect(content(tree)).toContain('It broke.')
})

it('displays item titles when data is loaded', () => {
  let data = { loading: false, items: [{ title: 'first'}, { title: 'second' }] }
  let component = renderer.create(<Widgets data={data} />)
  let tree = component.toJSON()

  let text = content(tree)
  expect(text).toContain('first')
  expect(text).toContain('second')
})

function content(tree) {
  let nodes = visit(tree)
  let content = []
  for (let node of nodes) {
    if (node && node.type === 'Text' && node.children) {
      content.push(node.children.join())
    }
  }
  return content.join()
}

These tests are written to be very loosely coupled to the structure of the component. All that matters is that the tested content appears somewhere in the component. This is done by looking for all Text nodes and testing that the expected content is contained within them.

The Component

To make these tests pass, you can write a relatively simple component.

import React from 'react'
import { Text } from 'react-native'

let Widgets = ({ data: { loading, error, items } }) => {
  if (loading) {
    return <Text>Loading...</Text>
  }

  if (error) {
    return <Text>{error.message}</Text>
  }

  return (
    <Text>{items.map(i => i.title).join()}</Text>
  )
}

export default Widgets

This component is simple, if not naive. However, from the tests’ perspective it doesn’t really matter how it’s built, so long as the little bits of text are seen. For example, the same component could be completely rewritten in a functional style:

import React from 'react'
import { Text } from 'react-native'
import { branch, compose, renderComponent } from 'recompose'

let Loading = () =>
  <Text>Loading...</Text>

let WithLoader = branch(
  ({ data: { loading } }) => loading,
  renderComponent(Loading),
)

let Error = ({ data: { error } }) =>
  <Text>{error.message}</Text>

let HandleError = branch(
  ({ data: { error } }) => error,
  renderComponent(Error),
)

let Widgets = ({ data: { items } }) =>
  <Text>{items.map(i => i.title).join()}</Text>

let enhance = compose(
  WithLoader,
  HandleError,
)

export default enhance(Widgets)

Rendered, the component looks exactly the same as before.

The tests still pass! Let’s take things just a little bit further to really drive home the point. Update the Widgets component to use fancy, scrollable lists from NativeBase.

 import React from 'react'
 import { Text } from 'react-native'
+import { Container, Header, Body, Content, List, ListItem } from 'native-base'
 import { branch, compose, renderComponent } from 'recompose'

 let Loading = () =>
   <Text>Loading...</Text>

 let WithLoader = branch(
   ({ data: { loading } }) => loading,
   renderComponent(Loading),
 )

 let Error = ({ data: { error } }) =>
   <Text>{error.message}</Text>

 let HandleError = branch(
   ({ data: { error } }) => error,
   renderComponent(Error),
 )

 let Widgets = ({ data: { items } }) =>
-  <Text>{items.map(i => i.title).join()}</Text>
+  <Container>
+    <Header>
+      <Body>
+        <Text>Items</Text>
+      </Body>
+    </Header>
+    <Content>
+      <List dataArray={items}
+        renderRow={item =>
+          <ListItem>
+            <Text>{item.title}</Text>
+          </ListItem>
+        }>
+      </List>
+    </Content>
+  </Container>

 let enhance = compose(
   WithLoader,
   HandleError,
 )

 export default enhance(Widgets)

Look how much better it looks!

Not only does it look better, but the tests still pass without any changes! That is because the tests are written with a hyper-focus on content. The tests are sufficiently decoupled from the component structure so tests only fail when behavior is broken!

Wrapping Up

This style of testing is a powerful mechanism for validating content without writing tests that are frustratingly brittle (read: prone to false failures). However, it probably does not replace the role of snapshots which ensure that a stable component does not suffer regressions once it’s in place.

What testing strategies have you developed for React and React Native?

Integrated Testing with React Native, Part 1: Generator Functions
https://bignerdranch.com/blog/integrated-testing-with-react-native-part-1-generator-functions/
Tue, 05 Sep 2017

In my recent experience, getting started with React Native testing is a little rough since popular tools aren't great for integrated testing. The first step is traversing the object tree of a rendered component. For that, generator functions are of great use!

Love it or hate it, JavaScript is everywhere. Recently, I took another step toward assimilation by attending the Big Nerd Ranch Front-end Essentials bootcamp. And… all biases aside, IT WAS GREAT! I began the week with two goals:

  1. How to even modern CSS?
  2. React Native sounds cool, but how to test?

Thankfully, the answer to my first goal was “stop being lazy, Jay. Learn flexbox.” The second part was a little more subtle, and I expanded upon the ideas that I had learned in class once I returned home. Throughout the next few posts on the topic, I’ll explain my findings.

Object Traversal

In my recent experience, getting started with React Native testing is a little rough. There are good tools for snapshot testing and shallow rendering, but I wanted something capable of deep rendering without the tight coupling of a full snapshot. To start, I began digging into the output of React’s test renderer, which is used for snapshot testing. As it turns out, the result of test rendering is a deeply nested, often circular object tree.

The first hurdle in establishing a strategy for integrated component testing with React Native is picking out particular nodes in deeply nested object trees. The diagram below illustrates this nesting. For example, you might want to assert that certain text is found somewhere in the hierarchy. If you have any functional programming leanings, you may immediately think of this as a recursive problem, and to be honest, it’s quite natural to reason about the problem recursively.

React Native Diagram

Recursion is great, but it can be tricky to bail early on the routine if you’re only interested in finding the first matching node. Not to mention the complicated story with tail-call optimization in JavaScript. Modern JavaScript provides a useful tool for solving our traversal problem: Generator Functions.

Take a moment to theorize the usage of such a generator function:

let dom = {
  type: 'div',
  props: {
    className: 'main',
    children: [
      { type: 'h1', props: { children: 'Welcome to React!' } }
    ]
  }
}

let each = visit(dom) // This is your traversal generator!

for (let node of each) {
  console.log(node)
}
// { type: 'div', props: [Object] }
// 'div'
// { className: 'main', children: [Array] }
// 'main'
// [ [Object] ]
// { type: 'h1', props: [Object] }
// 'h1'
// { children: 'Welcome to React!' }
// 'Welcome to React!'

Since the return value of a generator conforms to the iterable protocol, it can be used with the for..of construct! So how should the visit() function be implemented?

Writing visit()

Here’s your generator function signature. Don’t forget the asterisk!

function* visit(obj) {
  //
}

The data structures encountered in React Native are often very deeply nested (and sometimes circular), so with that foresight it makes sense to implement visit() as a breadth-first search. To avoid recursion stack limits in JavaScript, fall back to good ol’ looping. Initialize a queue with the subject of your search and loop until you’re all out of nodes:

function* visit(obj) {
  let queue = [obj], next
  while(queue.length > 0) {
    next = queue.shift()
    yield next
    // think of the children!
  }
}

Hurray, you have visited the first object, but the algorithm is incomplete. How can you visit each of its children?

To answer this question, consider the types of children that must also be visited: arrays and objects. Take those into account:

function* visit(obj) {
  let queue = [obj], next
  while(queue.length > 0) {
    next = queue.shift()
    yield next
    if (Array.isArray(next)) {
      queue.push(...next)
    } else if(next instanceof Object) {
      queue.push(...Object.values(next))
    }
  }
}

So, if the next value in the queue is an Array, add all its values to the queue. Otherwise, if it’s any sort of object, add all of its enumerable properties to the queue. The spread operator (...) is particularly handy for this use.

That’s it! You can now visit each node in any object graph. However, there are a couple more things to consider.

API

For general use cases, it may not be desirable to require folks to deal with visit() directly. Instead, you might want to expose a more functional interface such as each():

var obj = {
  type: 'div',
  props: {}
}
each(obj, console.log)
// { type: 'div', props: {} }
// 'div'
// {}

Providing such an interface is very straightforward:

function each(obj, fn) {
  let visitor = visit(obj)
  for (let node of visitor) {
    fn(node)
  }
}

More interestingly, we can build on visit() to provide other common functions on collections. Take find() for example, which returns the first match without visiting the entire tree:

function find(obj, match) {
  let each = visit(obj)
  for (let node of each) {
    if (match(node)) {
      return node
    }
  }
}

find({ foo: { bar: 'it me' } }, n => n.bar === 'it me')
// { bar: 'it me' }

You can imagine even more functions implemented in this way. Give map(), count(), and select() a try on your own!
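
If you want a head start, here is a minimal sketch of map() built on visit(); count() and select() follow the same pattern. The example input is hypothetical and not from the post.

function map(obj, fn) {
  let results = []
  for (let node of visit(obj)) {
    results.push(fn(node))
  }
  return results
}

map({ type: 'Text', children: ['hi'] }, node => typeof node)
// [ 'object', 'string', 'object', 'string' ]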

Circular References

There is one more edge case to consider. Circular references are trivially easy to create in JavaScript, and such an object would result in infinite looping with our current implementation of visit():

var obj = {}
obj.self = obj
each(obj, console.log) // THIS IS THE LOOP THAT NEVER ENDS

The issue is easily addressed by ensuring that the exact same node is never visited twice. You can accomplish this by keeping track of which nodes have been seen:

 function* visit(obj) {
-  let queue = [obj], next
+  let queue = [obj], next, seen = new Set()
   while(queue.length > 0) {
     next = queue.shift()
+    if (seen.has(next)) { continue }
+    seen.add(next)
     yield next
     if (Array.isArray(next)) {
       queue.push(...next)
     } else if(next instanceof Object) {
       queue.push(...Object.values(next))
     }
   }
 }

Hurray, no more infinite loops!

But… Testing?

Theory and generalizations are fun and all, but how can we use this practically? I promised to relate this to React Native testing. You’ll have to look out for the next post to see how this all comes together. Stay tuned!

Machines Learn and You Can Too
https://bignerdranch.com/blog/machines-learn-and-you-can-too/
Tue, 27 Dec 2016

It has taken me some time to build up the courage to dig into machine learning. It can be difficult to learn on your own because of the mathematical terminology that is frequently used to teach the concepts. I’m here to give you hope. You can do it.

In my recent dive into machine learning, I’ve found that many of the ideas are actually quite straightforward when you peel back the terminology. Like many topics (looking at you, functional programming), machine learning is often introduced using mathematical theory that can test the knowledge of even the most qualified intellectual. Throughout this post, I might reference some scary math, but only to provide points for further exploration. Stick with it and let me know how it goes. Hopefully you will find it a gentle introduction with useful anchors to more advanced reading.

What is it learning?

In a previous post on neural networks, Nerd Bolot provides a wonderfully simple example well-suited for machine learning. Here it is again:

[…] we can easily write a program that calculates the square footage (area) of the house, given the dimensions and shapes of all its rooms and other spaces, but calculating the value of the house is not something we can put in a formula. A machine learning system, on the other hand, is well suited for such problems. By supplying the known real-world data to the system, such as the market value, size of the house, number of bedrooms, etc., we can train it to be able to predict the price.

As he said, there is no obvious formula for determining the value of a house based on its size. However, plenty of data exists about home sizes and the prices they sold at. Given enough examples, one might be able to detect a pattern that can be used to predict values for previously undocumented home sizes.

This act of programmatically detecting patterns in data is machine learning. In particular, using training data to derive a formula for prediction is known as supervised learning.

Learn by Example

Supervised machine learning uses sample data to find a function that can predict the outcome for a set of arbitrary inputs. For example, you might have data about houses’ size and value. With that data you can find a function that predicts a house value given its size.

Consider a small command-line program that helps you track house size and value data:

$ ruby houses.rb
1. Add Data
2. View Data
3. Predict Value
What would you like to do?

You can add data with option #1 and visualize existing data with option #2.

points

However, option #3 is disappointing:

What would you like to do?
3
Sorry, I haven't yet a brain to think for myself...
$

Time to build a brain.

Line, please

You might have noticed from the graph that the value of the example houses grows linearly as the size increases. Therefore if you were to find a line that slopes similarly to the data, it could be used to predict values for other house sizes. For example:

line

Obviously, drawing the line by hand is subjective and error prone. What you’d prefer is some program that draws this line for you with great precision. It turns out that this is exactly what linear regression does. Don’t sweat the details for linear regression just yet. Next up you’ll see a naive implementation.

Your Tools

Your squishy human brain can look at the graph above and make a pretty accurate guess about the best line to fit the data. This is due to both our visual nature and the simplicity of the data. What if the data were less uniform, or it had higher dimensionality not easily visualized? In order to solve this problem with a computer, you need three tools.

  1. Model: This is the general form of the function that predicts values. Sometimes referred to as the hypothesis, it models the relationship that underlies the sample data. Since you’re trying to find a line, your model will look very similar to the linear equation you learned in pre-algebra: y = mx + b.
  2. Loss Function: This function is used to determine the difference between the predicted value generated by the hypothesis and the known value in the sample data. The goal is to make the loss as close to zero as possible while also keeping the model general enough to apply to data not captured by the sample. This is also sometimes referred to as a cost function.
  3. Minimization Algorithm: This algorithm helps find ideal parameters for your model by testing them with the loss function. Without it, you would have to guess parameters at random, checking each against the loss function until you found something that seemed OK.

Model

Start by defining your model. Remember that you’re trying to find a line that fits the data. That means we’ll define a model that looks very similar to the linear equation you learned in algebra: y = mx + b, where y is the value of the house, m is the slope of the line, x is the size of the house in question, and b is the y-intercept of the line. For simplicity of this example, assume that the y-intercept (the b in y = mx + b) is zero. This just means that we will only be considering lines that pass through the origin.

line_through_origin = ->(slope, x) { slope * x } # y = mx + 0 = mx

Limiting your model to only lines that pass through the origin has another benefit. It limits the machine learning task to only one parameter: the slope.

Loss

With our model defined, the next step is to define a loss function. Recall from the definition above that the purpose of this function is to find the difference between the values predicted by the model and the known values. You know that to find the difference between two values, you subtract them. For example, the difference between five and three is 5 - 3, which is 2. Similarly, to find the overall difference between predicted and known values, take the average of all the individual differences.

Call these differences errors and find their mean:

# assumes data is 2-dimensional array, e.g. [[1,2], [3,4]]
mean_errors = ->(data, model, slope) {
  errors = data.map { |(x, y)| model.(slope, x) - y }
  errors.reduce(:+) / errors.count.to_f # integer division bad!
}

An aside about this loss function: Keen readers may have caught a problem with it. Since the loss can be negative, certain errors may effectively cancel one another out, e.g. with a slope of 1, the sample points (1, 2) and (2, 1) produce errors of -1 and 1. Robust implementations address this by avoiding negative errors. See mean squared error. However, the complexity this introduces to the minimization algorithm justifies tolerating the issue for the sake of illustration.
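
For reference, a mean squared error version is a small change to the loss above. This sketch is illustrative only; the minimizer below relies on the sign of the loss, so it is not a drop-in replacement.

# Square each error so that negative and positive errors cannot cancel out.
mean_squared_error = ->(data, model, slope) {
  errors = data.map { |(x, y)| model.(slope, x) - y }
  errors.map { |e| e**2 }.reduce(:+) / errors.count.to_f
}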

Minimization

The final piece to the program is the minimization algorithm. The loss function is only able to report on the loss for some given model. This is useful for testing values, but in order to solve the problem you must find the loss closest to zero.

A straightforward minimization can be found by iteratively closing in on zero loss. From a high level, the algorithm goes something like this:

  • Start with some initial slope to test, say 0.0
  • Iterate some number of times
    • Find the loss of the current slope
    • If it’s positive decrease the slope a little
    • If it’s negative increase the slope a little
    • Repeat

It’s important that each iteration takes smaller and smaller steps in order to zero in on the best slope.

minimize = ->(data, learning_rate: 0.2, iterations: 100) {
  (1..iterations).reduce(0.0) do |slope, iteration|
    loss = mean_errors.(data, line_through_origin, slope)
    direction = loss < 0 ? 1 : -1
    step = direction * learning_rate / iteration.to_f
    slope += step
  end
}

The learning_rate is a value that helps the algorithm determine what size step to take. If this value is too high, the function will overshoot the ideal slope repeatedly. You may be wondering how to pick a learning rate and the number of iterations. The full answer is beyond the scope of this article, but in general too large/small learning rates can make it hard to zero in on a minimum. Iterations represent how long you’re willing to let the algorithm work. This has a lot to do with the rate at which it learns.

It’s alive!

Wire it all up and plug in some data. See how it performs on the above data set:

# line_through_origin & mean_errors defined above

# The data is pairs of house sizes and their values.
data = [
  [3, 10],
  [15, 4],
  [30,35],
  [10,12],
  [7, 5 ],
  [35,30],
  [25,25],
  [20,23],
  [30,27],
  [17,13],
  [15,20],
]

slope = minimize.(data)
# => 0.985...
prediction = line_through_origin.(slope, 40)
# => 39.419.., i.e. a 4000 sq. ft. house is valued at about $394k

It is also very interesting to observe how the algorithm learns. Here is an animation of the minimization algorithm zeroing in on the ideal slope. As you can see, after a few iterations it finds a line that fits the data pretty well.

learning

What if you tweak the learning rate and iterations? With a rate of 2 and 250 iterations, you will get something like:

learning jitter

This result displays a lot more jitter as the minimization function is repeatedly overshooting the slope as it zeros in.

Predicting House Values

With the algorithm implemented, you can now revisit your original goal: predicting house values for square footage not in the original data.

$ ruby houses.rb
1. Add Data
2. View Data
3. Predict Value
What would you like to do?
3
Size (in 100s sq. ft.):
22.5
Such a house is worth about $222k

Looks like a 2250 sq. ft. house will run about $222k. Who knew? See this program in its entirety in this gist.

Warning!

I want to give you a fair word of warning in closing. The purpose of this post is to provide an easy-to-understand overview of linear regression and machine learning in general. It is not meant to provide production-ready code! The above aside mentions that the loss function and minimization algorithm used in the example, while simple to understand, do not stand up to real-world problems. For loss, you might check out mean squared error, and for minimization, gradient descent works well.
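
To give you a concrete starting point, here is a minimal gradient descent sketch for the same one-parameter model. It is an illustration rather than production-ready code: it minimizes mean squared error, and the learning rate and iteration count are only reasonable guesses for the sample data above.

# The derivative of mean squared error with respect to the slope m is
# (2/n) * sum((m*x - y) * x). Each iteration steps against that gradient.
gradient_descent = ->(data, learning_rate: 0.001, iterations: 100) {
  (1..iterations).reduce(0.0) do |slope, _iteration|
    gradient = data.map { |(x, y)| (slope * x - y) * x }.reduce(:+) * 2.0 / data.count
    slope - learning_rate * gradient
  end
}

gradient_descent.(data)
# => a slope close to the one found by the naive minimizer above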

I hope you found this post interesting! I would love to hear about your experiences with machine learning and other mathy good times in your programs!

Composing Elixir Functions
https://bignerdranch.com/blog/composing-elixir-functions/
Thu, 15 Dec 2016

Function composition is a technique used to build complex functions out of simpler ones. Elixir does not provide a mechanism for composing two functions into a new function. Let’s play with that a little and see what can be done.

An Example

Say using Elixir you want a function that appends an element to a list. An efficient way to do this is by first reversing the list, then prepending the element, and finally reversing the list again.

Kick things off by manually implementing append in terms of reversal and prepending:

iex> append = fn list, item ->
  list = Enum.reverse(list)
  list = [item|list]
  Enum.reverse(list)
end
#Function<...>
iex> append.([1,2,3], 4)
[1, 2, 3, 4]

This certainly gets the job done, but perhaps it could be even more to the point. The main problem is the implementation shifts the focus from the operations to state management. The list variable is introduced and repeatedly referenced (5 times!) during the execution of the function. As Josh points out in his decomposition blog, Elixir provides a mechanism for composing function applications with its pipe |> operator. Refactor it with pipes:

iex> prepend = &[&2|&1]
#Function<...>
iex> append = fn list, item ->
  list
  |> Enum.reverse
  |> prepend.(item)
  |> Enum.reverse
end
#Function<...>
iex> append.([1,2,3], 4)
[1, 2, 3, 4]

That does seem quite a bit better (idiomatic, even)! You no longer repeatedly refer to list, and the implementation is more clearly composed of prepend/2 between calls to reverse/1. Even so, the state management is still present in the form of arguments. Is there an implementation that’s even more clearly composed of the operations? What if you could compose the existing functions together into a new append function?

Elixir does not provide such an operation, but imagine a custom infix operator, <|>, for composing functions:

iex> append = reverse <|> prepend <|> reverse
#Function<...>

Such an append function would, in effect, capture the expression reverse(prepend(reverse(list), item)), requiring the arguments list and item in that order.

Compose

To arrive at the final implementation, start by implementing a compose/2 function using recursion. First define the base case:

defmodule Compose do
  def compose(f, arg) do
    f.(arg)
  end
end

This base case effectively applies the arg to the function f. For example:

iex> double = fn n -> n*2 end
#Function<...>
iex> Compose.compose(double, 4)
8

Next add an implementation of compose/2 that recurses when the second argument is a function:

defmodule Compose do
  def compose(f, g) when is_function(g) do
    fn arg -> compose(f, g.(arg)) end
  end

  def compose(f, arg) do
    f.(arg)
  end
end

This version of compose/2 returns a function that applies its argument to g and then composes the result with f. However, at this point the implementation works only to compose functions that accept a single argument, i.e. having an arity of 1:

iex> reverse_sort = Compose.compose(&Enum.reverse/1, &Enum.sort/1)
#Function<...>
iex> reverse_sort.([3,1,2])
[3, 2, 1]

It does not work for functions requiring many arguments (N-arity):

iex> reverse_prepend = Compose.compose(&[&2|&1], &Enum.reverse/1)
#Function<...>
iex> reverse_prepend.([1,2,3])
** (BadArityError)

The error happens because compose/2 only ever applies one argument, i.e. .(arg). Elixir is strict about the arity of functions. For compose/2 to work with N-arity functions, you need some way to apply a variable number of arguments.

Arguments

A solution to this problem is changing how N-arity functions are applied. Since there is no way to anticipate how many arguments will be needed, you can instead rearrange the function to support multiple single-argument applications until all have been provided. This is a common functional programming technique known as currying.

Currying converts a function of arity N into a function of 1-arity that when applied N times produces the result. Consider this example of manual currying with nested functions:

iex> add = fn a -> fn b -> a+b end end
#Function<...>
iex> add_one = add.(1)
#Function<...>
iex> add_one.(2)
3

The function add is defined to return another function. The result of applying the value 1 to add returns a function that accepts the second argument for the addition. This idea of applying only part of what a function needs to return a result is called partial application. The partial application of add with 1 is a function that “adds one”. Once all the arguments have been applied, the result is returned:

iex> add.(2)
#Function<...> # a function that "adds 2"
iex> add.(2).(3)
5

Notably, this mechanism is not built into Elixir, but there are packages that add the behavior as well as a great post on currying in Elixir. For the remainder of this post, it’s assumed you have a module Curry that includes a function curry/1 such that you can:

iex> add = Curry.curry(fn a,b -> a+b end)
#Function<...>
iex> add.(2).(5)
7
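
If you’d rather not reach for a package, a minimal Curry sketch that is sufficient for the examples in this post might look like the following. This is an assumption of mine, not necessarily the implementation behind the packages or post mentioned above:

defmodule Curry do
  # Wrap an N-arity function in N nested 1-arity functions.
  def curry(fun) do
    {:arity, arity} = :erlang.fun_info(fun, :arity)
    curry(fun, arity, [])
  end

  # All arguments collected: apply them in the order they were given.
  defp curry(fun, 0, args), do: apply(fun, Enum.reverse(args))

  # Still waiting on arguments: return a 1-arity function that collects the next one.
  defp curry(fun, arity, args) do
    fn arg -> curry(fun, arity - 1, [arg | args]) end
  end
end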

Finishing compose/2

As you have learned, currying is a solution to your variable argument problem. It also happens to fit really well into the recursion that is already set up in compose/2. Update the implementation to curry both functions passed in:

 defmodule Compose do
+  import Curry
+
   def compose(f, g) when is_function(g) do
-    fn arg -> compose(f, g.(arg)) end
+    fn arg -> compose(curry(f), curry(g).(arg)) end
   end

   def compose(f, arg) do
     f.(arg)
   end
 end

It might be surprising to realize this is the only change needed to complete the implementation! The recursive definition of compose/2 applies arguments one at a time to the composed function until a result is found, and then applies that result to the outer function in the base case. See if it fixed your arity error:

iex> reverse_prepend = compose(&[&2|&1], &Enum.reverse/1)
#Function<...>
iex> reverse_prepend.([1,2,3]).(4)
[4, 3, 2, 1]

Nice! Notice that each successive argument is a partial application to the underlying curried functions. The really cool thing is that the order of the arguments called on reverse_prepend matches the order of respective arguments to each composed function; the list first, then the prepended item.

Custom Operator

For convenience, complete your implementation of Compose with a custom infix composition operator, <|>:

defmodule Compose do
  import Curry
  def f <|> g, do: compose(f, g)
  ...
end

That’s it! Now by importing Compose, functions may be composed together with <|>:

iex> import Compose
Compose
iex> square = fn n -> n*n end
#Function<...>
iex> fourth = square <|> square
#Function<...>
iex> fourth.(2)
16

append

Tie up this experiment by returning to the original task. Recall, you want to define a function append in terms of reverse and prepend. The implementation with function composition is now purely expressed as operations:

iex> reverse = &Enum.reverse/1
#Function<...>
iex> prepend = &[&2|&1]
#Function<...>
iex> append = reverse <|> prepend <|> reverse
#Function<...>
iex> append.([1,2,3]).(4)
[1, 2, 3, 4]

Mathy Conclusion

One important note is that the Elixir implementation demonstrated here is not pure function composition in the mathematical sense. Function composition requires that the output of one function match the input expected by the next in order for the two to be compatible. No such restrictions exist in this implementation. In fact it displays the interesting property of trickling arguments down in order until each composed function is fully applied. Elixir is a dynamically typed language, and as such it allows a lot of flexibility in how functions are defined and applied. Have fun!

Catching Strong Params Problems Early
https://bignerdranch.com/blog/catching-strong-params-problems-early/
Wed, 14 Sep 2016

TL;DR: Use the ActionController::Parameters.action_on_unpermitted_parameters configuration to control the behavior of Rails’ strong parameters.

Ruby on Rails developers, here’s a scenario:

You’re adding an attribute to a model in your application. You’ve done it all right:

  • Written an acceptance test for your feature
  • Created a migration to update the database schema
  • Added new form input for the data in your view

However, when you run the test suite, the test fails.
But how can that be? You’re an experienced Rails developer. This is basic stuff!

The test output isn’t very helpful either:

Expected page to contain [new attribute data], but it didn’t.

After double-checking your test, the last resort is firing up the application and trying it out in the browser yourself. Confirmed, the form is set up correctly, elements are named as they should be. You fill it out and submit it. It all works, but why isn’t the model being updated?! Time passes, you’ve debugged here and pry’d there. Then it suddenly hits you.
The new parameter was not permitted with strong params!

Khan

Has this ever tripped you up?
It gets me all the time, and I teach this stuff.
If only there was a way to have Rails raise an error when an unexpected parameter is filtered out by strong params…
Actually, there totally is!

Configuration

Rails provides a small bit of configuration to control the behavior of strong params for unexpected parameters.
ActionController::Parameters.action_on_unpermitted_parameters.
The default behavior is to :log the occurrence and move along.
However, another option is to :raise an error.
Perfect.

Take a moment to experiment with this option.

irb> ActionController::Parameters.new(name: "Jay", age: 29).permit(:name).to_h
=> {"name"=>"Jay"}
irb> ActionController::Parameters.action_on_unpermitted_parameters = :raise
=> :raise
irb> ActionController::Parameters.new(name: "Jay", age: 29).permit(:name).to_h
ActionController::UnpermittedParameters: found unpermitted parameter: age

That is exactly the behavior you want from your app! Create a configuration file for strong params.

# config/initializers/strong_params.rb
if Rails.env.test?
  ActionController::Parameters.action_on_unpermitted_parameters = :raise
end

With that configuration loaded, Rails will now raise an exception in your test environment any time an unpermitted parameter is encountered.

A timely error reminds you that you must permit parameters, rather than wasting time realizing that parameters are being quietly stripped from the request.
For more information see the documentation.

Back to Work

Does this tip resonate with you? Do you have an alternative method of avoiding this common pitfall while building apps? Let us know what you think!

Testing External Dependencies with Fakes
https://bignerdranch.com/blog/testing-external-dependencies-with-fakes/
Tue, 26 Jul 2016

Communication with remote services is often an inevitable part of writing interesting software. Difficult problems (e.g., address validation) are easily solved by integrating with third-party solutions.

Unfortunately, relying on remote services complicates the goal of writing automated tests for your application. Without decoupling your code from these external factors, your test suite grows continually slower each time an example communicates with the outside. Additionally, services may or may not be available during the run, causing test failures unrelated to your code.

An effective strategy to address this problem is to draw a line at the boundary between your code and the service. At that boundary, replace the external integration with a stand-in, a fake.

A Testing Strategy

To get familiar with this approach, consider an example. Say you want to add a feature to your application that fetches page titles using the Open Graph protocol. After some research, you settle on a small library that gets the job done.

OpenGraph.new("https://nerdranchighq.wpengine.com").title
# time passes as request is made and response is processed...
# => "Big Nerd Ranch - App Development, Training, & Programming Guides"

Wrap It

Justin Searls recently wrote an article that clarifies a phrase you often see: “Don’t mock what you don’t own.” As he points out, the purpose of that phrase is to encourage test writers to wrap external dependencies in an application-owned adapter.

The benefit is two-part:

  • It establishes a consistent interface for accessing the behavior you need.
  • It serves “as a specification of what you’re using in that dependency.”

Go ahead and introduce a trivial adapter for your Open Graph integration.

class WebPage
  def initialize(url)
    @open_graph = OpenGraph.new(url)
  end

  def title
    open_graph.title
  end

  private

  attr_reader :open_graph
end

This step may seem unnecessary, but there is value in establishing a consistent interface in your application. The benefit is quickly felt if you later decide to use a different Open Graph library, or the initialization of the library you’re using is awkward. With your adapter’s interface established, you’re ready to create a test fake.

Note: It is important to write tests that verify your adapter correctly integrates with the library it wraps. However, those tests feel very off-subject for this post. I’ve written a pull request to demonstrate how one might test the adapter.
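
As a rough idea of what such a test might look like (this is a sketch, not the code from that pull request, and it assumes RSpec with VCR already configured to record HTTP interactions; the cassette name is made up):

# spec/models/web_page_spec.rb
RSpec.describe WebPage do
  it "fetches the page title via Open Graph", vcr: { cassette_name: "bnr_home" } do
    page = WebPage.new("https://www.bignerdranch.com")

    expect(page.title).to include("Big Nerd Ranch")
  end
end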

Fake It

To avoid the pitfalls of external influence on your tests, implement a fake that is a duck type for WebPage. The fake should be simple, but as Martin Fowler says, it must have a working implementation to support dependent code.

Consider this implementation that returns a URL’s hostname as its title:

require "uri"

class FakeWebPage
  attr_reader :url

  def initialize(url)
    @url = url
  end

  def title
    host
  end

  private

  def host
    uri.host
  end

  def uri
    URI(url)
  end
end

This stand-in provides an alternative strategy for determining the page title without having to make a web request. Try it out for yourself, and see that it’s good enough.

FakeWebPage.new("http://some_web_page.com").title
# => "some_web_page.com"

It seems to get the job done. Now you can configure your application to use the test fake when running tests.
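
One simple way to wire that up is to resolve the class from configuration and override it in the test environment. This is only a sketch of the idea; the configuration key is mine, and the linked pull request may wire things up differently.

# config/application.rb
config.x.web_page_class = "WebPage"

# config/environments/test.rb
config.x.web_page_class = "FakeWebPage"

# anywhere a page is needed, look the class up from configuration
page_class = Rails.configuration.x.web_page_class.constantize
page_class.new(url).title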

Here’s a pull request that shows a similar implementation in the context of an app.

Confidence in the Implementation

No real integration happens in test. How can you be sure it really works? That’s a genuine concern.

The library code (the bit performing external communication) must itself have integration tests written that verify its own external behavior. When using a third-party library, this is often a responsibility that maintainers take seriously. If you have written the integration (or you don’t trust theirs), test the integration directly in isolation. You might use a tool like VCR to record and play back web requests for tests.

For critical integrations, you may even want to allow external communication by an isolated portion of your tests that is only run in certain environments. This provides the absolute confidence that the real integration works. However, realize that it comes at the cost of speed and reliability (e.g. external service may go down).
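
With RSpec, one way to carve out that isolated portion is to tag the examples and exclude the tag by default. The tag name and environment variable here are assumptions, just to sketch the idea:

# spec/spec_helper.rb
RSpec.configure do |config|
  config.filter_run_excluding :external unless ENV["RUN_EXTERNAL_TESTS"]
end

# spec/integration/web_page_spec.rb
RSpec.describe WebPage, :external do
  it "fetches a live page title" do
    expect(WebPage.new("https://www.bignerdranch.com").title).to be_a(String)
  end
end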

Responsibility

Fundamentally, the struggle of how to test external integrations is one of responsibility. Is it really your application code that should bear the task of maintaining this integration? No. That burden falls to library code. It might be that the library is born out of your application’s codebase (see Rails’ ./lib directory). But the library itself is not concerned with the domain of your application, e.g. selling widgets. Conversely, the application should not be concerned with the domain of the library, e.g. fetching and parsing Open Graph metadata. These distinctions become easier to see when library code is extracted as a dependency from your application.

You might say that an application should be built solely out of domain-specific code, libraries and configuration.

Summary

  • Create thin wrappers to establish consistent, app-owned interfaces for faking.
  • Replace these adapters with fakes to avoid external dependencies in your tests.
  • Enjoy speedy tests that are unaffected by remote services you do not control.

Getting Started with Elixir Metaprogramming
https://bignerdranch.com/blog/getting-started-with-elixir-metaprogramming/
Wed, 18 May 2016

After spending a little time with Elixir, you might have found out its secret. Elixir embraces metaprogramming. In fact most of Elixir is written in Elixir.

Let that sink in.

Elixir is Elixir

Even if you exclude stdlib and tests, the majority of Elixir—some 75% of it—is Elixir. What is this magic?

Elixir is Macros

Mostly, it’s macros—major core features of Elixir are implemented with macros.

But what is a macro?

macro: a single instruction that expands automatically into a set of instructions to perform a particular task
— Dictionary.app

That’s exactly it.
Elixir uses macros to provide interfaces for expanding complex sets of instructions during compilation. For example, the if construct in Elixir is a macro. It expands to a case statement, and it exists to make your Elixir code easier to read. So an if statement expands to a case statement similar to the following:

if worked? do
  IO.puts("It worked!")
end

case worked? do
  x when x in [false, nil] -> nil
  _ -> IO.puts("It worked!")
end

The compiled result of using the if macro is exactly the same as writing the case itself. In practice, macro implementations end up being way more complex than that, but it’s all just expanding statements. No magic.

Quoted Expressions

When programs are executed, expressions are often converted into abstract syntax trees (AST) for evaluation.
Elixir is no exception. In fact, you can access these structures yourself by using Elixir’s quote function.

You might think of quote as being similar to eval in other languages like Ruby. However, eval takes a string of code that is evaluated at runtime. This might lead to confusing, bug-ridden code or significant security concerns (remote code execution). Quoted expressions, on the other hand, are compiled so you still have the convenience of building code dynamically without the concern of runtime issues.

Say you want to build an expression that calls a function foo/1 with the argument :bar:

expr = quote do
  foo(:bar)
end

IO.inspect(expr)
# {:foo, [], [:bar]}

The resulting 3-element tuple is an AST. Turns out these tuples are the building blocks of Elixir. Each position in the tuple has a purpose. They are often thought of as {form, meta, args}. form is an atom representing the name of the function being called in the expression. meta is used for context, e.g. imported modules (see below). args are the arguments to the function. Complex Elixir statements are accomplished by combining these quoted expressions:

expr = quote do
  1 + 3 + 5
end

IO.inspect(expr)
# {:+, [context: Elixir, import: Kernel], [{:+, [context: Elixir, import: Kernel], [1, 3]}, 5]}

This expression includes metadata ([context: Elixir, import: Kernel]). In this case it’s used to inform its reader where to find the + function. If you were to manually evaluate this expression, it would go something like this:

  • The outer expression states: call + with arguments [{:+, ...}, 5].
  • The first argument hasn’t been evaluated, so it must be evaluated itself by calling + with arguments [1, 3], which results in 4.
  • Finally, the outer expression can be evaluated, calling + with arguments [4, 5], which results in 9.

Traversing Expressions

Complex quoted expressions are structured as deeply nested trees of nodes. Elixir provides a mechanism for traversing these ASTs with Macro.traverse/4:

pre_traversal = fn node, acc ->
  IO.puts("before: #{inspect(node)}")
  {node, acc}
end
post_traversal = fn node, acc ->
  IO.puts("after: #{inspect(node)}")
  {node, acc}
end

expr = quote do
  "foo"
end
IO.inspect(expr)
# "foo"

Macro.traverse(expr, nil, pre_traversal, post_traversal)
# before: "foo"
# after: "foo"

As you can see, before and after each node is traversed, the respective function is called. These “pre” and “post” functions accept two arguments: the “node” in the expression and the “accumulator” (more on that shortly). Additionally, they must return a tuple of the node and accumulator. These functions can be used to gather information or make changes to the expression as it is traversed.

You might be wondering about the second argument to Macro.traverse/4. This argument is an “accumulator” that is passed into the function called at each node. Use the accumulator to count the number of sub-expressions in a quoted expression. For your convenience, Elixir provides the shortcut functions Macro.prewalk/3 and Macro.postwalk/3, which call a function only before or only after each node is traversed, respectively:

counter = fn node, acc -> {node, acc+1} end

expr = quote do
  foo(:bar)
end
IO.inspect(expr)
# {:foo, [], [:bar]}

{_expr, count} = Macro.prewalk(expr, 0, counter)
count
# => 2, the literal :bar and the function call foo/1

Despite being called “accumulator”, this value may not only be used to gather information. Sometimes it is used to inject information…

Warning: You are approaching metaprogramming. Do not be afraid.

Metaprogramming

As you might have concluded from what you’ve seen so far, metaprogramming is fundamental to the implementation of Elixir. Metaprogramming in Elixir is all about manipulating quoted expressions.

One of the most basic examples of Elixir metaprogramming is transforming a quoted expression. In this contrived example, a typo is fixed in the expression:

expr = quote do
  langth([1,2,3])
end

IO.inspect(expr)
# => {:langth, [], [[1, 2, 3]]}

Code.eval_quoted(expr)
# (CompileError) undefined function langth/1

expr = put_elem(expr, 0, :length)
# => {:length, [], [[1, 2, 3]]}

Code.eval_quoted(expr)
# => {3, []}

Armed with your knowledge of Macro.prewalk/3, you could traverse the expression and fix all the typos. Since you don’t need the accumulator, take advantage of the simpler Macro.prewalk/2:

expr = quote do
  langth([1,2]) + langth([3,4])
end

fix_langth = fn
  {:langth, meta, args} -> {:length, meta, args}
  node -> node
end

fixed_expr = Macro.prewalk(expr, fix_langth)
Code.eval_quoted(fixed_expr)
# => {4, []}

Look at you! Writing Elixir with Elixir. :blush:

A Practical Example

I’ve spent some time recently working on koans for Elixir. Projects like this are used for learning programming languages. In general, they are examples that contain missing pieces to be filled in by the learner. The body of a koan might look like this:

assert ___ + ___ == 3

In order to progress to the next lesson, the user must replace the blank (___) with the value that makes the test pass. This works well for learners, but as a project author, it is desirable to know that, given the right answers, the koans pass without having to repeatedly solve the lessons yourself. Using what you know about Elixir metaprogramming, answers can be injected into these expressions before they are evaluated. Give it a shot!

koan = quote do
  ___ + ___ == 3
end

replace_blank = fn
  {:___, _meta, _args}, [answer|rest] -> {answer, rest}
  node, acc -> {node, acc}
end

answers = [1, 2]
{answered_koan, []} = Macro.prewalk(koan, answers, replace_blank)
{result, _bindings} = Code.eval_quoted(answered_koan)
result
# => true, because 1 + 2 == 3

This implementation traverses the expression with a list of values to substitute for blanks. Each time a blank is encountered, the expression is replaced with the head of the accumulator list. The accumulator is being used as a queue. As long as the answers are in the correct order, the code is updated at compile time and the expected result is returned!

In fact, recently I had the hilarious realization that I accidentally implemented Macro.prewalk/3 to solve this very problem.

If you’re interested in seeing the above examples in code, check them out on GitHub.

An Adventure in Hacking Arduino Firmata with NeoPixels
https://bignerdranch.com/blog/an-adventure-in-hacking-arduino-firmata-with-neopixels/
Wed, 01 Apr 2015

Learn ALL the things! That’s basically the motto at Big Nerd Ranch. And in my last post, I wrote about how my team, The Artists Formally Known As (╯°□°)╯︵ ɥsɐןɔ, learned a lot of new things when we tackled hardware hacking with Arduino, NeoPixels and Artoo.

Arduino is a great platform for beginning to learn about hardware. If you’re into Ruby, you
might check out Artoo, a robotics framework supporting a myriad of
platforms, including Arduino. During our Clash project, my team and I wanted to use Artoo
with NeoPixels, but there was no integration.

So we fixed it!

learn all the things

artoo-neopixel

We packaged our integration between NeoPixels and Artoo into a rubygem
for your hacking convenience.

Firmata

First, you need to prepare the Arduino by uploading our custom Firmata. This extends the Standard Firmata to add protocols for setting up and
communicating with NeoPixels. To get started:

  • Open the Arduino IDE.
  • Copy the custom Firmata from Github.
  • Upload it to your device.

arduino ide

If you need more help with setting up the Arduino, check out their guides.

RubyGem

Next, install our gem. This extends Artoo with support for NeoPixel LED strips
and matrices.

$ gem install artoo-neopixel

Blinky Lights

Now you just need to write something magical. Here’s an example script for a
NeoPixel 40 RGB LED Matrix that will light up your room:

# example.rb
require "artoo"
require "artoo-neopixel"

# Update the below port with your device's port
ARDUINO_PORT = "/dev/cu.usbmodem1411"

connection :arduino, adaptor: :firmata, port: ARDUINO_PORT

MATRIX_WIDTH = 5
MATRIX_HEIGHT = 8

device(
  :matrix,
  driver: :neomatrix,
  pin: 6,
  width: MATRIX_WIDTH,
  height: MATRIX_HEIGHT,
)

work do
  # You should see a bunch of blinky, beautiful lights! WOWOW
  loop do
    # Generate some random coordinates
    x = (MATRIX_WIDTH * rand).round
    y = (MATRIX_HEIGHT * rand).round

    # Generate some random RGB values between 0 and 100
    red = (100 * rand).round
    green = (100 * rand).round
    blue = (100 * rand).round

    matrix.on(x, y, red, green, blue)

    # the matrix sometimes need a little time to keep up
    sleep 0.01
  end
end

Run it and enjoy!

$ ruby example.rb

If you’ve played with Artoo at all, this will be completely familiar to you.
Either way, there isn’t too much going on here, but the resulting strobe of
blinky colors is quite satisfying!

Hack on!

We’re thankful to be able to open source this work. If you catch any gaping
memory leaks or anything at all, feel free to open an issue.

What will you make? Show us in the comments!

Hacking Arduino Firmata with NeoPixels
https://bignerdranch.com/blog/hacking-arduino-firmata-with-neopixels/
Thu, 26 Mar 2015

We recently had our annual app-building competition, Clash of the Coders.
It’s a fantastic opportunity for us nerds to experiment with unfamiliar
technologies, stretching ourselves and our tools. It’s all about learning, a
fundamental value of Big Nerd Ranch. After an intense 72-hour coding marathon,
our team (The Artists Formally Known As (╯°□°)╯︵ ɥsɐןɔ, yep) came out with
an online, multiplayer game equipped with clients crossing four platforms: iOS,
Android, Web… and Arduino!

The Clash

Our project was a great opportunity to get familiar with hardware. Thanks to
the kindness of fellow nerd Chae O’Keefe, we had plenty
of Arduino hardware and the brilliant idea to incorporate
individually addressable NeoPixels, maximizing the aesthetic of our
physical client 😄.

arduino client

Eventually, we settled on using Artoo, a robotics framework written in
Ruby, to communicate with the Arduino. We chose it because of our familiarity with
Ruby (and by the time we got around to working on the Arduino client there were
only seven hours left in the competition).

Unfortunately, we quickly hit a showstopper. Artoo communicates with Arduino
hardware using the Firmata protocol. Firmata doesn’t have support
built in for NeoPixels.

So the adventure began…

Serial for Breakfast

In the early hours of the last day of our competition, we began peeling back
the layers of software used to communicate between Artoo and Firmata. We
quickly figured out that communication with Firmata is done with specially
formatted serial messages containing command bytes. These bytes indicate the
purpose of the message, such as a write to a digital pin.

gh screenshot

Thankfully, the Firmata designers had the forethought to consider others’
desires to extend the protocol. This is accomplished via SysEx messages,
interestingly part of the MIDI standard. By defining custom
SysEx commands, we were able to send special messages to Firmata which were
received, triggering routines to setup and control NeoPixels!

Hack on!

We had an absolute blast this year! If you’re interested in the nitty gritty
details, check out my upcoming post, out next week. And don’t forget to show us what you make in the comments below!

No Date.today? It’s DateTime!
https://bignerdranch.com/blog/no-date-today-its-datetime/
Thu, 12 Mar 2015

A while back, I got all fluffy talking about love and coffee and all that other stuff. Programming isn’t all butterflies and rainbows, folks.

Eventually it will happen. You’re going to have to deal with time zones.

NOOOO

Learn from my mistakes.

TL;DR

Always parse user input time.

Background

Not long ago, I was happily working away on a client project when a feature came along.

time should also go Eastern US instead of UTC

They said.

“No problem,” I thought. “We’ll configure the application’s time zone to use Eastern time and go on our merry way.”

Generally, this is true, but there’s a big catch when you’re taking user input time and throwing it against the database.

An Example

Let’s set the stage. Pretend we have an app with a very simple
data model. It consists of a single model, Plan,
which has a single datetime attribute going_out_at.

Our Goal

The goal is simple. We want users to be able to find plans to go out between
two points in time. How they enter this information is irrelevant, but we can assume that the parameters will be parsable time strings.

Our First Try

We create a range using the user input values and query the database.

> Plan.all
=> [#<Plan id: 1, going_out_at: "2014-09-25 09:39:30">]
> from = '2014-09-25 09:00'
> to = '2014-09-25 10:00'
> Plan.where(going_out_at: from..to)
  SELECT "plans".* FROM "plans" WHERE ("plans"."going_out_at" BETWEEN '2014-09-25 09:00' AND '2014-09-25 10:00')
=> [#<Plan id: 1, going_out_at: "2014-09-25 09:39:30">]

By default, Rails apps are configured to use the UTC time zone. This also
happens to be the time zone (or lack thereof) that the database stores things in.

Due to this coincidence, using the input times as strings works fine when sent directly to the database. That is, they match the database’s format and return the expected results without having to cast the values. Unfortunately, the database doesn’t expect to be given zoned times in this kind of query, which becomes a problem as soon as the app is localized.

Our Grand Disappointment

This is where we attempt to localize our app to our time zone. We whip out
application.rb and set the app’s time zone.

config.time_zone = 'Eastern Time (US & Canada)'

With this configuration set, times in the app will be automatically localized to the application’s timezone.

> Plan.first.going_out_at
=> Thu, 25 Sep 2014 05:39:30 EDT -04:00

Now, let’s have a look at the same scenario as above in Eastern U.S. time.

You can see the date stored in the database is UTC:

> Plan.all
=> [#<Plan id: 1, going_out_at: "2014-09-25 09:39:30">]

The user inputs the following times which present as 09:00-10:00 in UTC.

> from = '2014-09-25 05:00 -04:00'
> to = '2014-09-25 06:00 -04:00'
> Plan.where(going_out_at: from..to)
SELECT "plans".* FROM "plans" WHERE ("plans"."going_out_at" BETWEEN '2014-09-25 05:00 -04:00' AND '2014-09-25 06:00 -04:00')
=> []

As you can see, the query returns an empty set because the database doesn’t
know to interpret these values as zoned times. Let’s jump into the database to prove our theory.

sqlite> SELECT "plans".* FROM "plans"
   ...> WHERE "plans"."going_out_at"
   ...> BETWEEN '2014-09-25 05:00 -04:00' AND '2014-09-25 06:00 -04:00';
# nothing here...

In order to query using zoned times, we must explicitly cast the times as the datetime type.

sqlite> SELECT "plans".* FROM "plans"
   ...> WHERE "plans"."going_out_at"
   ...> BETWEEN datetime('2014-09-25 05:00 -04:00') AND datetime('2014-09-25 06:00 -04:00');
1|2014-09-25 09:39:30.961636

So how do we fix this issue in our app? Do we need to cast these columns in our query? Yuck…

The Solution

Thankfully, the solution is relatively simple. Always deal in date and time
objects. This allows Rails to do the heavy lifting of making sure queries
get zoned in a way that is compatible with the database. Check it out.

> from_time = Time.zone.parse(from)
=> Thu, 25 Sep 2014 05:00:00 EDT -04:00
> to_time = Time.zone.parse(to)
=> Thu, 25 Sep 2014 06:00:00 EDT -04:00
> Plan.where(going_out_at: from_time..to_time)
SELECT "plans".* FROM "plans" WHERE ("plans"."going_out_at" BETWEEN '2014-09-25 09:00:00.000000' AND '2014-09-25 10:00:00.000000')
=> [#<Plan id: 1, going_out_at: "2014-09-25 09:39:30">]

You can see that when the range’s values are zoned time, Rails takes care of
converting them to UTC for the database queries. Just make sure you
parse using Time.zone!

Time is Hard

Let’s face it, time is hard. Being vigilant helps you watch out
for these crazy time zone-related quirks!

Editor’s note: A version of this post first appeared on Jay’s blog.
