Go Generics in API Design
Go 1.18 has finally landed, and with it comes its own flavor of generics. In a previous post, we went over the accepted proposal and dove into the new syntax. For this post, I’ve taken the last example in the first post and turned it into a working library that uses generics to design a more type-safe API, giving a good look at how to use this new feature in a production setting. So grab yourself an update to Go 1.18, and settle in for how we can start to use our new generics to accomplish things the language couldn’t before.

A note on when to use generics

Before we discuss how we’re using generics in the library, I wanted to make a note: generics are just a tool that has been added to the language. Like many tools in the language, it’s not recommended to use all of them all of the time. For example, you should try to handle errors before using panic since the latter will end up exiting your program. However, if you’re completely unable to recover the program after an error, panic might be a perfectly fine option. Similarly, a sentiment has been circulating with the release of Go 1.18 about when to use generics. Ian Lance Taylor, whose name you may recognize from the accepted generics proposal, has a great quote in a talk of his:

Write Go by writing code, not by designing types.

This idea fits perfectly within the “simple” philosophy of Go: do the smallest, working thing to achieve our goal before evolving the solution to be more complex. For example, if you’ve ever found yourself writing similar functions to:

func InSlice(s string, ss []string) bool {
    for _, c := range ss {
        if s != c {
            continue
        }

        return true
    }

    return false
}

And then you duplicate this function for other types, like int, it may be time to start thinking about codifying the more abstract behavior the code is trying to show us:

func InSlice[T constraints.Ordered](t T, ts []T) bool {
    for _, c := range ts {
        if t != c {
            continue
        }

        return true
    }

    return false
}

Overall: don’t optimize for problems you haven’t solved yet. Hold off on designing generic types; the abstractions in your project will become visible the more you work with it. A good rule of thumb here is to keep it simple until you can’t.

Designing Upfront

Although we just discussed how we shouldn’t try to design types before coding and learning the abstractions hidden in our project, there’s an area where I believe we cannot and should not get away from designing the types first: API-first design. After all, once our server starts responding to clients and accepting request bodies from them, careless changes to either can result in an application no longer working. However, the way we currently write HTTP handlers in Go is rather weakly typed. Let’s go through all the ways this can subtly break or introduce issues to our server, starting with a pretty vanilla example:

func ExampleHandler(w http.ResponseWriter, r *http.Request) {
    var reqBody Body
    if err := json.NewDecoder(r.Body).Decode(&reqBody); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }	

    resp, err := MyDomainFunction(reqBody)
    if err != nil {
        // Write out an error to the client...
    }

    byts, err := json.Marshal(resp)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    w.Header().Set("Content-Type", "application/json")
    w.Write(byts)
    w.WriteHeader(http.StatusCreated)
}

Just to be clear on what this HTTP handler does: it ingests a body and decodes it from JSON, which can return an error. It then passes that decoded struct to MyDomainFunction, which gives us either a response or an error. Finally, we marshal the response back to JSON, set our headers, and write the response to the client.

Picking apart the function: Changing return types

Imagine a small change on the return type of the MyDomainFunction function. Say it was returning this struct:

type Response struct {
    Name string
    Age int
}

And now it returns this:

type Response struct {
    FirstName string
    LastName string
    Age int
}

Assuming that MyDomainFunction compiles, so, too, will our example function. It’s great that it still compiles, but this may not be a great thing since the response will change and a client may depend on a certain structure, e.g., there’s no longer a Name field in the new response. Maybe the developer wanted to massage the response so it would look the same despite the change to MyDomainFunction. Worse yet is that since this compiles, we won’t know this broke something until we deploy and get the bug report.

Picking apart the function: Forgetting to return

What happens if we forget to return after writing out our error from decoding the request body?

var reqBody Body
if err := json.NewDecoder(r.Body).Decode(&reqBody); err != nil {
    http.Error(w, err.Error(), http.StatusBadRequest)
    // Whoops! The return is missing here.
}

Because http.Error is part of an imperative interface for dealing with responses back to HTTP clients, it does not cause the handler to exit. Instead, the client will get their error response and go about their merry way, while the handler function continues on, feeding a zero-value Body struct to MyDomainFunction. This may not be a complete failure, depending on what your server does, but it is likely an undesired behavior that our compiler won’t catch.

Picking apart the function: Ordering the headers

Finally, the most silent error is writing a header code at the wrong time or in the wrong order. For instance, I bet many readers didn’t notice that the example function will write back a 200 status code instead of the 201 that the last line of the example wanted to return. The http.ResponseWriter API has an implicit order that requires that you write the header code before you call Write, and while you can read some documentation to know this, it’s not something that is immediately called out when we push up or compile our code.
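
To make the correct ordering concrete, the tail of the example handler should set the status code before writing the body:

w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated) // must be called before Write
w.Write(byts)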

Being Upfront about it

Given all these (albeit minor) issues exist, how can generics help us to move away from silent or delayed failures toward compile-time avoidance of these issues? To answer that, I’ve written a small library called Upfront. It’s just a collection of functions and type signatures to apply generics to these weakly-typed APIs in HTTP handler code. We first have library consumers implement this function:

type BodyHandler[In, Out, E any] func(i BodyRequest[In]) Result[Out, E]

As a small review of the syntax, this function takes any three types for its parameters: In, for the type that is the output of decoding the body, Out, for the type you want to return, and E the possible error type you want to return to your client when something goes awry. Next, your function will accept an upfront.BodyRequest type, which is currently just a wrapper for the request and the JSON-decoded request body:

// BodyRequest is the decoded request with the associated body
type BodyRequest[T any] struct {
    Request *http.Request
    Body    T
}

And finally, the Result type looks like this:

// Result holds the necessary fields that will be output for a response
type Result[T, E any] struct {
    StatusCode int // If not set, this will be a 200: http.StatusOK

    value      T
    err        *E
}

The above struct does most of the magic when it comes to fixing the subtle, unexpected pieces of vanilla HTTP handlers. Rewriting our function a bit, we can see the end result and work backward:

func ExampleHandler[Body, DomainResp any](in upfront.BodyRequest[Body]) upfront.Result[DomainResp, error] {
    resp, err := MyDomainFunction(in.Body)
    if err != nil {
        return upfront.ErrResult(
            fmt.Errorf("error from MyDomainFunction: %w", err),
            http.StatusInternalServerError,
        )
    }

    return upfront.OKResult(
        resp,
        http.StatusCreated,
    )
}

We’ve eliminated a lot of code, and hopefully we’ve also eliminated a few of the “issues” from the original example function. You’ll first notice that the JSON decoding and encoding are handled by the upfront package, so there are fewer places to forget a return. We also exit the function with our new Result type, which carries a type parameter for what we want to send back from our handler. This means that if MyDomainFunction changes its return type, the handler will fail compilation, letting us know we broke our contract with our callers long before we git push. Finally, the Result type also takes a status code, so it will handle setting it at the right time (before writing the response).

And what’s with the two constructors, upfront.ErrResult and upfront.OKResult? These are used to set the package private fields value and err inside the Result struct. Since they’re private, we can enforce that any constructors of the type can’t set both value and err at the same time. In other languages, this would be similar (definitely not the same) to an Either type.
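
For illustration, here is a minimal sketch of what such constructors could look like; the actual upfront implementation may differ in names and details:

// Hypothetical sketch; the real library's signatures may differ.
func OKResult[T, E any](value T, statusCode int) Result[T, E] {
    return Result[T, E]{StatusCode: statusCode, value: value}
}

func ErrResult[T, E any](err E, statusCode int) Result[T, E] {
    return Result[T, E]{StatusCode: statusCode, err: &err}
}

Because only one of value or err is ever set, the package can decide at write time which JSON body to render. Note that call sites generally have to spell out both type parameters, a verbosity we’ll lament below.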

Final thoughts

This is a small example, but with this library, we can get feedback about silent issues at compile time, rather than when we redeploy the server and get bug reports from customers. And while this library is for HTTP handlers, this sort of thinking can apply to many areas of computer science and areas where we’ve been rather lax with our types in Go. With this blog and library, we’ve sort of reimplemented the idea of algebraic data types, which I don’t see being added to Go in the foreseeable future. But still, it’s a good concept to understand: it might open your mind to think about your current code differently.

Having worked with this library in a sample project, there are a few areas for improvement that I hope to see in future patches. The first is that we cannot use type parameters on type aliases. That would save a bunch of writing and allow library consumers to create their own Result type with an implicit error type instead of having to repeat it everywhere. Second, the type inference is a little lackluster, which causes the resulting code to be very verbose about its type parameters. On the other hand, Go has never embraced the idea of being terse. If you’re interested in the library’s source code, you can find it here.

All that being said, generics are ultimately a really neat tool. They let us add some type safety to a really popular API in the standard library without getting too much in the way. But as with any tool, use them sparingly and where they apply. As always: keep things simple until you can’t.

Exploring Go v1.18’s Generics
Go 1.18 is set to arrive in February 2022, and with it comes the long-awaited addition of generics to the language. It’s been a long process to find something that works with the current Go ecosystem, but a proposal has been accepted that tries to protect the objectives of the language while adding the largest changes to the language in over a decade. Will developers add more complexity and make things less maintainable with generics? Or will this enable new heights and capabilities for gophers everywhere?

In this post, we’ll go over what type parameters and constraints look like in Go 1.18, but we won’t be covering every detail of the proposal itself: we’ll give an overview sufficient to use them, and then some real-life examples of where generics are going to solve headaches for gophers. As such, no article will ever be a replacement for going over the proposal itself. It’s quite long, but each piece is well explained and still approachable.

Getting set up

To start playing with generics (or really just the next Go version), there are two simple ways:

Go Playground

You can use the Go 2 playground in your browser to execute Go 1.18 samples. A word of caution—this uses a tool that was made to help those trying out the new syntax for proposals, and is no longer maintained. So if something doesn’t work here, it’s likely due to this tool not keeping up with changes since the specifications were finalized.

gotip

This tool is used to compile and run the Go development branch on your local machine without replacing your normal go terminal tools.

  1. go install golang.org/dl/gotip@latest
  2. gotip download
  3. Set your version to 1.18 in your go.mod file for your repo directory.

Once that’s all done, the gotip command can be used anywhere you’ve been using the stable go command, e.g. gotip test ./....

Type parameters

The big change enabling generic structures and data types is the introduction of type parameters for type definitions, their methods, and standalone functions. Here’s some sample syntax for a generic-looking Node type:

type Node[T any] struct {
    Value T
    Left  *Node[T]
    Right *Node[T]
}

This looks a lot like the existing Go code (which is great), just the addition of T any inside of square brackets. We can read that as a normal parameter to a function, like id string, except this refers to a type that will determine what type the Value field will hold. This could be any type you want, as specified by the any type constraint that we’ll touch on later.

You can instantiate a generic type like this:

node := Node[int]{}

In this snippet, we’ve added square brackets behind the type name to specify that int is the type parameter, which creates an instance of the Node struct typed such that its Value field is an int. Note that in Go 1.18 type inference applies when calling generic functions, not to composite literals, so the type parameter can’t be omitted when instantiating the struct directly:

node := Node[int]{
    Value: 17,
}
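
Inference does kick in at call sites of generic functions, though. A small sketch, using a hypothetical NewNode constructor:

func NewNode[T any](v T) *Node[T] {
    return &Node[T]{Value: v}
}

node := NewNode(17) // T is inferred to be int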

Either way, this results in a Node type that holds an int. Type parameters can also be used on methods belonging to a generic struct, like so:

func (n Node[T]) Val() T {
    return n.Value
}

But these really point to the types on the receiver (Node) type; methods cannot declare their own type parameters, they can only reference parameters already declared on the base type. Note that the receiver must still list all of the base type’s parameters, though unused ones can be written with the blank identifier _.
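
For example, here’s a small sketch with a two-parameter type:

type Pair[K comparable, V any] struct {
    Key   K
    Value V
}

// The receiver lists both of Pair's type parameters; V is unused
// by this method, so it's written as the blank identifier.
func (p Pair[K, _]) KeyOnly() K {
    return p.Key
}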

Type definitions can also accept parameters:

type Slice[T any] []T

Without generics, library authors would write this to accommodate any type:

type Node struct {
    Value interface{}
    Left  *Node
    Right *Node
}

Using this Node definition, library consumers would have to cast each time the Value field was accessed:

v, ok := node.Value.(*myType)
if !ok {
    // Do error handling...
}

Additionally, interface{} can be dangerous by allowing any type, even different types for separate instances of the struct like so:

node := Node{
    Value: 62,
    Left: &Node{
        Value: "64",
    },
    Right: &Node{
        Value: []byte{},
    },
}

This results in a very unwieldy tree of heterogeneous types, yet it still compiles. Not only is casting types tiresome, but each typecast is an opportunity for an error at runtime, possibly in front of a user or critical workload. When the compiler handles type safety and removes the need for casting, a number of bugs and mishaps are avoided by way of errors that prevent the program from building. This is a great argument in favor of strong type systems: errors at compilation are easier to fix than bugs debugged at runtime when they’re already in production.

Generics solve a longstanding complaint where library authors had to resort to interface{}, manually create variants of the same structure, or even use code generation just to define functions that could accept different types. Their addition removes a lot of headache and effort from common developer use cases.

Type constraints

Back to the any keyword that was mentioned earlier. This is the other half of the generics implementation, called a type constraint, and it tells the compiler to narrow down what types can be used in the type parameter. This is useful when your generic function can take most things but needs a few details about the type it’s handling, like so:

func length[T any](t T) int {
    return len(t)
}

This won’t compile, since any could allow an int or any sort of struct or interface, so calling len on those would fail. We can define a constraint to help with this:

type LengthConstraint interface {
    string | []byte
}

func length[T LengthConstraint](t T) int {
    return len(t)
}

This constraint defines a type set: a collection of types (separated by a pipe delimiter) that we’re allowing as good answers to the question of what can be passed to our generic function. Because we’ve only allowed two types, the compiler can verify that len works on both: strings (yes) and byte slices (also yes). What if someone hands us a defined type with string as its underlying type? That won’t work, since such a type is a new type altogether and not in our set. But worry not, we can use a new piece of syntax:

type LengthConstraint interface {
    ~string | ~[]byte
}

The tilde character specifies an approximation constraint: ~T can be fulfilled by any type whose underlying type is T. This makes values of type info (defined below as []byte) satisfactory arguments to our length function:

type info []byte

func main() {
    var i info
    a := length(i)
    fmt.Printf("%#v", a)
}

Type constraints can also enforce that methods are present, just like regular interfaces:

type LengthConstraint interface {
    ~string | ~[]byte
    String() string
}

This reads as before, making sure that the type is based on either a string or a byte slice, but now ensures implementing types also have a method called String that returns a string. A quick note: the compiler won’t stop you from defining a constraint that’s impossible to satisfy, so be careful:

type Unsatisfiable interface {
    int
    String() string
}

Finally, Go 1.18 also adds a package called constraints that defines some utility type sets, like constraints.Ordered, which contains all types that the less-than and greater-than operators can work on. (It ultimately shipped as golang.org/x/exp/constraints rather than in the standard library.) So don’t feel like you need to redefine your own constraints all the time; odds are one has already been defined for you.
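
As a quick sketch of using the predefined sets, here’s a generic Max built on constraints.Ordered:

import "golang.org/x/exp/constraints"

// Max returns the larger of two ordered values.
func Max[T constraints.Ordered](a, b T) T {
    if a > b {
        return a
    }
    return b
}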

No type erasure

Go’s proposal for generics also specifies that there will not be any type erasure when using them. That is, all compile-time information about the types used in a generic function is still available at runtime; no specifics about type parameters are lost when using reflect on instances of generic functions and structs. Consider this example of type erasure in Java:

// Example generic class
public class ArrayList<E extends Number> {
    // ...
};

ArrayList<Integer> li = new ArrayList<Integer>();
ArrayList<Float> lf = new ArrayList<Float>();
if (li.getClass() == lf.getClass()) { // evaluates to true
    System.out.println("Equal");
}

Despite the two lists containing different types, that information goes away at runtime, so the two class objects compare as equal. This can be useful for preventing code from relying on information that the abstraction was meant to hide.

Compare that to a similar example in Go 1.18:

type Numeric interface {
    int | int64 | uint | float32 | float64
}

type ArrayList[T Numeric] struct {}

var li ArrayList[int64]
var lf ArrayList[float64]
if reflect.TypeOf(li) == reflect.TypeOf(lf) { // Not equal
    fmt.Printf("they're equal")
    return
}

All of the type information is retained when using reflect.TypeOf, which can be important for those wanting to know what sort of type was passed in so they can act accordingly. However, it should be noted that no information is provided about the constraint or approximation constraint that the type matched. For example, say we had an argument of type MyType (defined as type MyType string) matching a constraint of ~string: reflect will only tell us about MyType, not about the constraint it fulfilled.
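
A small sketch of that last point:

import "reflect"

type MyType string

// TypeName reports the dynamic type of its argument. Handing it a
// MyType value yields "main.MyType"; nothing about the ~string
// constraint it matched survives to runtime.
func TypeName[T ~string](t T) string {
    return reflect.TypeOf(t).String()
}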

Real-life example

There will be many examples of generics being used in traditional computer science data models, especially as the Go community adopts generics and updates its libraries after the release. But to finish out this tour of generics, we’ll look at a real-life example of something that gophers work with almost every single day: HTTP handlers. Gophers will always remember the function signature of a vanilla HTTP handler:

func(w http.ResponseWriter, r *http.Request)

Which honestly doesn’t say much about what the function can take in or return. It can write anything back out, and we’d have to manually call the function to get the response for unmarshaling and testing.

What follows is a function for constructing HTTP handlers out of functions that take in and return useful types rather than writers and requests that say nothing about the problem domain.

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

func AutoDecoded[T any, O any](f func(body T, r *http.Request) (O, error)) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        // Decode the body into t
        var t T
        if err := json.NewDecoder(r.Body).Decode(&t); err != nil {
            http.Error(w, fmt.Sprintf("error decoding body: %s", err), http.StatusBadRequest)
            return
        }

        // Call the inner function
        result, err := f(t, r)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        marshaled, err := json.Marshal(result)
        if err != nil {
            http.Error(w, fmt.Sprintf("error marshaling response: %s", err), http.StatusInternalServerError)
            return
        }

        if _, err := w.Write(marshaled); err != nil {
            http.Error(w, fmt.Sprintf("error writing response: %s", err), http.StatusInternalServerError)
            return
        }
    }
}

type Hello struct {
    Name string `json:"name"`
}

type HelloResponse struct {
    Greeting string `json:"greeting"`
}

func handleApi(body Hello, r *http.Request) (HelloResponse, error) {
    return HelloResponse{
        Greeting: fmt.Sprintf("Hello, %s", body.Name),
    }, nil
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/api", AutoDecoded(handleApi))

    http.ListenAndServe(":3000", mux)
}

AutoDecoded is a generic function with two type parameters: T and O. These represent the incoming body type we expect from the request and the output type we expect to hand back to the requester, respectively. Looking at the rest of AutoDecoded‘s function signature, we see that it wants a function that maps an instance of T and an *http.Request to an instance of O or an error. Stepping through AutoDecoded‘s logic, it starts by decoding the incoming JSON request into an instance of T. With the body stored in a variable, it calls the inner function that was provided as the argument, handing it t. AutoDecoded then handles the case where the inner function errored; if it didn’t, it takes the output and writes it back to the client as JSON.

Below AutoDecoded is our HTTP handler called handleApi, and we can see that it doesn’t follow the normal pattern for an http.HandlerFunc; this function accepts and returns the types Hello and HelloResponse defined above it. Compared to a function using an http.ResponseWriter, this handleApi function is a lot easier to test since it returns a specific type or an error, rather than writing bytes back out. Plus, the types on this function now serve as a bit of documentation: other developers reading this can look at handleApi‘s type signature and know what it expects and what it will return. Finally, the compiler can tell us if we’re trying to return the wrong type from our handler. Before, you could write anything back to the client and the compiler wouldn’t care:

func(w http.ResponseWriter, r *http.Request) {
    // Decoding logic...

    // Our client was expecting a `HelloResponse`!
    w.Write([]byte("something else entirely"))
}

In the main function, we see a bit of type-parameter inference: we no longer need to specify the type parameters when linking the pieces together in AutoDecoded(handleApi). Since AutoDecoded returns an http.HandlerFunc, which satisfies http.Handler, we can take the result of the function call and plug it into any Go router, like the http.ServeMux used here. And with that, we’ve introduced more concrete types into our program.

As you can hopefully now see, it’s going to be nice to have compiler functionality in Go to help us write more generic types and functions while avoiding the warts of the language where we might resort to using interface{}. The introduction of type parameters will be a gradual process overall, as Russ Cox has made the suggestion to not touch the existing standard library types or structures in the next Go release. Even if you don’t want to use generics, this next release will be compatible with your existing code, so adopt what features make sense for your project and team. Congrats to the Go team on another release, and congrats to the gophers who have patiently awaited these features for so long. Have fun being generic!

Embracing Cloud Native
Cloud infrastructure has pushed software towards abstracting the developer away from the operating hardware, making global networks and copious amounts of computing power available over APIs, and managing large swaths of the lower tiers of the tech stack with autonomous software. Gone are the days of buying bulky servers to own, and here are the times of renting pieces of a data center to host applications. But how does designing for a cloud environment change your application? How do software teams take advantage of all the advancements coming with this new set of infrastructure? This article will go over three pillars of a “Cloud-Native” application and how you can embrace them in your own software.

Embracing Failure

One incredible paradigm the Cloud has brought forth is represented in the Pets vs Cattle analogy. It differentiates how we treat our application servers between pets, things that we love and care for and never want to have die or be replaced, and cattle, things that are numbered and if one leaves another can take its place. It may sound cold and disconnected, but it embraces the failure and accepts it using the same methodology of “turning it off and on again”. This aligns with the Cloud mentality of adding more virtual machines and disposing of them at will, rather than the old ways of keeping a number of limited, in-house servers running because you didn’t have a whole data center available to you.

To utilize this methodology, it must be easy for your app to be restarted. One way to reflect this in your app is to make your server stateless, meaning it doesn’t persist state on its own disk: it delegates state to a database or a managed service for handling state in a resilient way. For connections or stateful attachments to dependencies, don’t fight it and try to reconnect when something goes down: just restart the application and let the initialization logic connect again. In cases where this isn’t possible, the orchestration software will kill the application, thinking it’s unhealthy (which it is) and try to restart it again, giving you a faux-exponential-backoff loop.
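
As a minimal sketch of that fail-fast startup (assuming a Postgres database holds the state and a DATABASE_URL environment variable points at it):

package main

import (
    "database/sql"
    "log"
    "os"

    _ "github.com/lib/pq" // hypothetical driver choice
)

func main() {
    // State lives in the database, not on this server's disk.
    db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
    if err != nil {
        log.Fatalf("opening database handle: %s", err)
    }

    // Fail fast if the dependency is down; the orchestrator will
    // restart us, giving the faux-exponential-backoff loop above.
    if err := db.Ping(); err != nil {
        log.Fatalf("database unreachable: %s", err)
    }

    // ... connect the rest of the app and start serving.
    _ = db
}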

The above treats failure as binary: either the application is working or it isn’t, and the orchestration software handles the unhealthy parts. But there’s another method to complement these failure states, and that’s handling degraded functionality. In this scenario, some of your servers are unhealthy, but not all of them. If you’re already using an orchestration layer, you’ll likely have something to handle this scenario: the software managing your application sees that certain instances are down, reroutes traffic to healthy instances, and returns traffic when the instances are healthy again. But in the scenario where entire chunks of functionality are down, you can handle this state and plan for it. For example, you can return both data and errors in a GraphQL response:

{
  "data": {
    "user": {
      "name": "James",
      "favoriteFood": "omelettes"
    },
    "comments": null
  },
  "errors": [
    {
      "path": [
        "comments"
      ],
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "message": "Could not fetch comments for user"
    }
  ]
}

Here parts of the application were able to return user data, but comments weren’t available, so we return what we have, accepting that failure and working with it rather than returning no data. Just because parts of your application aren’t healthy doesn’t mean the user can’t still get things done with the other parts.

Embracing Agility

A more agile application means it’s quicker to start and schedule should you need more instances of it. In scenarios where the system has determined it needs more clones of your app, you don’t want to wait 5 or more minutes for it to get going. After all, in the Cloud we’re no longer buying physical servers: we’re renting the space and computing power that we need, so waiting for applications to reach a healthy state is wasting money. For your users, bulky, “slow-to-schedule” applications mean a delay in getting more resources and degraded performance or, in worse scenarios, an outage because servers are overloaded while they wait on reinforcements.

Whether you’re coming from an existing application or looking to make a Cloud-Native one from the start, the best way to make an application more agile is to think smaller. This means the server you’re constructing does less, reducing start time and becoming less bloated with features. If your application is large and has unwieldy dependencies on prerequisite software installed on the server, consider removing those dependencies by delegating them to a third party or making them into a service started elsewhere. If your application is still too large, consider microservices, where appropriately sized and cohesive pieces of the total application are deployed separately and communicate over a network. This can increase the complexity of operating the total application, but microservices also lessen the cognitive load required to manage any individual piece, since each piece is smaller and less coupled to the rest of the whole.

Embracing Elasticity


Following the points from above, if it’s easier to run instances of your application, it’s easier for the software to autonomously manage how many are running. This means the infrastructure managing your app can monitor the traffic or resource usage and start to add more instances to handle the increased load. In times where there’s less usage, it can scale down your resources to match. This is a huge departure from how elasticity was thought of using a traditional model: previously, you bought servers and maintained them, so you didn’t plan for just adding them on the fly. To compensate for dynamic amounts of load, you had to take the topmost estimate and add buffer room for extra heavy traffic times. During normal operation, there was capacity just sitting around unused. And to increase capacity, you likely tried to add more capacity to a single machine with upgrades and newer internals.

Again, to benefit from the elasticity that the Cloud gives you, it’s best to make it easy to enjoy that benefit. You can follow the tips on agility to make your application smaller, but before that, it might be important to make it possible to run many instances of your application in the first place. This can mean removing any logic that counts on a fixed number of instances running, like relying on a single instance of a server because you need locks to handle concurrent logic. For scenarios like that, you can use locks provided by your database or your caching solution. All in all, the idea is to look at the logical factors that prevent you from running a second or third instance of your application in parallel. Ask yourself what the downsides or complications of adding one more instance of your app would be, and create a list of barriers. Once you’ve removed those, you’ll find that running tens or hundreds of instances in parallel is now possible.
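
As one hedged example of removing a single-instance assumption, an in-process mutex around a background job could be swapped for a database lock. This sketch assumes Postgres advisory locks and the database/sql package:

// withJobLock runs fn only if this instance wins the lock, so any
// number of parallel instances can safely share the job.
func withJobLock(db *sql.DB, jobID int64, fn func() error) error {
    var acquired bool
    if err := db.QueryRow("SELECT pg_try_advisory_lock($1)", jobID).Scan(&acquired); err != nil {
        return err
    }
    if !acquired {
        return nil // another instance is already on it
    }
    defer db.Exec("SELECT pg_advisory_unlock($1)", jobID)

    return fn()
}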

Conclusion

The Cloud has changed the way we think about and run software, and your existing application may need to change to best utilize it. With so much of the infrastructure managed by autonomous software, new tooling has made it easier than ever to manage entire fleets of applications, further removing the developer from the gritty details. It has pushed software deployments to be more agile, embrace failure as normal, and scale by adding instances instead of buying faster machines. If you’re not already running with all the Cloud has to offer, give it another look and see if it aligns with your future needs, both for your business and your application.

Using the httptest package in Golang
Testing at the edge

Testing your code is a great practice, and can give confidence to the developers shipping it to production. Unit and integration tests are great for testing things like application logic or independent pieces of functionality, but there are other areas of code at the “edges” of the application that are a bit harder to test because they deal with incoming or outgoing requests from third parties. Luckily Go has baked into its standard library the httptest package, a small set of structures and functions to help create end-to-end tests for these edges of your application.

Using the ResponseRecorder and NewRequest

A common “edge” in a Go application is where a server exposes http.Handler functions to respond to web traffic. Normally, testing these would require standing up your server somewhere, but the httptest package gives us NewRequest and NewRecorder to help simplify these sorts of test cases.

Testing an HTTP handler

This test calls an HTTP handler function and checks a few behaviors: that a 200 response code is sent back, and that a header with the API version is returned.

func Handler(w http.ResponseWriter, r *http.Request) {
    // Tell the client that the API version is 1.3
    w.Header().Add("API-VERSION", "1.3")
    w.Write([]byte("ok"))
}

func TestHandler(t *testing.T) {
    req := httptest.NewRequest(http.MethodGet, "http://example.com", nil)
    w := httptest.NewRecorder()

    Handler(w, req)

    // We should get a good status code
    if want, got := http.StatusOK, w.Result().StatusCode; want != got {
        t.Fatalf("expected a %d, instead got: %d", want, got)
    }

    // Make sure that the version was 1.3
    if want, got := "1.3", w.Result().Header.Get("API-VERSION"); want != got {
        t.Fatalf("expected API-VERSION to be %s, instead got: %s", want, got)
    }
}

httptest.NewRequest provides a convenience wrapper around http.NewRequest so you don’t have to check the error when making a Request object. Below that, httptest.NewRecorder makes a recorder that the HTTP handler uses as its http.ResponseWriter; it captures all of the changes that would have been returned to a calling client. Using this, there’s no need to start your server: just hand the recorder and request directly to the function, invoking it the same way it would be invoked if the request came in over HTTP. After the handler call, the recorder’s Result provides the values written to it for checking any behaviors you need to assert in the rest of your test.

Using the Test Server

While servers often take in requests, there’s another “edge” to be tested on the other side, where a server makes outbound requests. Testing these behaviors can be difficult since it requires that you either mock out the calling code or call the real thing (maybe even a test instance). Thankfully, httptest gives us Server, a way to start a local server that responds to real HTTP requests inside of a test.

Test Setup

func TestTrueSundayResponseReturnsTrue(t *testing.T) {
    // Create a server that returns a static JSON response
    s := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte(`{"isItSunday": true}`))
    }))
    // Be sure to clean up the server or you might run out of file descriptors!
    defer s.Close()

    // The pretend client under test
    c := client.New()

    // Ask the client to reach out to the server and see if it's Sunday
    if !c.Sunday(s.URL) {
        t.Fatalf("expected client to return true!")
    }
}

httptest.NewServer accepts an http.Handler, so for this test, we gave it a function that responds in JSON that it’s Sunday. When you make a new test server, it binds to a random free port on the local machine, but we can access the URL it exposed by passing the URL field on the server struct to the client code. From there, the client can make an actual request to a server, not a mock, and parse the response like it would in production. Note that you should clean up the server by calling Close when the test finishes to free up resources or you may find yourself out of available ports for running further tests.

Leveraging the http.Handler interface

Because NewServer accepts an instance of an http.Handler, the test server can do a lot more than just return static responses, like providing multiple route handlers. In the next example, the test server will provide two endpoints:

  1. /token which returns some secret created during the test.
  2. /resource returns what day it is, but only from requests that have the secret bearer token in their header.

The goal of this test is to ensure that the client code calls both endpoints and takes information from one endpoint and properly passes it to the other.

func TestPseudOAuth(t *testing.T) {
    // Make a secret for the instance of this test
    secret := fmt.Sprintf("secret_%d", time.Now().Unix())

    // Implements the http.Handler interface to be passed to httptest.NewServer
    mux := http.NewServeMux()

    // The /resource handler checks the request headers for the secret
    // token. It returns a 401 if the token is wrong, otherwise returns
    // the treasure inside.
    mux.HandleFunc("/resource", func(w http.ResponseWriter, r *http.Request) {
        auth := r.Header.Get("Auth")
        if auth != secret {
            http.Error(w, "Auth header was incorrect", http.StatusUnauthorized)
            return
        }

        // Header was good, tell 'em what day it is
        w.Write([]byte(`{ "day": "Sunday" }`))
    })

    // The /token handler mints the token needed to access the
    // /resource endpoint
    mux.HandleFunc("/token", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte(secret))
    })

    s := httptest.NewServer(mux)
    defer s.Close()

    // The pretend client under test
    c := client.New()

    // Make the call and make sure it's Sunday
    got, err := c.GetDay(s.URL)
    if err != nil {
        t.Fatalf("unexpected error: %s", err)
    }
    if want := "Sunday"; want != got {
        t.Fatalf("I thought it was %s! Instead it's: %s", want, got)
    }
}

This test looks a lot like the previous one, except we’re passing a different implementation of an http.Handler to the test server. Although the code uses http.NewServeMux, you can use anything, like gorilla/mux, so long as it implements the interface. Just by changing the http.Handler to be a more elaborate route handler, the tests can make more detailed assertions about outgoing HTTP requests and flows.

Avoiding fragile End-To-End tests

End-To-End tests by nature call every part of your application required to serve a request, and so they can rely on quite a few components within your code to function. While they’re great for adding test coverage and spreading that coverage to the very edges of your application, they can also be flakier than their unit/integration test counterparts. To avoid writing flaky tests, be sure to only test for observable behaviors and avoid testing for the internals as those are more likely to change than the output of the feature under test.
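
For example, assertions are best pointed at what a real caller could observe, like the response body, rather than at internals. A sketch reusing the recorder w and testing.T from the earlier example (io.ReadAll requires the io import):

res := w.Result()
defer res.Body.Close()

// Assert on the observable output of the handler, not on how it
// produced that output.
body, err := io.ReadAll(res.Body)
if err != nil {
    t.Fatalf("reading response body: %s", err)
}
if want := "ok"; string(body) != want {
    t.Fatalf("expected body %q, instead got: %q", want, body)
}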

Conclusion

Go’s httptest package provides a small but incredibly useful set of tools for testing the edge portions of HTTP handling code, both for servers and their clients, using real servers and requests. Best of all, it’s included in Go’s standard library, so the Continuous Integration pipeline for your Go code already supports it without further hassle. If you have any interest in the additional utilities the httptest package provides, you can read the documentation itself. Additionally, if you need to test external dependencies like databases or other Dockerized applications, you might want to check out Ory Dockertest.
