Back-End - Big Nerd Ranch

Four Key Reasons to Learn Markdown

Writing documentation is fun—really, really fun. I know some engineers may disagree with me, but as a technical writer, creating quality documentation that will help folks has been the most engaging and fulfilling job I’ve ever had. One of the coolest aspects has been using Markdown to write and format almost all of that documentation.

What is Markdown?

Markdown is a lightweight markup language created in 2004 by John Gruber and Aaron Swartz. It lets you create formatted text using any plain-text editor. Unlike HTML or XML, it is still easily digestible by readers of all backgrounds in its source form. You don’t need to be a programmer to get the gist of things. And although it borrows its syntax from HTML, Markdown is easier and quicker to learn.

The tags—that is, the syntax used to format text (**word** in Markdown versus <b>word</b> in HTML to make text bold, for example)—are simpler than HTML’s, but they still automatically convert to HTML. So, if you’d prefer, you can still use HTML tags when working with Markdown.
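
For a quick taste, here are a few common pieces of Markdown syntax alongside the HTML they produce:

**bold text**                  ->  <strong>bold text</strong>
# A heading                    ->  <h1>A heading</h1>
[a link](https://example.com)  ->  <a href="https://example.com">a link</a>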

Markdown is used almost everywhere, from GitHub to Slack. It’s the unofficial writing and formatting standard on major developer platforms, such as code repositories: most engineering README files are written and formatted in Markdown, and most text editors accept it as well.

Beyond the fact that it’s easy to use, quick to learn, and easily converts to HTML, Markdown is also pretty futureproof. By this, I mean Markdown will be usable as long as plain text is the official and unofficial standard. It was designed to be quickly parsed and digested as a raw file, but it also has its own file extension (.md). Suffice it to say, Markdown isn’t going anywhere—especially in the world of engineering and engineering documentation.

Why use Markdown?

I’m answering this question from my perspective as a technical writer, but you can leverage the benefits of Markdown whenever you write online.

1: It’s simple

Markdown is very simple, as far as markup languages are concerned. That is honestly its biggest benefit. It takes maybe 30 minutes to learn and about an hour to become proficient. Another added benefit, both within and outside of engineering orgs, is that Markdown text is easy to parse and read in its raw form. This matters because both XML and HTML have a learning curve, so folks who aren’t versed in those languages might not be able to read text packaged in them. Markdown fixes that: its syntax is unobtrusive to the actual text, so anyone can read a Markdown-formatted document.

2: It’s a soft introduction to programming

If you’re new to the world of software engineering, Markdown works as an interesting peek into the power of code. Yes, Markdown’s syntax is simple, but if you’ve never coded, even formatting in Markdown might feel like coding. Seeing your formatting come to life on a webpage or text editor is very cool for those new to programming or markup languages, and I firmly believe that it can inspire people to dive deeper into the world of coding.

3: It’s fast

Now, from a technical writer’s perspective, Markdown makes my job easier. I can write with Markdown at a faster cadence than I could with HTML or XML. Plus, I’ve found that Markdown has been an invaluable bridge between engineering and content writing (a massive umbrella that technical writing falls under).

If a subject matter expert (SME) hands me a piece of documentation he wrote for an API process he’s been working on, I can jump right in because, as I’ve said, Markdown (even in its raw form) can be read by anyone. It puts the engineer and myself on the same page, and it keeps us there together. Plus, most Integrated Development Environments (IDEs) feature text edit areas where Markdown acts as the default markup language for writing.

So, from a technical writer’s perspective, Markdown is a writing tool that makes documentation formatting a breeze, but it also moves us technical writers closer to developers because it allows us to speak (and write) in the same language and use the same basic formatting syntax. And the best, most useful documentation is created when developers and technical writers are on the same page.

4: It’s collaborative

Markdown is more than just a simplified language. The power of Markdown is that it levels the playing field for technical writers and fosters collaboration between them and engineers—especially technical writers without deep, technical backgrounds.

A technical writing organization that sets Markdown as their default markup language for all documentation opens the door for more technical writers to be hired from diverse backgrounds. This is because one can upskill into being proficient with Markdown quite quickly, as opposed to XML and HTML. I like to call it the great equalizer. Documentation, in many ways, is the unsung hero of every product and engineering org. And in the end, it all comes back to Markdown.

Where to go from here

I’ve long been interested in programming, and I’ve learned a lot of programming skills in my free time. But when I knew I wanted to pivot to technical writing, the first thing I learned (and doubled down on) was Markdown. Before I started my career in this field, every engineer and technical writer I talked to recommended it as, literally, one of the first things I should learn. So I did—and I’m so glad!

Now, as I’m sure those who work closely with me know, I evangelize Markdown whenever I’m given the chance—with colleagues, with folks who come to me wanting to transition to technical writing, and with clients. In my eyes, Markdown is the backbone of modern technical writing and documentation, and it isn’t going anywhere. It is the soft standard for documentation in the world of tech and, eventually, I believe it will just be the standard across the board.

Markdown is the future of technical documentation. And as more and more companies, IDEs, and coding repositories use it as the default markup format for editing and writing documentation, that future is starting now. If you’re starting to write documentation or are considering technical writing, I highly recommend learning Markdown. It will serve you well.

An Introduction to GitHub Copilot Using the Plugin for Neovim

Humanity has come a long way in its technological journey. We have reached the cusp of an age in which the concepts we have collectively known as science fiction are quickly becoming science fact. The concept of artificial intelligence is highly anticipated yet controversial. Parts of society are apprehensive about its use, and perhaps for good reason. There are valid concerns about the safe and ethical use of this new technology. Regardless of any societal contention, the robots have arrived! This particular robot’s name is GitHub Copilot; if you’re a software developer, or learning to be one, it may be your new best friend.

What is GitHub Copilot?

GitHub Copilot is an AI code assistant that integrates with your IDE. It was developed by GitHub in cooperation with OpenAI, and it uses the OpenAI Codex language model to suggest generated code in real time as you type, based on the content of your editor’s buffer. There are currently plugins available for Visual Studio, Visual Studio Code, JetBrains IDEs, and Neovim. As a Vim user, I wrote this article to provide real examples of what these suggestions look like using the GitHub Copilot plugin for Neovim.

How does it work?

The OpenAI Codex model is trained on public code sources so that it can transform natural language into runnable code. GitHub Copilot itself is currently in a private technical preview, so if you’re interested, you can join the waitlist for access.

Getting Started with GitHub Copilot

Installing GitHub Copilot is a simple process that involves the following steps:

  1. Join the technical preview waitlist
  2. Download and install the plugin
  3. Activate the plugin

Join the Technical Preview Waitlist

At the time of this writing, GitHub Copilot is only available to a small group of developers. Before you can use it, you will need to apply for access to the technical preview in order to activate the plugin. Sign up for the wait list at https://github.com/features/copilot/signup.

Install the Plugin

GitHub Copilot has plugins available for the most common code editors. This article focuses on the Neovim plugin, but if you’d like to try out one of the others, read the documentation at https://github.com/github/copilot-docs; otherwise, follow along to install the GitHub Copilot Neovim plugin.

  1. Install the plugin by following the official documentation available in the plugin’s GitHub repository: https://github.com/github/copilot.vim. I highly recommend that you use a plugin manager such as vim-plug to install the GitHub Copilot plugin for Neovim, and all your other Neovim plugins as well.

  2. Run the :Copilot setup command, and you will be prompted to enter your activation code.

  3. Agree to the telemetry terms and conditions, and you’re ready to start receiving AI-generated code directly in Neovim.

Configure the Plugin

By default, the following key bindings are configured:

Key      Action
Tab      Accept the suggestion
Ctrl-]   Dismiss the current suggestion
Alt-[    Cycle to the next suggestion
Alt-]    Cycle to the previous suggestion

I have the Alt key on my system reserved for another application, so I chose to remap these to Ctrl-\, Ctrl-j, and Ctrl-k respectively. I did this by adding the following to my Neovim config file, typically located at ~/.config/nvim/init.vim (or init.lua).

imap <silent> <C-j> <Plug>(copilot-next)
imap <silent> <C-k> <Plug>(copilot-previous)
imap <silent> <C-\> <Plug>(copilot-dismiss)

As I was editing my Neovim config I was already getting suggestions! Line 137 in the image below is an auto-suggestion from the GitHub Copilot plugin.

Fun with Copilot

This fun example below is written using Copilot’s suggestions, with minimal help from me. I only wrote the initial comment to get it started.

Copilot gave us the basic outline of the steps to take, although it needs a little more help to figure out what we are doing. Adding the API URL should help. Let’s see what happens…

I do find it funny that Copilot would give us the GitHub API as its suggestion for the API URL. I really like how it was able to populate the import block for me based solely on the comments. Thanks Copilot!

Now I’m just going to use the suggestions for each line and see what Copilot can do on its own.

Pretty handy. This is essentially a valid program. Copilot understood we are working with user data and injected a User type in there, but the definition doesn’t exist. Copilot will need to know the structure of our data to give us suggestions on parsing it. Let’s get that in there and then continue with the logic to parse the data.

But wait! There is another issue, if you caught it. The response from the GitHub API will return a single model, while Copilot set us up to parse an array; however, for the sake of this example, I’m going to leave it as is. I’m not planning on running this code anyway.

Neat! Copilot even gave us some fields to work with, though they aren’t the actual fields that are returned from the GitHub API – I know, I know: I’m expecting a lot from a robot. Next, let’s jump into some parsing logic. I’m just going to add some comments to ask Copilot to suggest some solutions for me and see what happens.

Well… I’m impressed. I can absolutely see how this can increase velocity for development. This is especially true for a language like Go that can require more explicit instructions to perform common operations, such as deleting an element from an array of models.
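
Since the screenshots from the original post aren’t reproduced here, the following is a rough, hand-written sketch of the kind of comment-driven program this section describes—not the code Copilot actually generated; the endpoint, struct, and fields are illustrative:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

// User is a deliberately simplified model; the real GitHub API returns many more fields.
type User struct {
    Login string `json:"login"`
    Name  string `json:"name"`
}

func main() {
    // Get the user data from the GitHub API.
    resp, err := http.Get("https://api.github.com/users/octocat")
    if err != nil {
        fmt.Println(err)
        return
    }
    defer resp.Body.Close()

    // Parse the response body into the User model. (As noted above, Copilot's
    // suggestion parsed an array of users instead, even though this endpoint
    // returns a single object.)
    var user User
    if err := json.NewDecoder(resp.Body).Decode(&user); err != nil {
        fmt.Println(err)
        return
    }

    fmt.Println(user.Login, user.Name)
}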

Final Thoughts

Copilot is the perfect name for this tool. I found that it does a terrific job at predicting my intentions as I’m authoring code. Copilot can become a developer’s best friend, and I’m certain that it will quickly change the way programmers write code. I can see how Copilot could be an incredible learning tool. It can be extremely beneficial for students, as well as experienced programmers that are learning the fundamentals of a new language.

Technology is exciting, and it makes our lives more convenient. Perhaps, however, we need to be aware of our dependency on the innovative tools we have available. GitHub Copilot is so useful and convenient that perhaps it’s possible for programmers to depend on it too much. Copilot can save us a lot of time referencing documentation, but this comes with the cost of not fully understanding critical concepts or having our coding muscles atrophy from lack of practice. One should make sure to always take the time to review the suggestions from Copilot, and to research why the solutions it provides work.

Links and Further Reading

If you’d like to learn more about Copilot, the GitHub Copilot documentation (https://github.com/github/copilot-docs) and the copilot.vim repository (https://github.com/github/copilot.vim) are good places to take a deeper dive.

Go Generics in API Design

Go 1.18 has finally landed, and with it comes its own flavor of generics. In a previous post, we went over the accepted proposal and dove into the new syntax. For this post, I’ve taken the last example in the first post and turned it into a working library that uses generics to design a more type-safe API, giving a good look at how to use this new feature in a production setting. So grab yourself an update to Go 1.18, and settle in for how we can start to use our new generics to accomplish things the language couldn’t before.

A note on when to use generics

Before we discuss how we’re using generics in the library, I wanted to make a note: generics are just a tool that has been added to the language. Like many tools in the language, it’s not recommended to use all of them all of the time. For example, you should try to handle errors before using panic since the latter will end up exiting your program. However, if you’re completely unable to recover the program after an error, panic might be a perfectly fine option. Similarly, a sentiment has been circulating with the release of Go 1.18 about when to use generics. Ian Lance Taylor, whose name you may recognize from the accepted generics proposal, has a great quote in a talk of his:

Write Go by writing code, not by designing types.

This idea fits perfectly within the “simple” philosophy of Go: do the smallest, working thing to achieve our goal before evolving the solution to be more complex. For example, if you’ve ever found yourself writing similar functions to:

func InSlice(s string, ss []string) bool {
    for _, c := range ss {
        if s != c {
            continue
        }

        return true
    }

    return false
}

If you then find yourself duplicating this function for other types, like int, it may be time to start thinking about codifying the more abstract behavior the code is trying to show us:

func InSlice[T constraints.Ordered](t T, ts []T) bool {
    for _, c := range ts {
        if t != c {
            continue
        }

        return true
    }

    return false
}

Overall: don’t optimize for problems you haven’t hit yet. Hold off on designing generic types—the abstractions will become visible the more you work with your project. A good rule of thumb here is to keep it simple until you can’t.

Designing Upfront

Although we just discussed how we shouldn’t try to design types before coding and learning the abstractions hidden in our project, there’s an area where I believe we cannot and should not get away from designing the types first: API-first design. After all, once our server starts responding to and accepting request bodies from clients, careless changes to either one can result in an application that no longer works. However, the way we currently write HTTP handlers in Go is rather weakly typed. Let’s go through all the ways this can subtly break or introduce issues to our server, starting with a pretty vanilla example:

func ExampleHandler(w http.ResponseWriter, r *http.Request) {
    var reqBody Body
    if err := json.NewDecoder(r.Body).Decode(&reqBody); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }	

    resp, err := MyDomainFunction(reqBody)
    if err != nil {
        // Write out an error to the client...
    }

    byts, err := json.Marshal(resp)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    w.Header().Set("Content-Type", "application/json")
    w.Write(byts)
    w.WriteHeader(http.StatusCreated)
}

Just to be clear on what this HTTP handler does: it ingests a body and decodes it from JSON, which can return an error. It then passes that decoded struct to MyDomainFunction, which gives us either a response or an error. Finally, we marshal the response back to JSON, set our headers, and write the response to the client.

Picking apart the function: Changing return types

Imagine a small change on the return type of the MyDomainFunction function. Say it was returning this struct:

type Response struct {
    Name string
    Age int
}

And now it returns this:

type Response struct {
    FirstName string
    LastName string
    Age int
}

Assuming that MyDomainFunction compiles, so, too, will our example function. It’s great that it still compiles, but this may not be a great thing since the response will change and a client may depend on a certain structure, e.g., there’s no longer a Name field in the new response. Maybe the developer wanted to massage the response so it would look the same despite the change to MyDomainFunction. Worse yet is that since this compiles, we won’t know this broke something until we deploy and get the bug report.

Picking apart the function: Forgetting to return

What happens if we forgot to return after we wrote our error from unmarshaling the request body?

var reqBody RequestBody
if err := json.NewDecoder(r.Body).Decode(&reqBody); err != nil {
    http.Error(w, err.Error(), http.StatusBadRequest)
    return
}

Because http.Error is part of an imperative interface for dealing with responses back to HTTP clients, it does not cause the handler to exit. Instead, the client will get their response, and go about their merry way, while the handler function continues to feed a zero-value RequestBody struct to MyDomainFunction. This may not be a complete error, depending on what your server does, but this is likely an undesired behavior that our compiler won’t catch.

Picking apart the function: Ordering the headers

Finally, the most silent error is writing a header code at the wrong time or in the wrong order. For instance, I bet many readers didn’t notice that the example function will write back a 200 status code instead of the 201 that the last line of the example wanted to return. The http.ResponseWriter API has an implicit order that requires that you write the header code before you call Write, and while you can read some documentation to know this, it’s not something that is immediately called out when we push up or compile our code.
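
In the vanilla handler, the fix is simply to set the status code before the body is written:

w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated)
w.Write(byts)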

Being Upfront about it

Given all these (albeit minor) issues exist, how can generics help us to move away from silent or delayed failures toward compile-time avoidance of these issues? To answer that, I’ve written a small library called Upfront. It’s just a collection of functions and type signatures to apply generics to these weakly-typed APIs in HTTP handler code. We first have library consumers implement this function:

type BodyHandler[In, Out, E any] func(i BodyRequest[In]) Result[Out, E]

As a small review of the syntax, this function takes any three types for its parameters: In, the type produced by decoding the body; Out, the type you want to return; and E, the error type you want to return to your client when something goes awry. Next, your function will accept an upfront.BodyRequest type, which is currently just a wrapper for the request and the JSON-decoded request body:

// BodyRequest is the decoded request with the associated body
type BodyRequest[T any] struct {
    Request *http.Request
    Body    T
}

And finally, the Result type looks like this:

// Result holds the necessary fields that will be output for a response
type Result[T, E any] struct {
    StatusCode int // If not set, this will be a 200: http.StatusOK

    value      T
    err        *E
}

The above struct does most of the magic when it comes to fixing the subtle, unexpected pieces of vanilla HTTP handlers. Rewriting our function a bit, we can see the end result and work backward:

func ExampleHandler(in upfront.BodyRequest[Body]) upfront.Result[DomainResp, error] {
    resp, err := MyDomainFunction(in.Body)
    if err != nil {
        return upfront.ErrResult(
            fmt.Errorf("error from MyDomainFunction: %w", err),
            http.StatusInternalServerError,
        )
    }

    return upfront.OKResult(
        resp,
        http.StatusCreated,
    )
}

We’ve eliminated a lot of code, but hopefully, we’ve also eliminated a few of the “issues” from the original example function. You’ll first notice that the JSON decoding and encoding are handled by the upfront package, so there are fewer places to forget a return. We also use our new Result type to exit the function, and it takes in a status code. The Result type we’re returning has a type parameter for what we want to send back from our handler. This means that if MyDomainFunction changes its return type, the handler will fail compilation, letting us know we broke our contract with our callers long before we git push. Finally, the Result type also takes a status code, so it will handle setting it at the right time (before writing the response).

And what’s with the two constructors, upfront.ErrResult and upfront.OKResult? These are used to set the package private fields value and err inside the Result struct. Since they’re private, we can enforce that any constructors of the type can’t set both value and err at the same time. In other languages, this would be similar (definitely not the same) to an Either type.
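
For reference, constructors along these lines would do the job—this is only a sketch, not necessarily the library’s actual implementation—ensuring that only one of value or err is ever populated:

// OKResult builds a successful Result; only the value field is set.
func OKResult[T, E any](value T, statusCode int) Result[T, E] {
    return Result[T, E]{StatusCode: statusCode, value: value}
}

// ErrResult builds a failed Result; only the err field is set.
func ErrResult[T, E any](err E, statusCode int) Result[T, E] {
    return Result[T, E]{StatusCode: statusCode, err: &err}
}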

Final thoughts

This is a small example, but with this library, we can get feedback about silent issues at compile time, rather than when we redeploy the server and get bug reports from customers. And while this library is for HTTP handlers, this sort of thinking can apply to many areas of computer science and areas where we’ve been rather lax with our types in Go. With this blog and library, we’ve sort of reimplemented the idea of algebraic data types, which I don’t see being added to Go in the foreseeable future. But still, it’s a good concept to understand: it might open your mind to think about your current code differently.

Having worked with this library in a sample project, there are a few areas for improvement that I hope we see in future patches. The first being that we cannot use type parameters on type aliases. That would save a bunch of writing and allow library consumers to create their own Result type with an implicit error type instead of having to repeat it everywhere. Secondly, the type inference is a little lackluster. It’s caused the resulting code to be very verbose about the type parameters. On the other hand, Go has never embraced the idea of being terse. If you’re interested in the library’s source code, you can find it here.

All that being said, generics are ultimately a really neat tool. They let us add some type safety to a really popular API in the standard library without getting too much in the way. But as with any tool, use them sparingly and where they apply. As always: keep things simple until you can’t.

Why the Usage of Instrumentation Within Monitoring Tools Should be Implemented in Your Next Web Project

When designing a web application, a common strategy is to use a monitoring tool such as Grafana or Datadog. There are many benefits to doing this, such as log querying, monitoring the health of applications, and viewing performance; instrumentation with custom metrics and log tags can also help when identifying problems. How is this instrumentation set up, and how can it be visualized within the monitoring tools?

Web Monitoring

Monitoring tools such as Grafana and Datadog are used in everyday commercial applications for tasks such as log querying, monitoring the health of applications, and viewing performance. Log querying can be integrated with tools such as Loki. Application health can be surfaced through dashboards, which give a bird’s-eye view of how an application is doing at a high level; many teams include metrics such as failures from a particular endpoint or error logs to determine an application’s health, and service performance can factor into that picture as well. Custom metrics can help with determining failure reasons or counts of specific failures within an application.

Prometheus is a framework that can help capture these custom metrics. By instrumenting an application and exposing the metrics over an endpoint, they can be scraped by a monitoring tool for visualization and querying purposes. The example below shows how Prometheus can be used to instrument a sample Go application and how the metrics can be visualized in Grafana.

An Example of Instrumenting a Web Application

The example below demonstrates how Prometheus metrics are instrumented into a Go web application. The code in this article is part of a larger runnable demo available in BNR-Blog-Prometheus-Monitoring; the tooling it relies on is described in the sections below.

Kubernetes Setup

The Kubernetes setup for this example was configured using Helm. This allows a service chart to be instantiated along with any dependencies noted in the Chart.yaml file. A lot of the upfront work of defining the service configurations is handled automatically by Helm with the helm create command.

Skaffold is paired alongside Helm for making local deployment easy. This will handle Dockerfile image building/caching, deployment through Helm, and port-forwarding the service ports to the host machine. The sample repo listed above contains the instructions for how to run this example locally.

Handler Metrics Instrumentation

func HandleRoute(logger *zap.Logger) http.HandlerFunc {
    return func(writer http.ResponseWriter, request *http.Request) {
        queryParams := request.URL.Query()

        if _, ok := queryParams["failure"]; ok {
            if failureReasons, ok := queryParams["reason"]; ok {
                failureCounter.With(prometheus.Labels{"reason": failureReasons[0]}).Inc()
                logger.Error("error with sample route", zap.String("reason", failureReasons[0]))
            } else {
                failureCounter.With(prometheus.Labels{"reason": "server_error"}).Inc()
                logger.Error("error with sample route")
            }
            writer.WriteHeader(http.StatusInternalServerError)
        } else {
            successCounter.Inc()
            logger.Info("successful call to sample route")
            writer.WriteHeader(http.StatusOK)
        }
    }
}

HandleRoute() defines the handler function for the route /sample-route. The route is intended to trigger a success or a failure depending on whether the failure query parameter is set. When a success occurs, the Prometheus success counter is incremented by one. When a failure occurs, the Prometheus failure counter is incremented by one; if a failure reason is provided, it is attached to the metric as a label, and otherwise it defaults to server_error. There are also formatted JSON logs for each case that can be found in Grafana.
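
The successCounter and failureCounter used in HandleRoute aren’t shown in the excerpt above. With the Prometheus Go client they could be declared and exposed roughly like this—metric names, the port, and the wiring are illustrative and may differ from the demo repository:

package main

import (
    "net/http"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
    // successCounter counts successful calls to /sample-route.
    successCounter = promauto.NewCounter(prometheus.CounterOpts{
        Name: "sample_route_success_total",
        Help: "Number of successful calls to /sample-route.",
    })

    // failureCounter counts failed calls, labeled by failure reason.
    failureCounter = promauto.NewCounterVec(prometheus.CounterOpts{
        Name: "sample_route_failure_total",
        Help: "Number of failed calls to /sample-route, by reason.",
    }, []string{"reason"})
)

func main() {
    // Expose the default registry so Prometheus can scrape it.
    http.Handle("/metrics", promhttp.Handler())

    // HandleRoute is the handler shown above; in the full demo a zap logger
    // would be constructed and passed in here.
    // http.Handle("/sample-route", HandleRoute(logger))

    http.ListenAndServe(":8080", nil)
}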

Using the above custom metrics/logs allows for greater customization on specific failure cases in Grafana when monitoring the service. The Grafana dashboard shows the endpoint health based on failure reason and displays the failure rate based on the metrics. The dashboard also shows the error logs associated with the service. An example of the Grafana dashboard that utilizes these captured metrics and logs is below.

Grafana Service Health Dashboard

The dashboard shows the high-level status of the sample-route endpoint and thus the sample app since this is the only route served. The success/failure rate is calculated based on the exposed Prometheus success/failure metrics. The specific failure reason is shown in a time series so specific error spikes can be observed. There are error logs below if more inspection on the error reasons is desired. If tracing is connected to the logs, one could click on the error log instance to view the full timeline of the request to see a detailed view of what occurred.

Conclusion

In summary, the instrumentation of a web application can be accomplished using Prometheus or a similar framework. These metrics are then scraped through an exposed endpoint into the preferred monitoring solution. Grafana or a similar solution can then be used to ingest these metrics and visualize them through dashboards. These dashboards can be useful for determining the health of an application and the details of a failure. In addition to metrics, structured logs can be useful in these dashboards for showing relevant information associated with failures. Attaching traces to these logs is also beneficial as a developer can trace the lifecycle of the request to see how a specific request might have failed.

Which .NET Design Pattern is Best For Your Next Project

There are several design patterns used these days in the .NET ecosystem. What are they? What are the benefits and drawbacks of each pattern? When given a task to design a solution, which pattern should you choose?

Today’s .NET projects are more than just a few lines of code hidden behind a Windows form. They are a construction of classes, projects, or even entire solutions grouped by responsibility. By grouping classes in specific ways, we are able to build quickly and implement features that help you create robust and testable solutions.

Just a quick note: the last letter in each design pattern’s name is where the business logic lies. It’s where you morph data. Others say it’s the “driver” of the application—it’s used to navigate the application. While the latter holds true for most patterns, there are outliers that make this false. We will look more into this below. But first, let’s jump into the four patterns we’ll be discussing today—MVP, MVC, MVVM, and MVVMC.

Model-View-Presenter (MVP)

Although it may seem like MVP is the most valuable (slow clap), it is rarely used these days since it’s been replaced by more robust and testable design patterns. In MVP, the presentation layer (the layer with all the awesome-looking controls) is where the business logic lies. In .NET, the MVP pattern is commonly used in Windows Forms, where the presentation’s code-behind is in charge of most of the business logic. Since the View itself sits in the presentation layer, this design can be redundant.

While this is great in small projects, unit testing is almost impossible since you have to spin up the application to test any logic. Mocking data is possible, but automating unit testing is not (or at least not easily possible without some major overhauling of the structure). Most of your tests will have to rely on reading data that may be formatted for the presentation layer, which may result in more assertions and more data conversions to test your application.

Model-View-Controller (MVC)

If you have worked on .NET web applications, you may have run across the MVC pattern. While it is not a true MVC pattern, it does hold many characteristics of MVC. In MVC, you have a Controller, a View, and a Model. The Controller is exactly that—a controller: it controls the business logic and navigation. The Model (a.k.a. a Data Transfer Object, or DTO) is introduced here and is used to transport data from the data source to the View. The View binds to the Model and displays what the Model contains in a user-readable way.

There are a number of advantages this pattern has over its predecessor, MVP. You can now mock the data source and fill the Models with data without using a hard data source. With this, you can now test your domain (where all the magic happens) without running the application. This is good news in the automation world, where you build and test your application in an automated build. The builds can not only test your application without running it, but they can also run multiple tests in parallel.

Model-View-ViewModel (MVVM)

If you have worked with Windows Presentation Foundation (WPF), then you have at least heard of the MVVM pattern. The MVVM pattern is very similar to MVC (after all, it manifested from MVC), except there are some differences. MVVM uses a shared (or portable) domain, meaning that the domain can be reused across several front ends, or presentation layers, and can work with multiple data stores.

In the beginning, MVVM was used to share your domain across Windows Desktop, Windows Phone, and Windows Apps. Since the developer only had to develop and test the domain once for each type of device, it saved a lot of time in development and testing. Much like the Controller in MVC, the ViewModel is at the heart of the design. It contains the business logic and controls how data moves in and out of the application. Also much like the Controller, the ViewModel sends and receives data with the use of a Model. However, the ViewModel (unlike the Controller) also holds properties the View will bind to. It does not pass a Model to the View; instead, the View binds to the ViewModel. Because of this, the View can retrieve real-time updates from the ViewModel, all while keeping a separation between the View and the ViewModel.

Since Microsoft abandoned the beloved Windows Phone and increased the portability of Windows Apps, MVVM has changed over the years: from a portable domain that could be shared across Silverlight applications, Windows Phone applications, desktop applications, and Windows Apps, to more of a powerful design pattern that relies on data binding to the View while keeping the separation between the layers—all while allowing unit tests and build automation. You can see why it is still used today.

Model-View-ViewModel-Controller (MVVMC)

While this pattern’s name looks very confusing, it’s a pattern used very frequently in the ASP.NET MVC space. Instead of binding the View to a Model that comes from a database, the Controller will take many Models and combine them into a ViewModel. The View will then bind to the ViewModel. The ViewModel will not receive real-time updates from the Controller like in MVVM, but it will ease the pain of dealing with multiple Models in a View.

There is no real advantage as far as testing goes with this pattern. The Models are still used to transport data to and from the data source. The advantage strictly deals with the Presentation layer of the application.

Which Design Pattern Do You Choose?

While the world remains heavily invested in web technologies, there is (and always will be) a place for native applications. Knowing what design pattern to use is just as crucial as using the design pattern correctly:

  • MVP is appropriate for very small applications that do not require unit testing or where scalability is minimal.
  • MVC is appropriate for applications that usually have one front end, and run on one platform, such as small enterprise applications. This pattern is also used widely in the webspace.
  • MVVM is appropriate for large, complex enterprise applications that require unit testing and real-time updates to the front end.
  • MVVMC is used in applications where multiple models are combined for use in the Presentation Layer. This pattern is also used alongside the MVC pattern.

There is more to these design patterns, and to the infrastructure around them, than this post covers—such as Dependency Injection, Mocking, Automated Builds, and Automated Testing—but those felt out of scope. This article aims to contextualize the common design patterns you will come across when working with .NET.

Top Five Things I Learned at SRECon22 Americas

As a full-stack web developer, I attended SRECon to expand my thinking about the reliability and observability of the services I develop. Here are my top 5 takeaways:

1. Evaluating Your Program – Reaction, Learning, Behavior, Results

Casey Rosenthal’s talk titled “The success in SRE is silent” reminded us that while nobody thanks you for the incident that didn’t happen, you can still evaluate how the people around you are learning. First, check their reaction—thumbs up or thumbs down—to the changes. Eventually, they will be able to gauge that they’ve learned something. After that, you may notice shifts in behavior, such as asking for help setting up a monitor on Slack (where before, they might not have added a monitor at all). Finally, you see results: new things making it to production, such as that new monitor.

2. Brownouts – Intentional Degradation to Avoid Blackout

Alper Selcuk shared Microsoft’s response to the massive expansion in the use of Microsoft Teams within education at the beginning of the pandemic. One of their techniques for avoiding service blackouts was brownouts, such as no longer displaying the cursor locations of other users on a shared document, preloading fewer events on the calendar, and decreasing the quality of videos on conference calls. This allowed Microsoft to keep the services online while increasing capacity and optimizing the service for the new load level. What brownouts could be applied to your service if it were to experience a sudden increase in demand?

3. Skydiving and SRE – When to Stop Fixing and Fail to the Backup

Victor Lei applied his skydiving experience to disaster recovery. In skydiving, there is a specific altitude at which you stop trying to fix your main parachute and decide what is next. Then there is another altitude at which the skydiver automatically fails over to their backup parachute. Timeboxing is a technique for limiting the time spent testing a new idea or optimization, but it’s easy to lose track of time during a disaster. I’d like to see more guidelines for how long the on-call engineer should try to fix a problem before failing over to the backup or calling in additional support.

4. Emergent Organizational Failure – Trust

Mattie Toia discussed emergent organizational failure. One point was forgetting how hard prioritization is, which can be helped by collaborating on mental models and making sharing and communication easy. Another was using incentives to replace dedication when the organization instead needs to demonstrate trust through actions. At the center of all five points was trust: how to build it, and recognizing that each organization member is complex and has their own views of the world and the organization.

5. Scientific Method for Resilience – Observe, Research, Hypothesis, Test, Analyze, Report

Christina Yakomin explained how to use the scientific method to test the resilience of systems.

  • First, consider your system and all its parts. Then, research all the ways the system might be able to fail. (Newer engineers are especially helpful with this since they are less likely to dismiss failure paths that long-time engineers might ignore.)
  • For each failure path, hypothesize about what will happen. (Make sure everyone can share their thoughts on what will happen rather than just agreeing with the first person to respond.)
  • Then, test the failure path and see what happens (Note: If you’re planning to try something extreme like taking the entire database offline, you might have to test in staging instead of production but be sure to simulate real load during the test.)
  • Analyze your findings. Even if the results matched what was expected, is that the behavior you want your system to have?
  • Report the findings and document the test process since you will likely want to repeat this test in the future.
  • Finally, repeat this process regularly (perhaps quarterly or yearly).

Summary

I look forward to helping each project I’m on continue to grow in features, reliability, and observability to weather the good times and the bad.

SRECon is an open-access conference; videos of all the talks will be freely available from USENIX in the weeks following the conference.

Exploring Go v1.18's Generics

Generics in Go are nearly here! Here's what that means for you and some real use cases.

Go 1.18 is set to arrive in February 2022, and with it comes the long-awaited addition of generics to the language. It’s been a long process to find something that works with the current Go ecosystem, but a proposal has been accepted that tries to protect the objectives of the language while adding the largest changes to the language in over a decade. Will developers add more complexity and make things less maintainable with generics? Or will this enable new heights and capabilities for gophers everywhere?

In this post, we’ll go over specifically what type parameters and constraints look like in Go 1.18, but we won’t be covering every detail of the proposal itself: we’ll give enough of an overview to use them, and then some real-life examples of where generics are going to solve headaches for gophers. As such, no article will ever be a replacement for going over the proposal itself. It’s quite long, but each piece is well explained and still approachable.

Getting set up

To start playing with generics (or really just the next Go version), there are two simple ways:

Go Playground

You can use the Go 2 playground in your browser to execute Go 1.18 samples. A word of caution—this uses a tool that was made to help those trying out the new syntax for proposals, and is no longer maintained. So if something doesn’t work here, it’s likely due to this tool not keeping up with changes since the specifications were finalized.

gotip

This tool is used to compile and run the Go development branch on your local machine without replacing your normal go terminal tools.

  1. go install golang.org/dl/gotip@latest
  2. gotip download
  3. Set your version to 1.18 in your go.mod file for your repo directory.

Once that’s all done, the gotip command can be used anywhere you’ve been using the stable go command, e.g. gotip test ./....

Type parameters

The big change enabling generic structures and data types is the introduction of a type-parameter for type aliases, structs, methods, and standalone functions. Here’s some sample syntax for a generic-looking Node type:

type Node[T any] struct {
    Value T
    Left  *Node[T]
    Right *Node[T]
}

This looks a lot like the existing Go code (which is great), just the addition of T any inside of square brackets. We can read that as a normal parameter to a function, like id string, except this refers to a type that will determine what type the Value field will hold. This could be any type you want, as specified by the any type constraint that we’ll touch on later.

You can instantiate a generic type like this:

node := Node[int]{}

In this snippet, we’ve added square brackets behind the type name to specify that int is the type parameter, which creates an instance of the Node struct typed such that its Value field is an int. When calling generic functions, you can often omit the type parameters and let the compiler infer them from the arguments; for composite literals like this one, though, the released Go 1.18 still requires the type argument to be spelled out:

node := Node[int]{
    Value: 17,
}

As before, this creates a Node that holds an int. Type parameters can also be used on methods belonging to a generic struct, like so:

func (n Node[T]) Val() T {
    return n.Value
}

But these are really pointing to the type parameters on the receiver (Node) type; methods cannot declare their own type parameters, they can only reference parameters already declared on the base type. Even if a method doesn’t use them, the receiver still has to list all of the base type’s parameters (unused ones can be named with the blank identifier).
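
For example, a method that never touches T still lists it on the receiver:

// IsLeaf never uses T, but the receiver must still name Node's type parameter.
func (n Node[T]) IsLeaf() bool {
    return n.Left == nil && n.Right == nil
}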

Type definitions can also accept parameters:

type Slice[T any] []T

Without generics, library authors would write this to accommodate any type:

type Node struct {
    Value interface{}
    Left  *Node
    Right *Node
}

Using this Node definition, library consumers would have to cast each time the Value field was accessed:

v, ok := node.Value.(*myType)
if !ok {
    // Do error handling...
}

Additionally, interface{} can be dangerous by allowing any type, even different types for separate instances of the struct like so:

node := Node{
    Value: 62,
    Left: &Node{
        Value: "64",
    },
    Right: &Node{
        Value: []byte{},
    },
}

This results in a very unwieldy tree of heterogeneous types, yet it still compiles. Not only can casting types be tiresome, each time that a typecast happens you open yourself to an error happening at runtime, possibly in front of a user or critical workload. But when the compiler handles type safety and removes the need for casting, a number of bugs and mishaps are avoided by giving us errors that prevent the program from building. This is a great argument in favor of strong-type systems: errors at compilation are easier to fix than debugging bugs at runtime when they’re already in production.

Generics solve a longstanding complaint where library authors have to resort to using interface{}, or manually creating variants of the same structure, or even use code generation just to define functions that could accept different types. With the addition of generics, it removes a lot of headache and effort from common developer use cases.

Type constraints

Back to the any keyword that was mentioned earlier. This is the other half of the generics implementation, called a type constraint, and it tells the compiler to narrow down what types can be used in the type parameter. This is useful for when your generic struct can take most things, but needs a few details about the type it’s referring to, like so:

func length[T any](t T) int {
    return len(t)
}

This won’t compile, since any could allow an int or any sort of struct or interface, so calling len on those would fail. We can define a constraint to help with this:

type LengthConstraint interface {
    string | []byte
}

func length[T LengthConstraint](t T) int {
    return len(t)
}

This type of constraint defines a typeset: a collection of types (separated by a pipe delimiter) that we’re allowing as good answers to the question of what can be passed to our generic function. Because we’ve only allowed two types, the compiler can verify that len works on both of those types; the compiler will check that we can call len on a string (yes) and a byte array (also yes). What if someone hands us a type alias with string as the underlying type? That won’t work since the type alias is a new type altogether and not in our set. But worry not, we can use a new piece of syntax:

type LengthConstraint interface {
    ~string | ~[]byte
}

The tilde character specifies an approximation constraint ~T that can be fulfilled by types that use T as an underlying type. This makes values of type info []byte satisfactory arguments to our length function:

type info []byte

func main() {
    var i info
    a := length(i)
    fmt.Printf("%#v", a)
}

Type constraints can also enforce that methods are present, just like regular interfaces:

type LengthConstraint interface {
    ~string | ~[]byte
    String() string
}

This reads as before, making sure that the type is either based on a string or byte array, but now ensures implementing types also have a method called String that returns a string. A quick note: the compiler won’t stop you from defining a constraint that’s impossible to implement, so be careful:

type Unsatisfiable interface {
    int
    String() string
}

Finally, Go 1.18 also adds a constraints package (published as golang.org/x/exp/constraints rather than in the standard library) that defines some utility type sets, like constraints.Ordered, which contains all the types that the less-than and greater-than operators can work with. So don’t feel like you need to define your own constraints all the time; odds are you’ll have one provided already.
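
As a minimal sketch of using that package:

package main

import (
    "fmt"

    "golang.org/x/exp/constraints"
)

// Max works for any type the > operator supports: integers, floats, and strings.
func Max[T constraints.Ordered](a, b T) T {
    if a > b {
        return a
    }
    return b
}

func main() {
    fmt.Println(Max(3, 7))         // 7
    fmt.Println(Max("ant", "bee")) // bee
}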

No type erasure

Go’s proposal for generics also states that there will not be any type erasure when using them. That is, all compile-time information about the types used in a generic function will still be available at runtime; no specifics about type parameters will be lost when using reflect on instances of generic functions and structs. Consider this example of type erasure in Java:

// Example generic class
public class ArrayList<E extends Number> {
    // ...
};

ArrayList<Integer> li = new ArrayList<Integer>();
ArrayList<Float> lf = new ArrayList<Float>();
if (li.getClass() == lf.getClass()) { // evaluates to true
    System.out.println("Equal");
}

Despite the classes having differing types contained within, that information goes away at runtime, which can be useful to prevent code from relying on information being abstracted away.

Compare that to a similar example in Go 1.18:

type Numeric interface {
    int | int64 | uint | float32 | float64
}

type ArrayList[T Numeric] struct {}

var li ArrayList[int64]
var lf ArrayList[float64]
if reflect.TypeOf(li) == reflect.TypeOf(lf) { // Not equal
    fmt.Printf("they're equal")
    return
}

All of the type information is retained when using reflect.TypeOf, which can be important for those wanting to know what sort of type was passed in so they can act accordingly. However, it should be noted that no information is provided about the constraint or approximation constraint that the type matched. For example, say we had a parameter whose type MyType (defined as a string) matched a constraint of ~string: reflect will only tell us about MyType, not about the constraint that it fulfilled.
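
Here is a small sketch of that caveat:

package main

import (
    "fmt"
    "reflect"
)

type MyType string

// printType accepts anything whose underlying type is string.
func printType[T ~string](t T) {
    // reflect reports the concrete type; nothing about the ~string
    // constraint it satisfied is visible at runtime.
    fmt.Println(reflect.TypeOf(t))
}

func main() {
    printType(MyType("hello")) // prints "main.MyType"
}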

Real-life example

There will be many examples of generics being used in traditional computer science data models, especially as the Go community adopts generics and updates their libraries after their release. But to finish out this tour of generics, we’ll look at a real-life example of something that gophers work with almost every single day: HTTP handlers. Gophers will always remember the function signature of a vanilla HTTP handler:

func(w http.ResponseWriter, r *http.Request)

Which honestly doesn’t say much about what the function can take in or return: it can write anything back out, and we’d have to manually call the function to get the response for unmarshaling and testing.

What follows is a function for constructing HTTP handlers out of functions that take in and return useful types rather than writers and requests that say nothing about the problem domain.

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

func AutoDecoded[T any, O any](f func(body T, r *http.Request) (O, error)) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        // Decode the body into t
        var t T
        if err := json.NewDecoder(r.Body).Decode(&t); err != nil {
            http.Error(w, fmt.Sprintf("error decoding body: %s", err), http.StatusBadRequest)
            return
        }

        // Call the inner function
        result, err := f(t, r)
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        marshaled, err := json.Marshal(result)
        if err != nil {
            http.Error(w, fmt.Sprintf("error marshaled response: %s", err), http.StatusInternalServerError)
            return
        }

        if _, err := w.Write(marshaled); err != nil {
            http.Error(w, fmt.Sprintf("error writing response: %s", err), http.StatusInternalServerError)
            return
        }
    }
}

type Hello struct {
    Name string `json:"name"`
}

type HelloResponse struct {
    Greeting string `json:"greeting"`
}

func handleApi(body Hello, r *http.Request) (HelloResponse, error) {
    return HelloResponse{
        Greeting: fmt.Sprintf("Hello, %s", body.Name),
    }, nil
}

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/api", AutoDecoded(handleApi))

    http.ListenAndServe(":3000", mux)
}

AutoDecoded is a generic function with two type parameters: T and O. These represent the incoming body type we expect from the request and the output type we expect to hand back to the requester, respectively. Looking at the rest of AutoDecoded’s function signature, we see that it wants a function that maps an instance of T and an *http.Request to an instance of O and an error. Stepping through AutoDecoded’s logic, it starts by decoding the incoming JSON request body into an instance of T. With the body stored in the variable t, it calls the inner function that was passed in as the argument. AutoDecoded then handles the case where the inner function returns an error, and if it doesn’t, marshals the output and writes it back to the client as JSON.

Below AutoDecoded is our HTTP handler, handleApi, and we can see that it doesn’t follow the normal pattern for an http.HandlerFunc; this function accepts and returns the types Hello and HelloResponse defined above it. Compared to a function using an http.ResponseWriter, handleApi is much easier to test since it returns a specific type or an error rather than writing bytes back out. Plus, the types on this function now serve as a bit of documentation: other developers reading this can look at handleApi’s type signature and know what it expects and what it will return. Finally, the compiler can tell us if we’re trying to return the wrong type from our handler. Before, you could write anything back to the client and the compiler wouldn’t care:

func(w http.ResponseWriter, r *http.Request) {
    // Decoding logic...

    // Our client was expecting a `HelloResponse`!
    w.Write([]byte("something else entirely"))
}
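By contrast, a handler written for AutoDecoded can be exercised as a plain function. Here is a minimal sketch of such a test, which would live in a _test.go file in the same package and reuse handleApi and the types above; the expected greeting is simply what that handler produces:

package main

import "testing"

func TestHandleApi(t *testing.T) {
    resp, err := handleApi(Hello{Name: "Josh"}, nil) // no ResponseWriter or recorder needed
    if err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    if got, want := resp.Greeting, "Hello, Josh"; got != want {
        t.Errorf("got %q, want %q", got, want)
    }
}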

In the main function, we see type-parameter inference at work: we don’t need to spell out the type parameters when wiring the pieces together with AutoDecoded(handleApi). And since AutoDecoded returns an http.HandlerFunc, we can take the result of the call and plug it into any Go router, like the http.ServeMux used here. With that, we’ve introduced more concrete types into our program.
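If inference ever falls short, or you simply prefer to be explicit, the type arguments can also be spelled out; this call is equivalent to the one above:

mux.HandleFunc("/api", AutoDecoded[Hello, HelloResponse](handleApi))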

As you can hopefully now see, it’s going to be nice to have this compiler support in Go, helping us write more generic types and functions while avoiding the corners of the language where we previously had to resort to interface{}. The introduction of type parameters will be a gradual process overall, as Russ Cox has suggested not touching the existing standard library types and structures in the next Go release. Even if you don’t want to use generics, the next release will be compatible with your existing code, so adopt whatever features make sense for your project and team. Congrats to the Go team on another release, and congrats to the gophers who have patiently awaited these features for so long. Have fun being generic!

The post Exploring Go v1.18’s Generics appeared first on Big Nerd Ranch.

Where is Ruby Headed in 2021? https://bignerdranch.com/blog/where-is-ruby-headed-in-2021/ Tue, 16 Nov 2021 14:16:31 +0000

Where is the Ruby language headed? At RubyConf 2021, the presentations about the language focused on static typing and performance—where performance can be subdivided into execution speed, I/O throughput, and parallel execution on multiple cores. These efforts are geared toward expanding the set of systems for which Ruby is a good fit.

Static Typing

Static type checking can improve the developer experience by preventing type errors and improving editor assistance. Static type checking is most commonly implemented with type declarations in code, and the third-party Sorbet library implements type declarations for Ruby. However, Yukihiro Matsumoto (the creator of Ruby, aka “Matz”) has emphasized in several RubyConf keynotes that Ruby will not add type declarations officially. Instead, the Ruby approach to static type checks is via Ruby Signature (RBS) files: separate files that record type information, analogous to TypeScript .d.ts files. RBS files can be written by hand or automatically generated by tooling.

RBS was introduced in Ruby 3.0, but until now its benefits have been largely theoretical. Those benefits are starting to materialize with TypeProf-IDE, a VS Code extension created by Ruby core member Yusuke Endoh. The extension runs TypeProf on your Ruby code as you edit it, inferring types, generating RBS type signature files, providing inline documentation and autocomplete, and calling out type errors. You can try out TypeProf-IDE today by following along with my TypeProf-IDE tutorial blog post.

Performance

Execution speed, I/O throughput, and parallel processing can all be grouped under the heading of performance, and work is happening in Ruby in all these areas.

Ruby execution speed is being worked on from multiple angles. First, performance enhancements are being made within YARV, the main CRuby interpreter. One significant example is Eileen Uchitelle from GitHub presenting “How we sped up CVARs in Ruby 3.1+”. Although CVARs, or class variables, aren’t used very widely in Ruby application code, they are heavily used within Rails itself, so any applications built on Rails will benefit.

Besides the interpreter, there are several ongoing efforts to compile Ruby to native code. MJIT is a just-in-time compiler that was included in Ruby 2.6. At this year’s RubyConf, Shopify presented another JIT compiler, YJIT, which is included in the first Ruby 3.1 preview release. Stripe presented the Sorbet Compiler, an ahead-of-time (AOT) compiler still in development, which leans on the static type information in code that uses Sorbet type declarations.

I/O throughput has been significantly improved via async fibers. Fibers are a lightweight concurrency mechanism that has been in Ruby since 1.9. The async gem, created by Samuel Williams, uses fibers to let a Ruby program switch to other fibers when blocked on supported I/O operations, without requiring any special language syntax. Bruno Sutic posted a helpful overview of the async gem and presented more detail in his RubyConf session on Async Ruby. Async can be used as far back as Ruby 2.5 if you use async-specific I/O gems, but in Ruby 3.0 and above, all blocking operations are compatible with async, whether in the Ruby standard library or other gems. Fibers do not provide parallel execution: even with multiple cores, only one fiber can be actively executing at a time. But one Ruby process on one core can run millions of fibers concurrently, as Samuel Williams has demonstrated with a Ruby server handling one million WebSocket connections.

While async fibers do not run in parallel on multiple cores, Ractors were added in Ruby 3.0 as an experimental multicore concurrency mechanism. Each Ractor has an isolated object space and allows only limited sharing of data with other Ractors, which avoids threading issues like race conditions and deadlocks. As a result, Ractors are not bound by the Global Interpreter Lock, allowing true parallel processing on separate cores. Currently, each Ractor has its own native thread, but future work will allow Ractors to share threads, reducing their memory consumption and making them easier to work with. At RubyConf, Vinicius Stock demonstrated using Ractors to run tests on multiple CPUs.

Matz had set a “Ruby 3x3” goal: make Ruby 3 three times faster than Ruby 2.0 on some significant benchmarks, and Ruby 3.0 met that goal. This push for performance will continue: in this year’s keynote, Matz set a new performance goal, “Ruby 3x3 Redux,” under which a future Ruby 3.x release will be three times faster than Ruby 3.0 on some benchmarks.

How to Think About Ruby

These new features seek to reduce the distance between Ruby and other languages: to gain some of the type safety of statically typed languages, the I/O throughput of async runtimes like Node, and the parallelism of channel-based languages like Go. But two factors limit how far Ruby can go in these directions. The first is compatibility: the core team doesn’t want to break existing Ruby programs if at all possible. The second is language design: Matz has called Ruby a human-oriented language, and although type safety and performance are being prioritized, they won’t be pursued in a way that compromises what the core team sees as Ruby’s human-oriented design.

The point of these language improvements is not that they erase the advantages other languages have. When the most important factor in your system is I/O throughput, multicore processing, or type safety, you wouldn’t want to choose Ruby from the start: you would want to go with a language like Node, Go, or Haskell respectively.

The way to think about these improvements to Ruby is that they incrementally increase the set of problems for which Ruby is a good solution.

For organizations already using Ruby, these improvements mean that they will be able to do more in Ruby before needing to write a native extension in C or Rust, before needing to extract a microservice in another language, or before considering a rewrite.

For organizations considering what technology to use for a new project, these improvements to Ruby mean that they don’t need to give up Ruby’s productivity benefits so quickly for the sake of other needs. There are still many systems for which the controlling factor is the ability to deliver functionality quickly and at minimal development cost, including startups needing to find product-market fit and internal teams with a limited budget. Systems like these benefit tremendously from Ruby’s high level of abstraction, its rich and mature library ecosystem, and Rails’ support for delivering web services and applications with minimal effort. Each improvement to Ruby removes one more “what if” that could make decision-makers hesitate:

  • “What if the development team gets big enough that we need type safety? Oh, then we can use RBS or Sorbet.”
  • “What if we need to handle lots of WebSocket traffic? Oh, then we can use async fibers.”
  • “What if we need to maximize CPU core usage for a lot of computation?” Okay, that one would still be a stretch for Ruby, but at least Ractors mean you won’t be locked into one core.

These enhancements to Ruby are expanding the set of systems for which the language can offer that velocity benefit. Companies can get a leg up on their competition by recognizing when they have a system for which development velocity is the controlling factor and taking advantage of Ruby’s strengths. RubyConf 2021 demonstrated that Ruby continues to evolve as the core team, individual contributors, and large companies make substantial investments in it.

The post Where is Ruby Headed in 2021? appeared first on Big Nerd Ranch.

Live Ruby Type Checking with TypeProf-IDE https://bignerdranch.com/blog/live-ruby-type-checking-with-typeprof-ide/ Wed, 10 Nov 2021 20:44:41 +0000

In his RubyConf 2021 keynote, the creator of Ruby, Yukihiro Matsumoto, announced TypeProf-IDE, a Visual Studio Code integration for Ruby’s TypeProf tool to allow real-time type analysis and developer feedback. In another session, the creator of TypeProf-IDE, Yusuke Endoh, demoed the extension in more detail. This functionality is available for us to try today in Ruby 3.1.0 preview 1, which was released during RubyConf. So let’s give it a try!

Setup

First, install Ruby 3.1.0 preview 1. If you’re using rbenv on macOS, you can install the preview by executing the following commands in order:

  • brew update
  • brew upgrade ruby-build
  • rbenv install 3.1.0-preview1

Next, create a project folder:

  • mkdir typeprof_sandbox
  • cd typeprof_sandbox

If you’re using rbenv, you can configure the preview to be the version of Ruby used in that directory:

  • rbenv local 3.1.0-preview1

Next, initialize the Gemfile:

  • bundle init

Next, let’s set up Visual Studio Code. Install the latest version, then add the TypeProf VS Code extension and the RBS Syntax Highlighting extension.

Open your typeprof_sandbox folder in VS Code. Next, open the Gemfile and add the typeprof gem:

 git_source(:github) { |repo_name| "https://github.com/#{repo_name}" }

 #gem "rails"
+gem 'typeprof', '0.20.3'

Now install it:

  • bundle install

Getting Type Feedback

To see TypeProf in action, let’s create a class for keeping track of members of a meetup group. Create a file meetup.rb and add the following:

class Meetup
  def initialize
    @members = []
  end

  def add_member(member)
    @members.push(member)
  end

  def first_member
    @members.first
  end
end

It’s possible you will already see TypeProf add type signatures to the file, but more likely you won’t see anything yet. If not, to find out what’s going on, click the “View” menu, then choose “Output”. From the dropdown at the right, choose “Ruby TypeProf”. You’ll see the output of the TypeProf extension, which will likely include a Ruby-related error. What I see is:

[vscode] Try to start TypeProf for IDE
[vscode] stderr: --- ERROR REPORT TEMPLATE -------------------------------------------------------
[vscode] stderr:
[vscode] stderr: ```
[vscode] stderr: Gem::GemNotFoundException: can't find gem bundler (= 2.2.31) with executable bundle
[vscode] stderr:   /Library/Ruby/Site/2.6.0/rubygems.rb:278:in `find_spec_for_exe'

In my case, the command is running in my macOS system Ruby (/Library/Ruby/Site/2.6.0) instead of my rbenv-specified version. I haven’t been able to figure out how to get it to use the rbenv version. As a workaround, I switched to the system Ruby and updated Bundler:

  • rbenv local system
  • sudo gem update bundler
  • rbenv local 3.1.0-preview1

For more help getting the extension running, check the TypeProf-IDE Troubleshooting docs. Of note are the different places from which the extension tries to invoke typeprof. Ensure that your default shell loads Ruby 3.1.0-preview1 and that a typeprof binary is available wherever the extension is looking.

After addressing whatever error you see in the output, quit and reopen VS Code to get the extension to reload. When it succeeds, you should see output like the following:

[vscode] Try to start TypeProf for IDE
[vscode] typeprof version: typeprof 0.20.2
[vscode] Starting Ruby TypeProf (typeprof 0.20.2)...
[vscode] Ruby TypeProf is running
[Info  - 9:03:49 AM] TypeProf for IDE is started successfully

You should also see some type information added above the methods of the class:

screenshot of a code editor showing the Meetup class with type signatures added above each method

Well, that’s not a lot of information. We see that #add_member takes in an argument named member, but its type is listed as untyped (which means the type information is unknown). It returns an Array[untyped], meaning an array containing elements whose type is unknown. Then #first_member says it returns nil, which is incorrect.

Improving the Types and Code

For our first change, let’s look at the return value of #add_member. It’s returning an Array, but I didn’t intend to return a value; this is just a result of Ruby automatically returning the value of the last expression in a method. Let’s update our code to remove this unintentional behavior. Add a nil as the last expression of the method:

 def add_member(member)
   @members.push(member)
+  nil
 end

Now the return type is updated to be NilClass, which is better:

screenshot of an editor showing an add_member method definition, with a type signature showing it returns NilClass

Next, how can we fix the untyped? Endoh recommends a pattern of adding some example code to the file showing the use of the class. Add the following at the bottom of meetup.rb:

if $PROGRAM_NAME == __FILE__
  meetup = Meetup.new
end

Next, type meetup.ad below the line where meetup is assigned. (We’ll explain the $PROGRAM_NAME line in a bit.) An autocomplete dropdown will appear, with add_member selected:

screenshot of an editor showing a variable meetup with the letters "ad" typed as the beginning of a method call. an autocomplete dropdown is shown with the method add_member highlighted

Because TypeProf can see that meetup is an instance of class Meetup, it can provide autocomplete suggestions for methods.

Click add_member in the list, then type an opening parenthesis (. VS Code will add the closing parenthesis ) after the cursor, and another popup will appear with information about the method’s arguments:

screenshot of an editor showing an add_member method call, with parenthesis but no arguments passed. a tooltip shows the method signature

It indicates that the method takes one argument, member, and returns nil. Also note that the type of member is still listed as untyped; we’re still working toward fixing that.

Pass a string containing your name as the argument, then add the rest of the code below:

if $PROGRAM_NAME == __FILE__
  meetup = Meetup.new
  meetup.add_member('Josh')
  first_member = meetup.first_member
  puts first_member
end

What’s the significance of the if $PROGRAM_NAME == __FILE__ conditional? $PROGRAM_NAME is the name of the currently running program, and __FILE__ is the name of the current source file. If they are equal, that means that this Ruby file is being executed directly, which includes when TypeProf runs the file. So this is a way to provide supplemental information to TypeProf.

When you added this code, the type information should have updated to:

screenshot of an editor showing two method definitions, add_member and first_member, along with type signatures above them. both have been updated to show the type String instead of untyped

Why does this added code affect the type information? TypeProf executes the code to see the types that are actually used by the program. By supplying an example use of the class, we give TypeProf more type information to work with. Future TypeProf development may allow it to be more intelligent about inferring type information from RSpec tests and usages elsewhere in the project.

Note that TypeProf now indicates that the member argument is a String, and that #first_member may return either a NilClass or a String. (The reason it might return a NilClass is if the array is empty.)

Making the Code Generic with Type Variables

Let’s put our object-oriented design hats on and think about these types. Is this class really specific to Strings? No, the code doesn’t make any assumptions about what the members are. But TypeProf has coupled our class to one specific other class!

To prevent this, we can manually edit the RBS type signatures generated for our class to indicate just how generic we want Meetup to be.

Create an empty typeprof.rbs file in your project folder. Next, command-click on the type signature above #add_member. The typeprof.rbs file will open, and the RBS type signature for that method will be automatically added to it:

screenshot of an editor showing an RBS file with a type definition for the add_member method of the Meetup class

Next, go back to meetup.rb and right-click the type signature above #first_member. This adds the signature for that method to the RBS file too, but as a separate class declaration:

screenshot of an editor showing an RBS file with two separate Meetup class definitions. each one has a type signature for a different method: first_member and add_member

To keep things simpler, edit the RBS file so there’s a single class with two methods in the same order as in the Ruby file, and save the file:

screenshot of an editor showing an RBS file with a single Meetup class definition containing signatures for two methods: add_member and first_member

Now, let’s edit the signature to use type variables. A type variable is a place where, instead of referencing a specific type, you use a variable that can represent any type. Everywhere the same type variable appears, the type must be the same.

First, add a [MemberT] after the Meetup class name:

screenshot of an editor showing an RBS file with a Meetup class definition. an arrow points to a type variable MemberT that has been added to the class

Next, replace the two occurrences of String with MemberT:

screenshot of an editor showing an RBS file with arrows pointing to the type variable MemberT in two places: as an argument to method add_member, and as part of the return type of method first_member, along with NilClass

What this means is that a given Meetup instance is tied to a certain type, called MemberT. That’s the type of the members you pass to #add_member, and it’s the same type that #first_member returns. So if you pass in a String, you should get a String back. If you pass in a Hash, you should get a Hash.

Switch back to meetup.rb. If you don’t see the type signatures updated, you may need to close and reopen meetup.rb. Then, you should see updated type signatures:

screenshot of an editor showing a Ruby class Meetup. type signatures appear over the methods, including the MemberT type variable as an argument to method add_member, and as part of the return type of method first_member

Note that our MemberT types appear in the signatures of #add_member and #first_member. Also note that the signatures have a # in front of them: this indicates that they’re manually specified in the RBS file.

Now, let’s see what help this gives us. In the statement puts first_member, start typing .up after it. Note that an autocomplete dropdown appears and #upcase is selected:

screenshot of an editor showing a variable first_member with the letters "up" typed as the beginning of a method call. an autocomplete dropdown is shown with the method upcase highlighted

TypeProf knows that meetup is a Meetup object. Because you passed a String into the #add_member method of the meetup object, TypeProf can tell that meetup’s type variable MemberT is equal to the type String. As a result, it can see that its #first_member method will also return a String. So it knows first_member is a String, and therefore it can suggest String’s methods for the autocomplete.

Click upcase to autocomplete it. Now note that first_member.upcase has a red squiggly underlining it. Hover over it to see the error:

screenshot of an editor showing an error indicator under the method call upcase on variable first_member. the error message says "undefined method: nil#upcase"

It says [error] undefined method: nil#upcase. But wait, isn’t first_member a String? The answer is maybe. But it could also be a nil if the meetup hasn’t had any members added to it. And if it is nil, this call to #upcase will throw a NoMethodError. Now, in this trivial program we know there will be a member present. But for a larger program, TypeProf will have alerted us to an unhandled edge case!

To fix this, we need to change the way the type signature is written slightly. In the RBS file, replace (NilClass | MemberT) with MemberT? (don’t miss the question mark):

screenshot of an editor showing an RBS file, with an arrow pointing to the return type of a first_member method, which is MemberT followed by a question mark

? indicates an optional type, a case where a value could be a certain type or it could be nil.

Now, in the Ruby file, wrap the puts call in a conditional:

 first_member = meetup.first_member
-puts first_member.upcase
+if first_member
+  puts first_member.upcase
+else
+  puts 'first_member is nil'
+end

If the red squiggly under the call to #upcase doesn’t disappear, close and reopen meetup.rb to get TypeProf to rerun. After that, if you made the changes correctly, the underline should disappear:

screenshot of an editor showing Ruby code with a call to the first_member method of object meetup. the result is assigned to variable first_member. a conditional checks if first_member is truthy. if so, upcase is called on it and the result is outputted. if first_member is not truthy, the string "first_member is nil" is outputted

TypeProf has guided us to write more robust code! Note that currently TypeProf requires the check to be written as if variable; other idioms like unless variable.nil? and if variable.present? will not yet work.

Next Steps

If you’d like to learn more about TypeProf-IDE, Endoh’s RubyConf 2021 talk should be uploaded to YouTube within a few months. In the meantime, check out the TypeProf-IDE documentation and the RBS syntax docs. And you can help with the continued development of TypeProf-IDE by opening a GitHub Issue on the typeprof repo.

Thank you to Yusuke Endoh for his hard work building the TypeProf-IDE integration, for his presentation, and for helping me work through issues using it during RubyConf!

If you’d like to work at a place that explores the cutting edge of Ruby and other languages, join us at Big Nerd Ranch!

The post Live Ruby Type Checking with TypeProf-IDE appeared first on Big Nerd Ranch.

Embracing Cloud Native https://bignerdranch.com/blog/embracing-cloud-native/ Sat, 02 Oct 2021 18:34:05 +0000

Cloud infrastructure has pushed software toward abstracting the developer away from the underlying hardware, making global networks and copious amounts of computing power available over APIs, and managing large swaths of the lower tiers of the tech stack with autonomous software. Gone are the days of buying bulky servers to own; now we rent pieces of a data center to host our applications. But how does designing for a cloud environment change your application? How do software teams take advantage of all the advancements coming with this new set of infrastructure? This article will go over three pillars of a “Cloud-Native” application and how you can embrace them in your own software.

Embracing Failure

One powerful paradigm the Cloud has brought forth is captured in the Pets vs. Cattle analogy. It contrasts two ways of treating application servers: as pets, which we love and care for and never want to see die or be replaced, or as cattle, which are numbered and interchangeable, so that if one goes away another can take its place. It may sound cold and disconnected, but it embraces failure and deals with it using the familiar methodology of “turning it off and on again.” This aligns with the Cloud mentality of adding virtual machines and disposing of them at will, rather than the old way of nursing a limited number of in-house servers because you didn’t have a whole data center available to you.

To utilize this methodology, it must be easy for your app to be restarted. One way to get there is to make your server stateless, meaning it doesn’t persist state to its own disk: it delegates state to a database or a managed service that handles it in a resilient way. For connections or other stateful attachments to dependencies, don’t fight to reconnect when something goes down: just restart the application and let the initialization logic connect again. In cases where that isn’t possible, the orchestration software will kill the application, seeing that it’s unhealthy (which it is), and try to restart it again, giving you a faux exponential-backoff loop.
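As a rough illustration, here is a minimal sketch of that shape in Go. It assumes a Postgres dependency and an orchestrator that restarts exited processes; the driver, DSN, port, and /healthz path are illustrative choices, not prescriptions:

package main

import (
    "database/sql"
    "log"
    "net/http"

    _ "github.com/lib/pq" // driver choice is an assumption; any database driver fits the same pattern
)

func main() {
    // All connection logic lives in initialization; there is no in-process
    // reconnect loop to maintain.
    db, err := sql.Open("postgres", "postgres://app@db:5432/app?sslmode=disable")
    if err != nil {
        log.Fatalf("opening database: %v", err) // exit and let the orchestrator restart us
    }
    if err := db.Ping(); err != nil {
        log.Fatalf("database unreachable: %v", err)
    }

    // No state on local disk: handlers read and write through the database only.
    http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        if err := db.Ping(); err != nil {
            http.Error(w, "dependency unavailable", http.StatusServiceUnavailable)
            return
        }
        w.WriteHeader(http.StatusOK)
    })

    log.Fatal(http.ListenAndServe(":8080", nil))
}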

So far this treats failure as binary: either the application is working or it isn’t, and the orchestration software handles the unhealthy parts. But there’s another method to complement these failure states, and that’s handling degraded functionality. In this scenario, some of your servers are unhealthy, but not all of them. If you’re already using an orchestration layer, you likely have something to handle this: the software managing your application sees that certain instances are down, reroutes traffic to healthy instances, and returns traffic once the instances are healthy again. But in the scenario where entire chunks of functionality are down, you can plan for that state and handle it. For example, you can return both data and errors in a GraphQL response:

{
  "data": {
    "user": {
      "name": "James",
      "favoriteFood": "omelettes",
    },
    "comments": null,
  },
  "errors": [
    {
      "path": [
        "comments"
      ],
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "message": "Could not fetch comments for user"
    }
  ]
}

Here parts of the application were able to return user data, but comments weren’t available, so we return what we have, accepting that failure and working with it rather than returning no data. Just because parts of your application aren’t healthy doesn’t mean the user can’t still get things done with the other parts.
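As a sketch of what that can look like on the server side, here is a hand-rolled Go version of the response above. fetchUser and fetchComments are stand-ins for real dependencies, and the comments lookup is hard-coded to fail so the degraded path is visible:

package main

import (
    "context"
    "encoding/json"
    "errors"
    "log"
    "net/http"
)

type response struct {
    Data   map[string]interface{}   `json:"data"`
    Errors []map[string]interface{} `json:"errors,omitempty"`
}

// fetchUser and fetchComments stand in for calls to real dependencies.
func fetchUser(ctx context.Context) (map[string]string, error) {
    return map[string]string{"name": "James", "favoriteFood": "omelettes"}, nil
}

func fetchComments(ctx context.Context) ([]string, error) {
    return nil, errors.New("comments service unavailable") // simulate a degraded dependency
}

func handleUser(w http.ResponseWriter, r *http.Request) {
    resp := response{Data: map[string]interface{}{}}

    user, err := fetchUser(r.Context())
    if err != nil {
        // Without the user there is nothing useful to return.
        http.Error(w, "could not fetch user", http.StatusInternalServerError)
        return
    }
    resp.Data["user"] = user

    comments, err := fetchComments(r.Context())
    if err != nil {
        // Degraded, not dead: keep the user data and report the failing part.
        resp.Data["comments"] = nil
        resp.Errors = append(resp.Errors, map[string]interface{}{
            "path":    []string{"comments"},
            "message": "Could not fetch comments for user",
        })
    } else {
        resp.Data["comments"] = comments
    }

    w.Header().Set("Content-Type", "application/json")
    if err := json.NewEncoder(w).Encode(resp); err != nil {
        log.Printf("encoding response: %v", err)
    }
}

func main() {
    http.HandleFunc("/user", handleUser)
    log.Fatal(http.ListenAndServe(":8080", nil))
}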

Embracing Agility

A more agile application is quicker to start and schedule when you need more instances of it. In scenarios where the system has determined it needs more clones of your app, you don’t want to wait five or more minutes for them to get going. After all, in the Cloud we’re no longer buying physical servers: we’re renting the space and computing power that we need, so waiting for applications to reach a healthy state is wasting money. For your users, bulky, “slow-to-schedule” applications mean a delay in getting more resources and degraded performance or, in worse scenarios, an outage because servers are overloaded while they wait on reinforcements.

Whether you’re coming from an existing application or building a Cloud-Native one from the start, the best way to make an application more agile is to think smaller. This means the server you’re constructing does less, which reduces start time and keeps it from becoming bloated with features. If your application is large and has unwieldy dependencies on prerequisite software installed on the server, consider removing those dependencies by delegating them to a third party or splitting them out into services started elsewhere. If your application is still too large, consider microservices, where appropriately sized, cohesive pieces of the total application are deployed separately and communicate over a network. This can increase the complexity of operating the application as a whole, but microservices also lessen the cognitive load required to manage any individual piece, since each piece is no longer coupled to the rest of the whole.

Embracing Elasticity


Following the points above, if it’s easier to run instances of your application, it’s easier for software to autonomously manage how many are running. This means the infrastructure managing your app can monitor traffic or resource usage and add more instances to handle the increased load. In times of lighter usage, it can scale your resources back down to match. This is a huge departure from how elasticity was handled in the traditional model: previously, you bought servers and maintained them, so you didn’t plan on just adding more on the fly. To compensate for dynamic amounts of load, you had to take your highest estimate and add buffer room for extra-heavy traffic times. During normal operation, that capacity just sat around unused. And to increase capacity, you likely upgraded a single machine with newer, faster internals.

Again, to benefit from the elasticity the Cloud gives you, it’s best to make that benefit easy to realize. You can follow the tips on agility to make your application smaller, but before that, it might be more important to make it possible to run many instances of your application in the first place. This can mean removing any logic that counts on a fixed number of instances running, like relying on a single server instance because you need locks to guard concurrent logic. For scenarios like that, you can use locks provided by your database or your caching solution, as sketched below. All in all, the idea is to look at the logical factors that prevent you from running a second or third instance of your application in parallel. Ask yourself what the downsides or complications of adding one more instance of your app would be, and make a list of those barriers. Once you’ve removed them, you’ll find that running tens or hundreds of instances in parallel becomes possible.
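For example, here is a minimal sketch of moving a lock out of the process and into the database so that any number of instances can coordinate. It assumes Postgres, whose pg_advisory_xact_lock holds a lock for the life of a transaction; the lock key, DSN, and the work being guarded are illustrative:

package main

import (
    "context"
    "database/sql"
    "log"

    _ "github.com/lib/pq" // Postgres driver, assumed for this sketch
)

const nightlyReportLock = 42 // arbitrary application-chosen lock key

// runExclusively runs work while holding a Postgres advisory lock, so only one
// instance of the app, however many are running, executes it at a time.
func runExclusively(ctx context.Context, db *sql.DB, key int64, work func() error) error {
    tx, err := db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    defer tx.Rollback() // releases the lock if work fails

    // Blocks until the lock is acquired; the lock is released automatically
    // when the transaction ends.
    if _, err := tx.ExecContext(ctx, "SELECT pg_advisory_xact_lock($1)", key); err != nil {
        return err
    }
    if err := work(); err != nil {
        return err
    }
    return tx.Commit()
}

func main() {
    db, err := sql.Open("postgres", "postgres://app@db:5432/app?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    err = runExclusively(context.Background(), db, nightlyReportLock, func() error {
        log.Println("only one instance runs this at a time")
        return nil
    })
    if err != nil {
        log.Fatal(err)
    }
}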

Conclusion

The Cloud has changed the way we think about and run software, and your existing application may need to change to make the best use of it. With so much of the infrastructure managed by autonomous software, new tooling has made it easier than ever to manage entire fleets of applications, moving the developer further away from the gritty details. It has pushed software deployments to be more agile, to embrace failure as normal, and to scale by adding instances instead of buying faster machines. If you’re not already running with all the Cloud has to offer, give it another look and see if it aligns with your future needs, both for your business and your application.

The post Embracing Cloud Native appeared first on Big Nerd Ranch.
