John Iwasyk - Big Nerd Ranch

Why the Usage of Instrumentation Within Monitoring Tools Should be Implemented in Your Next Web Project
https://bignerdranch.com/blog/why-the-usage-of-instrumentation-within-monitoring-tools-should-be-implemented-in-your-next-web-project/
Tue, 10 May 2022 14:00:19 +0000

When designing a web application, a common strategy is to adopt a monitoring tool such as Grafana or Datadog. Doing so brings many benefits, such as log querying, application health monitoring, and performance visibility; instrumenting custom metrics and log tags helps when identifying problems. How is this instrumentation set up, and how can it be visualized within the monitoring tools?

Web Monitoring

Monitoring tools such as Grafana and Datadog are used in everyday commercial applications for tasks such as log querying, monitoring the health of applications, and viewing performance. Log querying can be handled by integrating a log aggregator such as Loki with these monitoring tools. Application health can be surfaced through features such as dashboards, which provide a birds-eye view of how an application is doing. Many teams include metrics such as failures from a particular endpoint or error-log counts when determining an application's health, and service performance can factor into that assessment as well. Custom metrics can help with determining failure reasons or counting specific failures within an application.

Prometheus is a framework that can help capture these custom metrics. By instrumenting an application and exposing the metrics over an HTTP endpoint, they can be scraped by a monitoring tool for visualization and querying. The example below shows how Prometheus can be used to instrument a sample Go application and how the metrics can be visualized in Grafana.

An Example of Instrumenting a Web Application

The below example demonstrates how Prometheus metrics are instrumented into a Go web application. The code in this article is part of a larger runnable demo available in the BNR-Blog-Prometheus-Monitoring repository, which lists the full set of dependencies it relies on.

Kubernetes Setup

The Kubernetes setup for this example was configured using Helm, which allows a service chart to be instantiated along with any dependencies noted in the Chart.yaml file. Much of the upfront work of defining the service configuration is handled automatically by Helm via the helm create command.

Skaffold is paired with Helm to make local deployment easy. It handles Dockerfile image building and caching, deployment through Helm, and port-forwarding the service ports to the host machine. The sample repo listed above contains instructions for running this example locally.
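A minimal skaffold.yaml for this kind of setup might look like the following sketch. The image name, chart path, and port are illustrative assumptions; the demo repo's actual configuration may differ:

```yaml
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: sample-app            # built from the local Dockerfile
deploy:
  helm:
    releases:
      - name: sample-app
        chartPath: charts/sample-app   # chart generated by `helm create`
        artifactOverrides:
          image: sample-app            # inject the freshly built image tag
portForward:
  - resourceType: service
    resourceName: sample-app
    port: 8080                         # forward the service port to the host
```

Running `skaffold dev` with a file like this rebuilds the image, redeploys the Helm release, and re-establishes the port-forward on every code change.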

Handler Metrics Instrumentation

// HandleRoute returns the handler for /sample-route. The failure query
// parameter forces an error response so metrics and logs can be exercised.
func HandleRoute(logger *zap.Logger) http.HandlerFunc {
    return func(writer http.ResponseWriter, request *http.Request) {
        queryParams := request.URL.Query()

        if _, ok := queryParams["failure"]; ok {
            if failureReasons, ok := queryParams["reason"]; ok {
                // Record the failure under the caller-supplied reason label
                failureCounter.With(prometheus.Labels{"reason": failureReasons[0]}).Inc()
                logger.Error("error with sample route", zap.String("reason", failureReasons[0]))
            } else {
                // No reason given; fall back to a generic server_error label
                failureCounter.With(prometheus.Labels{"reason": "server_error"}).Inc()
                logger.Error("error with sample route")
            }
            writer.WriteHeader(http.StatusInternalServerError)
        } else {
            successCounter.Inc()
            logger.Info("successful call to sample route")
            writer.WriteHeader(http.StatusOK)
        }
    }
}

HandleRoute() defines the handler function for the route /sample-route. The route is intended to trigger a success or failure depending on whether the failure query parameter is set. On success, the Prometheus success counter is incremented by one; on failure, the failure counter is incremented by one. If a reason query parameter is provided, its value is attached to the metric as a label; otherwise the label defaults to server_error. There are also formatted JSON logs for each case that can be found in Grafana.

Using the above custom metrics and logs allows for greater visibility into specific failure cases when monitoring the service in Grafana. The Grafana dashboard shows endpoint health broken down by failure reason and displays the failure rate computed from the metrics. The dashboard also surfaces the error logs associated with the service. An example Grafana dashboard that uses these captured metrics and logs is below.

Grafana Service Health Dashboard

The dashboard shows the high-level status of the sample-route endpoint, and thus of the sample app, since this is the only route served. The success/failure rate is calculated from the exposed Prometheus success/failure metrics. The specific failure reasons are shown in a time series so error spikes can be observed, and error logs appear below the charts for closer inspection of failure reasons. If tracing is connected to the logs, one can click on an error log instance to view the full timeline of the request and see a detailed view of what occurred.
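As a sketch, the rate panels in such a dashboard are typically backed by PromQL queries along these lines. The metric names are assumptions, not taken from the demo:

```promql
# Failure rate per reason over the last 5 minutes
sum by (reason) (rate(sample_route_failure_total[5m]))

# Overall success ratio across the route
sum(rate(sample_route_success_total[5m]))
  /
(sum(rate(sample_route_success_total[5m])) + sum(rate(sample_route_failure_total[5m])))
```

Using rate() over raw counter values makes the panels robust to counter resets when the service restarts.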

Conclusion

In summary, the instrumentation of a web application can be accomplished using Prometheus or a similar framework. These metrics are then scraped through an exposed endpoint into the preferred monitoring solution. Grafana or a similar solution can then be used to ingest these metrics and visualize them through dashboards. These dashboards can be useful for determining the health of an application and the details of a failure. In addition to metrics, structured logs can be useful in these dashboards for showing relevant information associated with failures. Attaching traces to these logs is also beneficial as a developer can trace the lifecycle of the request to see how a specific request might have failed.

Using Dockertest With Golang
https://bignerdranch.com/blog/using-dockertest-with-golang/
Tue, 20 Jul 2021 17:29:45 +0000

Why should Dockertest be part of your next Golang web project?

When designing a web application, a common testing strategy is to mock third-party dependencies. There are many benefits to doing this, such as producing hard-to-provoke responses on demand; however, getting actual responses is better than simulating them. Why might a developer want to use Dockertest as an alternative for producing realistic responses?

Testing Strategies

One of the main strategies for testing in Go is to instantiate a test server and mock responses for the specific paths exercised by the requests under test. While this strategy works fine for simulating errors and edge cases, it would be nice not to worry about creating fake responses for each request. In addition, developers can make mistakes when producing these responses, which is problematic for integration tests, since developers write those tests as a final check that the application produces correct results across many scenarios.

Dockertest is a library meant to help accomplish this goal. By creating actual instances of these third-party services in Docker containers, realistic responses can be obtained. The example below shows how a container is set up using the Dockertest library and how it is used in tests.

An Example of Using Dockertest

The below example demonstrates how Dockertest is used to test a simple CRUD application: a phonebook that manages phone numbers, with Postgres as the external database. The code in this article is part of a larger runnable demo available in the BNR-Blog-Dockertest repository, which lists the full set of dependencies it relies on.

Test Setup

The test file verifies the CRUD functionality of the phonebook and ensures the storage code actually stores data in a real Postgres database, that is, that it properly integrates with Postgres. This is done by running Postgres in a Docker container alongside the test process. Before any test runs, a Docker connection must be established and the Postgres container launched with the configuration the test code expects:

var testPort string

const testUser = "postgres"
const testPassword = "password"
const testHost = "localhost"
const testDbName = "phone_numbers"

// getAdapter retrieves the Postgres adapter with test credentials
func getAdapter() (*PgAdapter, error) {
    return NewAdapter(testHost, testPort, testUser, testDbName, WithPassword(testPassword))
}

// setup instantiates a Postgres docker container and attempts to connect to it via a new adapter
func setup() *dockertest.Resource {
    pool, err := dockertest.NewPool("")
    if err != nil {
        log.Fatalf("could not connect to docker: %s", err)
    }

    // Pulls an image, creates a container based on it and runs it
    resource, err := pool.Run("postgres", "13", []string{fmt.Sprintf("POSTGRES_PASSWORD=%s", testPassword), fmt.Sprintf("POSTGRES_DB=%s", testDbName)})
    if err != nil {
        log.Fatalf("could not start resource: %s", err)
    }
    testPort = resource.GetPort("5432/tcp") // Set port used to communicate with Postgres

    var adapter *PgAdapter
    // Exponential backoff-retry, because the application in the container might not be ready to accept connections yet
    if err := pool.Retry(func() error {
        adapter, err = getAdapter()
        return err
    }); err != nil {
        log.Fatalf("could not connect to docker: %s", err)
    }

    initTestAdapter(adapter)

    return resource
}

func TestMain(m *testing.M) {
    setup()
    code := m.Run()
    os.Exit(code)
}

TestMain() ensures the setup() function runs before any tests. Within setup(), a pool is created to represent a connection to the Docker API used for pulling Docker images. The pool client pulls the postgres image at tag 13 with the specified environment options and then starts a container from that image. POSTGRES_PASSWORD is set to testPassword, the password used when connecting to Postgres. POSTGRES_DB specifies the database name to be created on startup and is set to the constant phone_numbers. The container port published for Postgres (mapped from service port 5432/tcp) is captured in testPort. Afterward, the adapter attempts to connect to the new Docker container, retrying with exponential backoff because the application in the container might not yet be ready to accept connections. If connecting ultimately fails, the test suite exits with an error.
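One detail worth noting: setup() returns the resource representing the running container, and a variant of TestMain() could purge it once the tests finish so repeated runs start clean. The sketch below is illustrative rather than the demo's exact code, and assumes the pool is captured somewhere accessible alongside the resource:

```go
func TestMain(m *testing.M) {
    resource := setup()
    code := m.Run()

    // Remove the Postgres container; pool here is assumed to be the
    // *dockertest.Pool created in setup(), stored at package level.
    if err := pool.Purge(resource); err != nil {
        log.Fatalf("could not purge resource: %s", err)
    }
    os.Exit(code)
}
```

Without an explicit purge, stopped containers accumulate on the developer's machine between test runs.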

An example of a test that utilizes the Postgres instance is below.

Test

func TestCreatePhoneNumber(t *testing.T) {
    testNumber := "1234566656"
    adapter, err := getAdapter()
    if err != nil {
        t.Fatalf("error creating new test adapter: %v", err)
    }

    cases := []struct {
        error       bool
        description string
    }{
        {
            description: "Should succeed with valid creation of a phone number",
        },
        {
            description: "Should fail if database connection closed",
            error:       true,
        },
    }
    for _, c := range cases {
        t.Run(c.description, func(t *testing.T) {
            if c.error {
                adapter.conn.Close()
            }
            id, err := adapter.CreatePhoneNumber(testNumber)
            if !c.error && err != nil {
                t.Errorf("expecting no error but received: %v", err)
            } else if !c.error { // Remove test number from db so not captured by following tests
                err = adapter.RemovePhoneNumber(id)
                if err != nil {
                    t.Fatalf("error removing test number from database")
                }
            }
        })
    }
}

The table-driven test above verifies the create method of the Postgres storage adapter. The first case expects a test phone number to be inserted successfully into the Dockerized Postgres instance. The second case forces the database connection to close and then expects the create method to fail.

Conclusion

In summary, mocking is a fine way to test third-party dependencies but using a library such as Dockertest can allow for a more realistic and robust integration testing environment. With the capability to launch any Docker container, an entire portion of a web application can be tested with real results in a controlled test environment. Such a library can be useful within a unit test or integration test environment. Dockertest can also be set up in CI environments, as with GitHub Actions’ service containers. For more examples, see the Dockertest repository.
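As an illustrative fragment (the action versions and Go version are assumptions), a GitHub Actions workflow can run these tests because Docker is available on the hosted Ubuntu runners, letting Dockertest launch its containers in CI:

```yaml
name: test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest   # Docker is available, so Dockertest can launch containers
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v4
        with:
          go-version: "1.20"
      - run: go test ./...
```

Service containers are an alternative for teams that prefer CI-managed dependencies, but with Dockertest the test suite itself owns the container lifecycle.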
