Leveling Up - Big Nerd Ranch

Replacing Myself: Writing Unit Tests with ChatGPT
The Bot that’s got everyone talking

The science-fiction future is upon us as we find ourselves on the precipice of an AI revolution. As with all technological milestones, from the wheel to the computer, what we can do and how fast we can do it shift toward ever-increasing productivity. While the philosophers and keyboard warriors among us debate the deeper ramifications of AI, here at Big Nerd Ranch, we concern ourselves with more practical matters.

In this series, I plan on investigating ChatGPT’s answer to one of the oldest questions in human history: How can I make my life easier?

Every Dev’s Favorite Pastime

Recognizing that a robust suite of unit tests is essential for ensuring a healthy codebase isn’t rocket surgery. But let’s face it – writing tests can be tedious at times. If you’re anything like me, you’ve probably caught yourself daydreaming about having a trusty assistant to either take care of it all for you or, at the very least, lend a helping hand. We’re going to put ChatGPT through some tests today, starting with something easy to calibrate ourselves and moving on to testing a React reducer and React Component.

Factorials

Let’s walk before we run, shall we? We’ll ask it to write some unit tests for a recursive function to see if it gets tripped up. The simplest recursive function I can think of is the factorial, something we should have all seen in CS101:

function factorial(n) {
  if (n === 0 || n === 1) {
    return 1;
  } else {
    return n * factorial(n - 1);
  }
}

Simple enough, right? Now we’ll ask for some unit tests:

Prompt: Can you write me some unit tests for the following javascript function:

test("factorial of 0 is 1", () => {
  expect(factorial(0)).toBe(1);
});

test("factorial of 1 is 1", () => {
  expect(factorial(1)).toBe(1);
});

test("factorial of 5 is 120", () => {
  expect(factorial(5)).toBe(120);
});

test("factorial of negative number throws an error", () => {
  expect(() => factorial(-1)).toThrow();
});

Impressive: ChatGPT not only accounted for the special case of zero factorial but also considered the possibility of invalid input like negative numbers.
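One caveat worth flagging: the negative-number test passes for the wrong reason. Our factorial never validates its input, so factorial(-1) recurses until the call stack overflows, and toThrow() happens to catch the resulting RangeError. A guarded version, sketched by us rather than generated by ChatGPT, would make that failure intentional:

function factorial(n) {
  // Reject non-integers and negatives explicitly instead of
  // relying on a stack overflow to surface the problem.
  if (!Number.isInteger(n) || n < 0) {
    throw new Error("factorial requires a non-negative integer");
  }
  return n <= 1 ? 1 : n * factorial(n - 1);
}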

That gave us a decent idea of how it handles writing tests for a simple recursive function. Now that we’re calibrated, let’s switch gears to a more practical example.

React Reducer

The following code snippet is a React reducer for an online shopping cart:

const cartReducer = (state, action) => {
  switch (action.type) {
    case "ADD_ITEM":
      return {
        ...state,
        items: [...state.items, action.payload],
        total: state.total + action.payload.price,
      };
    case "REMOVE_ITEM":
      const itemToRemove = state.items.find(
        (item) => item.id === action.payload.id
      );
      return {
        ...state,
        items: state.items.filter((item) => item.id !== action.payload.id),
        total: state.total - itemToRemove.price,
      };
    case "CLEAR_CART":
      return {
        ...state,
        items: [],
        total: 0,
      };
    default:
      throw new Error(`Invalid action type ${action.type}`);
  }
};

We can identify a few tests that need to be written by skimming through it. At a minimum, we’d want tests for adding items to the cart, removing them, and emptying the cart. Before we start writing anything ourselves, let’s turn to our trusty assistant, ChatGPT, and see if it agrees with us.

Prompt: Can you write me some unit tests for the following react reducer:

This is what ChatGPT generated:

describe("cartReducer", () => {
  let initialState;

  beforeEach(() => {
    initialState = { items: [], total: 0 };
  });

  it("should handle ADD_ITEM action", () => {
    const action = { type: "ADD_ITEM", payload: { id: 1, name: "Item 1", price: 10 } };
    const newState = cartReducer(initialState, action);
    expect(newState.items.length).toEqual(1);
    expect(newState.items[0]).toEqual(action.payload);
    expect(newState.total).toEqual(action.payload.price);
  });

  it("should handle REMOVE_ITEM action", () => {
    const itemToRemove = { id: 1, name: "Item 1", price: 10 };
    const action = { type: "REMOVE_ITEM", payload: itemToRemove };
    const state = { items: [itemToRemove], total: itemToRemove.price };
    const newState = cartReducer(state, action);
    expect(newState.items.length).toEqual(0);
    expect(newState.total).toEqual(0);
  });

  it("should handle CLEAR_CART action", () => {
    const state = { items: [{ id: 1, name: "Item 1", price: 10 }], total: 10 };
    const action = { type: "CLEAR_CART" };
    const newState = cartReducer(state, action);
    expect(newState.items.length).toEqual(0);
    expect(newState.total).toEqual(0);
  });

  it("should throw an error for invalid action types", () => {
    const action = { type: "INVALID_ACTION" };
    expect(() => cartReducer(initialState, action)).toThrowError(
      "Invalid action type INVALID_ACTION"
    );
  });

  it("should throw an error for invalid action types", () => {
    expect(() => cartReducer(initialState)).toThrowError(
      "Invalid action type INVALID_ACTION"
    );
  });
});

Not only did it whip up some tests for adding, removing, and emptying the cart, but it also included tests for handling empty and invalid actions. Now all that’s left is to run these tests. However, keep in mind that since ChatGPT doesn’t have any knowledge of our specific file structure, we might need to make some minor adjustments to get everything running smoothly. A small price to pay, I say.

Let’s run these tests.

Looks like everything passed except for one test. Looking through the cartReducer code, we can see why it failed: the code we originally wrote was never built to handle a missing action, so reading action.type from undefined throws a TypeError before we ever reach our invalid-action error (and the message doesn’t match the one the test expects). While creating that unit test could be considered a mistake on ChatGPT’s part, it did alert us to a potential unhandled exception in our cartReducer code.
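If we wanted the reducer itself to fail with its own error instead of a TypeError, a small guard at the top would do it. This is our own sketch, not part of the generated code (the generated test would still need its expected message corrected):

const cartReducer = (state, action) => {
  // Fail fast with the reducer's own error when no action
  // (or an action without a type) is supplied.
  if (!action || !action.type) {
    throw new Error(`Invalid action type ${action?.type}`);
  }

  switch (action.type) {
    // ...cases as before
  }
};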

React Components

Let’s feed it a React component and see how it does. Here we have a Banner component that validates its props with PropTypes:

import React from "react";
import PropTypes from "prop-types";

const Banner = ({ title, subtitle, image, hidden }) => {
  return (
    <div className="banner">
      <img src={image} alt={title} hidden={hidden} />
      <div className="banner__content">
        <h1 className="banner__title">{title}</h1>
        <p className="banner__subtitle">{subtitle}</p>
      </div>
    </div>
  );
};

Banner.propTypes = {
  title: PropTypes.string.isRequired,
  subtitle: PropTypes.string.isRequired,
  image: PropTypes.string.isRequired,
  hidden: PropTypes.bool.isRequired,
};

export default Banner;

Let’s see how ChatGPT handles it.

Prompt: Can you write me some unit tests for the following React component:

import React from "react";
import { render, screen } from "@testing-library/react";
import Banner from "./Banner";

describe("Banner component", () => {
  it("renders the title", () => {
    render(
      <Banner
        title="Test Title"
        subtitle="Test Subtitle"
        image="test.png"
        hidden={false}
      />
    );
    const titleElement = screen.getByText("Test Title");
    expect(titleElement).toBeInTheDocument();
  });

  it("renders the subtitle", () => {
    render(
      <Banner
        title="Test Title"
        subtitle="Test Subtitle"
        image="test.png"
        hidden={false}
      />
    );
    const subtitleElement = screen.getByText("Test Subtitle");
    expect(subtitleElement).toBeInTheDocument();
  });

  it("renders the image with the correct alt text", () => {
    render(
      <Banner
        title="Test Title"
        subtitle="Test Subtitle"
        image="test.png"
        hidden={false}
      />
    );
    const imageElement = screen.getByAltText("Test Title");
    expect(imageElement).toBeInTheDocument();
    expect(imageElement).toHaveAttribute("src", "test.png");
  });

  it("hides the image when hidden prop is true", () => {
    render(
      <Banner
        title="Test Title"
        subtitle="Test Subtitle"
        image="test.png"
        hidden={true}
      />
    );
    const imageElement = screen.getByAltText("Test Title");
    expect(imageElement).not.toBeInTheDocument();
  });
});

We’ll go ahead and run these tests.

A single test failed. ChatGPT assumed that toBeInTheDocument would fail when an element is hidden. In fact, toBeInTheDocument only checks whether the element is in the DOM tree, regardless of visibility, so the hidden image is still found and the negated assertion fails. ChatGPT should have used toBeVisible instead. Let’s go ahead and make that change and see if we can get that test to pass.

import React from "react";
import { render, screen } from "@testing-library/react";
import Banner from "./Banner";

describe("Banner component", () => {
  it("renders the title", () => {
    render(
      <Banner
        title="Test Title"
        subtitle="Test Subtitle"
        image="test.png"
        hidden={false}
      />
    );
    const titleElement = screen.getByText("Test Title");
    expect(titleElement).toBeInTheDocument();
  });

  it("renders the subtitle", () => {
    render(
      <Banner
        title="Test Title"
        subtitle="Test Subtitle"
        image="test.png"
        hidden={false}
      />
    );
    const subtitleElement = screen.getByText("Test Subtitle");
    expect(subtitleElement).toBeInTheDocument();
  });

  it("renders the image with the correct alt text", () => {
    render(
      <Banner
        title="Test Title"
        subtitle="Test Subtitle"
        image="test.png"
        hidden={false}
      />
    );
    const imageElement = screen.getByAltText("Test Title");
    expect(imageElement).toBeInTheDocument();
    expect(imageElement).toHaveAttribute("src", "test.png");
  });

  it("hides the image when hidden prop is true", () => {
    render(
      <Banner
        title="Test Title"
        subtitle="Test Subtitle"
        image="test.png"
        hidden={true}
      />
    );
    const imageElement = screen.getByAltText("Test Title");
    expect(imageElement).not.toBeVisible();
  });
});

There we have it, the tests are all passing!

Drawbacks

As with all AI-powered chatbots, certain limitations exist.

Let’s identify a few of those weaknesses so we can sleep more soundly at night:

  1. Lack of Creativity: Chatbots may be capable of mimicking human-like conversation, but they lack true creativity. They are machines and cannot produce truly original output or come up with novel design patterns or hyper-efficient algorithms.
  2. Reliance on Training Data: The accuracy of AI is heavily dependent on the quality of its training data. If the data is out-of-date or inaccurate, it will severely affect the performance of the AI and, subsequently, its output.
  3. Need for Human Correction: Although a clever prompt engineer may be able to coax decent output from the AI, this article demonstrated some of the errors that ChatGPT can make. This presents an interesting trade-off, as I imagine that there is a relationship between the complexity of our code and the type of mistakes ChatGPT will make. This will require a keen eye. It was fortunate that the mistakes ChatGPT made in this exercise were easy to spot.

Conclusions

While ChatGPT may not be ready to replace humans just yet, it’s clear that AI has the potential to revolutionize the way we live and work. As we continue to develop and use these tools, we can expect AI to become even more intelligent and capable. This presents an opportunity for developers to focus on the more challenging aspects of coding while leaving the repetitive tasks to our AI assistants.

Tell Our BNR Story: Ramon C.
Big Nerd Ranch is chock-full of incredibly talented people. Today, we’re starting a series, Tell Our BNR Story, where folks within our industry share how the Ranch has impacted their careers.

Kicking off our inaugural interview is Ramon C., a Senior Mobile Engineer here at Big Nerd Ranch. He was first introduced to the Ranch when he attended an Android bootcamp six years ago. Here’s his take on what makes for great dev training, what’s important when building a brand new team, and how BNR set him up for success in his career.

How did you find out about Big Nerd Ranch?

When I first discovered BNR, I was doing a rotational program where you try out a few positions and get to pick one at the end. I was in the middle of a rotation on Android development. It had been quite a while since I’d taken any Android classes. The last time I had done any extensive learning on this topic, I’d been in college.   

I asked myself: “What’s the difference between what I needed in college versus what I need professionally?”

I quickly figured out that they’re two different worlds. I realized to do the job properly, I needed to do some additional training. I started by Googling but didn’t find anything that stood out to me.

Then my boss at the time said, “Hey, I heard about this program called Big Nerd Ranch, you should take a look.”

I read through the program, and I thought, “Oh, this rules.”

I essentially put in the proposal with my company, got approved, and in the spring or summer of 2016, I went to Atlanta for the Advanced Android bootcamp training course. It was a really awesome experience. When I got back to my job, I couldn’t shut up about the BNR training I had just done. They literally could not get me to stop talking about it.

One of the things I did immediately after that training was to buy the BNR Android Programming Guide—I think it was the second or third version—and that’s been my bible for Android development ever since. Right after my bootcamp, I even gave a 20-minute presentation on why, if we ever got more developers, everyone should go through the BNR training that I did.

What made BNR stand out to you?

I don’t remember any of the other companies I researched, which I suppose says it all! I remember the enthusiastic Google reviews helped, too.

I think the biggest thing that I noticed—and I still remember this—is that it was very clear what you were going to learn in each class. At the time, I was still trying to figure out what the important technologies were. The BNR program listed out exactly the technologies that were going to be used and taught.

It was clear, it was easy to find, and it was easy to choose the level that I needed to be at. It was easy to justify saying, “Hey, this is exactly where we want to take our app, this is exactly what they’re going to teach us, and these are the skills I can take back with me.”

With some of the other training programs, it felt almost intentionally vague. It was difficult to discern exactly what knowledge and techniques we would be learning.

What did you initially want to get out of the BNR training? How did that match up with what you learned?

So obviously there was the technical stuff I wanted to take back, but one of the things that I was still trying to learn at the time was how to speak the language of Android as a new-ish Android developer. School doesn’t really teach you how to talk to other developers about the platforms you’re using.

I was one of two developers at the time at my company, so there weren’t a lot of opportunities to learn how to speak that language. Being in the bootcamp with other developers, all of us doing Android at the same time—it was a really good opportunity to train those skills.

How do developers talk to each other about the platform? How can I bring the right skills and knowledge back to the company? These were important questions to answer because we were going to start building a new team of developers, and we had to set a baseline for how we were going to interact with each other.

What was the training experience like? What about it stood out to you?

Intense is definitely the number one way I would describe the BNR training experience. I was there for five days, and you’re essentially doing a full work week of training: eight hours a day with a lunch break.

What I really liked about the training—and again, this goes back to what I just said about learning the language of Android—was learning how to talk to other developers. Like, “Hey I’m stuck on this, I don’t really understand what’s going on here.” Being able to ask those technical questions to another developer, you know?

Those are the kind of soft skills that every developer should have. This was a prime opportunity to grow those skills outside of the workplace.

In a classroom setting like BNR’s, when you ask a trivial question, it can be a learning opportunity for the whole class. At work, you never know how a question might come off. BNR offered a risk-free environment where there were no bad questions, and I could freely talk with other devs. I really believe this was instrumental to me at the beginning of my career.

What were the results of the training? What impact did it have on you?

You take all this training material back with you, and you go, “All right, now I have to discern what we’re going to extract and use in our own apps. How do I make sure we’re building our apps in the most up-to-date way with the technology I was just taught?”

I ended up having to do a pretty big proof of concept for the company I was working for, explaining the technologies and giving an overall rating for our needs. At the time, we were a brand new dev team, so we had to ask more questions: for example, how much effort is it going to take to have a brand new team learn these technologies? We ended up not using the most cutting-edge standards to start with because, you know, it’s a brand new team; you have to make some concessions for the sake of efficiency.

Without the BNR overview, it might’ve taken us a lot longer to figure that out. Plus, we were able to pick and choose some of the up-and-coming technologies at the time—RxJava was on its way up, Retrofit 2 was on its way up—and we were able to have those conversations about tech stack, helping the team to start learning these new platforms and get familiar with them for future projects, all that jazz.

I ended up having a pretty big influence on these conversations. Being able to say, “These are the things I think we should use,” “This might be a faster way to solve this problem,” things like that. I became the go-to guy on a lot of this after just one course, which was kind of a weird experience for me: to go from feeling like I was on the outside looking in to being the resident expert consulted on a lot of those high-stakes decisions.

What do you think helped you to feel more comfortable stepping into that kind of role?

I would thank the literature—just having that information handy. When you’re reading something in a classroom setting, not everything sticks or resonates immediately.

But when you’re able to say, “I have this material right here, just take a look at this example. This is what I think we can get to, this is our end state, and this is how I think we should build things.” Having that reference gave me the confidence to share my opinion because I was talking with evidence to back up what I was saying.

The other resource that was great was the online portal BNR gave us access to, which had all the sample projects and everything we had done throughout the bootcamp, from start to finish. Being able to go back to that after the bootcamp was over was huge. Sometimes, parts of those projects would just pop into my head, and I could go back to check, “Oh, there’s a bug here that reminds me of one from the BNR training, let me go look that up.”

The BNR training gave me a bunch of different tools, both coding and non-coding-related, to make these big impactful statements at work and be confident in backing them up.

You mentioned that you were able to convince your team to give BNR a try as a group. Can you share a little more about that?  

So I finished the rotational work program and, at that point, I had been talking for eight months about the BNR training, going, “This is the one, this is it!”

And, well, BNR isn’t the cheapest option out there. We ended up going with a more affordable option for our first training, and it was just… awful. One of the worst bootcamps I have ever done in my life. The developer got up there and just didn’t have a plan: there was nothing prepared. We got nothing from it—we were basically following Android community forum documents at that point, and I was thinking, “I could’ve done this on my own.”

By comparison, after we finally invested in the BNR training, I don’t think a single person left that room going, “This wasn’t worth the time that we put into this,” because we as a company had learned how to talk to each other as developers. We also learned language skills: we had completely new people alongside more tenured developers who had been working on projects since the release of the platform, but we were able to get everyone on the same page about the platform we were going to use.

There were also questions about modernization that we had to start asking, but we didn’t really know where to start. BNR gave us the starting line to have that modernization conversation and pointed us to the most modern solutions they were using. That pushed us to the next stage: “Are our apps even in a position to get this modern? What’s the best compromise between our team’s capabilities and the technology available to us?”

We got to talk about which direction we wanted to go and then set up a plan: even if we didn’t use the most advanced tools now, we could work our way up to them.

Do you have any final thoughts you’d like to share?

I really do believe in our training. It has helped me in my career in ways that I can’t really put into words. I’ve been able to take the skills that I learned almost six years ago, both technical and non-technical, and apply them in the work that I do every day. Not to get too sappy, but I cannot thank Big Nerd Ranch enough for getting me to where I am—not just with those Android platforms but in my career as a whole.

Four Key Reasons to Learn Markdown
Writing documentation is fun—really, really fun. I know some engineers may disagree with me, but as a technical writer, creating quality documentation that will help folks has been the most engaging and fulfilling job I’ve ever had. One of the coolest aspects has been using Markdown to write and format almost all of that documentation.

What is Markdown?

Markdown is a lightweight markup language created in 2004 by John Gruber and Aaron Swartz. It lets you create formatted text in a plain-text editor. Unlike HTML or XML, it is still easily digestible by readers of all backgrounds in its source form. You don’t need to be a programmer to get the gist of things. And although it converts to HTML, Markdown is easier and quicker to learn.

Markdown’s tags, the syntax used to format text, are simpler than HTML’s (**word** rather than <b>word</b> to bold text, for example), but they still convert automatically to HTML. So, if you’d prefer, you can even mix plain HTML tags into your Markdown.
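For example, here is roughly how a few common Markdown constructs map to the HTML they produce (exact output varies slightly between renderers):

# A heading                      →  <h1>A heading</h1>
**bold text**                    →  <strong>bold text</strong>
[a link](https://example.com)    →  <a href="https://example.com">a link</a>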

Markdown is used almost everywhere, from GitHub to Slack. It’s the unofficial writing and formatting standard on major developer platforms, like code repositories. Most engineering readme files are written and formatted using Markdown. Most text editors accept it as well.

Beyond the fact that it’s easy to use, quick to learn, and easily converts to HTML, Markdown is also pretty futureproof. By this, I mean Markdown will be usable as long as plain text is the official and unofficial standard. It was designed to be quickly parsed and digested as a raw file, but it also has its own file extension (.md). Suffice it to say, Markdown isn’t going anywhere, especially in the world of engineering and engineering documentation.

Why use Markdown?

I’m answering this question from my perspective as a technical writer, but you can leverage the benefits of Markdown whenever you write online.

1: It’s simple

Markdown is very simple, as far as markup languages are concerned. That is honestly its biggest benefit. It takes maybe 30 minutes to learn and about an hour to become proficient. Another added benefit both within and outside of engineering orgs is that Markdown text is easy to parse and read in its raw form. This is important because both XML and HTML have a learning curve, so folks who aren’t versed in those languages might not be able to read text packaged in either of those markup languages. Markdown fixes that. It is unobtrusive to the actual text so anyone can read text packaged within Markdown’s syntax.

2: It’s a soft introduction to programming

If you’re new to the world of software engineering, Markdown works as an interesting peek into the power of code. Yes, Markdown’s syntax is simple, but if you’ve never coded, even formatting in Markdown might feel like coding. Seeing your formatting come to life on a webpage or text editor is very cool for those new to programming or markup languages, and I firmly believe that it can inspire people to dive deeper into the world of coding.

3: It’s fast

Now, from a technical writer’s perspective, Markdown makes my job easier. I can write with Markdown at a faster cadence than I could with HTML or XML. Plus, I’ve found that Markdown has been an invaluable bridge between engineering and content writing (a massive umbrella that technical writing falls under).

If a subject matter expert (SME) hands me a piece of documentation he wrote for an API process he’s been working on, I can jump right in because, as I’ve said, Markdown (even in its raw form) can be read by anyone. It puts the engineer and myself on the same page, and it keeps us there together. Plus, most Integrated Development Environments (IDEs) feature text edit areas where Markdown acts as the default markup language for writing.

So, from a technical writer’s perspective, Markdown is a writing tool that keeps documentation formatting a breeze, but it also moves us technical writers closer to developers because it allows us to speak (and write) in the same language and use the same basic formatting syntax. And the best, most useful documentation is created when developers and technical writers are on the same page.

4: It’s collaborative

Markdown is more than just a simplified language. The power of Markdown is that it levels the playing field for technical writers, especially those without deep technical backgrounds, and fosters collaboration between them and engineers.

A technical writing organization that sets Markdown as their default markup language for all documentation opens the door for more technical writers to be hired from diverse backgrounds. This is because one can upskill into being proficient with Markdown quite quickly, as opposed to XML and HTML. I like to call it the great equalizer. Documentation, in many ways, is the unsung hero of every product and engineering org. And in the end, it all comes back to Markdown.

Where to go from here

I’ve long been interested in programming, and I’ve learned a lot of programming skills in my free time. But when I knew I wanted to pivot to technical writing, the first thing I learned (and doubled down on) was Markdown. Before I started my career in this field, every engineer and technical writer I talked to recommended it as, literally, one of the first things I should learn. So I did—and I’m so glad!

Now, as I’m sure those who work closely with me know, I evangelize Markdown whenever I’m given the chance—with colleagues, with folks who come to me wanting to transition to technical writing, and with clients. In my eyes, Markdown is the backbone of modern technical writing and documentation, and it isn’t going anywhere. It is the soft standard for documentation in the world of tech and, eventually, I believe it will just be the standard across the board.

Markdown is the future of technical documentation. And as more and more companies, IDEs, and coding repositories use it as the default markup format for editing and writing documentation, that future is starting now. If you’re starting to write documentation or are considering technical writing, I highly recommend learning Markdown. It will serve you well.

Why Immersive Training Still Matters
The learning world seems to be gravitating toward bite-sized.  

eLearning, microlearning, in-context learning, self-paced learning, and social learning have all changed the way employees learn and train. And all of these have their place. Employees often find there aren’t enough hours in the day to finish their work, let alone do training on top of that. Studies have shown employees have as little as 2–3 hours per week to dedicate to learning something new. (I find that, for myself, it’s often less than that.)

However, this doesn’t mean that the only types of training that can or should be delivered are those that can be ingested in bite-sized increments. And just because employees don’t have much time for learning during the work week doesn’t mean we can’t help them make time for it in general. 

Sometimes learning is hard. Sometimes it takes time. And sometimes it requires a more immersive approach. 

But… why? Why might you need to pursue a more immersive approach? Why can’t we simply provide employees with bite-sized, self-paced learning and expect great results every time?  

Consider How These Skills Will Be Applied

Skills like conflict management, writing, or organization can be immediately applied because employees often encounter these situations in their day-to-day work. Many skills like these are learned “on the job.” In other words, students can learn and grow their skills simply by applying them to their daily work. 

Other skills may be more about accessing information. If students use them infrequently and are unlikely to recall their learnings, they need reference documentation and resources that can be accessed on-demand. These types of skills might be appropriate for bite-sized and self-paced learning because the practice opportunities are ample. Employees can start applying what they’ve learned right away, and can do so in relatively low-risk ways. 

Of course, this isn’t always the case. The biggest issue students run into is finding ways to practice their new skills in a real-world environment. Perhaps the student’s learning would slow down the rest of the team, or the risk of real-world failure would be too high.

So, how can they find ways to practice? An immersive training environment provides students with these practice opportunities in a lower-risk, safe-to-fail environment. 

Another crucial element of practice is feedback. To truly learn and grow, students need to know whether they are on the right track—and if they aren’t, they need to know how to get back on track. This can be done through feedback loops. When learning a new skill, it’s important that students have tight feedback loops showing them how to get back on track as quickly as possible.  

On-demand learning has come a long way in terms of feedback loops, and there are great courses out there that provide helpful feedback. But sometimes, students need expert guidance to work through a tricky situation. That’s where immersive training comes in.  

What Is Immersive Training? 

We’ve been discussing immersive training, but what is it? Immersive training is training that forces students to focus fully on their learning. Its most common form is classroom training. Classrooms, whether physical or virtual, are places where students are pulled out of their everyday routine and into a learning environment. Students interact with each other, an instructor or facilitator, and new ideas and concepts. They have time and space to experiment, try, fail, and try again.

But immersive learning doesn’t only have to take place in a classroom. While we continue to believe strongly in the transformational power of classroom training (instructor-led or virtual instructor-led), we know that the learning world is evolving. Students, managers, leaders, and learning professionals are looking for ways to get the benefits of immersive, classroom learning while also making the experience more self-paced.

An example of this is blended learning. Blended learning gives students opportunities to learn on their own while also providing targeted opportunities for them to engage with an instructor, ask questions of their peers, or seek feedback on their work. It may not have all of the elements of a classroom, but a targeted, curated, and well-executed experience can still provide an immersive approach. 

The Practice Conundrum

Skill acquisition requires practice. We can’t fully absorb or learn something new unless we are given the opportunity to try it out (and try it out in a safe-to-fail environment).  

While there are practice opportunities that come with self-paced and bite-sized training, a significant load falls onto the student. It’s on them to make the most of those practice opportunities. Since employees are already limited on time for learning, they might not commit as much as they need to the practice. Also, if they get stuck, there’s no one to help them through the problem.

All in all, practice alone can lead to good outcomes if the student is able to find helpful resources quickly. If they struggle, it could also lead them to give up or take shortcuts to get through the activity rather than absorb the knowledge being provided. 

The Time Conundrum 

One conundrum surrounding immersive learning is time. Employee time is precious both to the company and to the employee. Prioritizing learning can be seen as a drag on productivity or taking away from business objectives. But self-paced, bite-sized learning can make the time problem worse. 

Take this example: I was recently speaking with a client about self-paced vs. live virtual training. The client indicated most of their students wanted to pursue a self-paced program, which was not entirely surprising. However, when we asked why they wanted to pursue self-paced, the client indicated it was because they didn’t feel they had time to step away from their daily work to do the training.  

Rather than simply taking the time away from work to learn the new thing, students would be trying to fit the learning in around their daily tasks. The interesting thing here is that pursuing a self-paced approach could actually make two problems worse.  

  1. Students would be more distracted and stressed because they have to study in addition to handling their daily tasks, and
  2. They would likely learn less, or it would take them more time to learn the same amount, due to context switching and divided attention.

One way or another, learning is going to take time. Sometimes, the most efficient approach is stepping away from the daily grind for a while. 

The Knowledge Retention Conundrum 

The practice conundrum and the time conundrum both feed into the knowledge retention conundrum. If students don’t have enough opportunities to practice the new skill and they don’t have the time to focus on learning the new skill, then they aren’t likely to retain the knowledge and skills they are acquiring.  

For example, a programmer working in Java every day might have a hard time learning Kotlin using a bite-sized, on-demand approach. This is because she won’t have the chance to practice with her Kotlin knowledge; in between her training, she’s still working in Java 40+ hours a week.

A quick caveat: incremental learning stands in contrast to some of the examples we’ve been exploring. If you just need to learn how to enhance what you’re already doing, bite-sized, on-demand training might be just what you need. This could even be in the form of training materials you received from a more intensive experience.  

This works because:  

  1. You aren’t context switching (or, at least not as much), and 
  2. You’re learning something directly related to your work

This helps solve both the practice and the time conundrums. It resolves the practice conundrum because you can apply what you’re learning immediately. And it resolves the time conundrum because it’s more than likely helping you do your work more efficiently and effectively. As such, the time spent learning is being repaid with immediate efficiency and effectiveness. 

The Accountability Conundrum 

The element of accountability found within immersive training is difficult to replicate with bite-sized, self-paced training. It’s the social accountability of being in a classroom with other students and with an instructor, all of whom are working toward the same goal.

I went to a burger restaurant that my family and I once really enjoyed. They had replaced their cashiers with digital kiosks, so the only person working the “front of the house” was an expo who called out orders when they were complete. The experience was not the same. Half the kiosks didn’t work, there was no one to help if you had a problem with the kiosks, the condiments and napkins weren’t stocked, the lobby wasn’t as clean, and the food took longer to arrive. Even the quality of the food was worse. Why? There was no human accountability. 

Since no one had to directly interact with the customers, staff cared less about the customers’ ordering experience or whether the condiments were stocked. Without the human connection, the social desire to help and take care of others had been removed. 

I use this example to illustrate the power of social and human accountability in a learning environment. If you are working through content on your own with no human interaction, what do you do? You increase the lecture speed to 2x, you skim through the reading material, and you retake the quiz until you get the right answer.  

My point is simply that this type of learning may not encourage a student’s best effort, and it might not be intentional. It might be because students don’t have the time and the ability to really focus. It might be that they’ve got a looming deadline that’s taking precedence over all their other tasks. It might also be that there’s no accountability to an instructor or to peers.  

One of the reasons learning communities are so powerful is that community adds to the learning experience. Other students hold each other accountable. They learn together, drawing on each other’s mistakes and each other’s tricks and shortcuts, and they form connections that can last years, even decades.

What Does the Future of Immersive Training Hold?  

While I’ve been discussing some of the potential shortcomings of bite-sized and self-paced training, these training methods absolutely have their place and should be part of any robust employee development program. I’m not calling on us to abandon these and go back to ‘the good ol’ days’ of classroom instruction.

What I am suggesting is that every good and robust employee development program leave time and space for immersive training experiences. Classrooms and blended learning may take different forms, but they are still incredible places for learning. Transformation doesn’t happen without that immersive element, employees appreciate it, and it can accomplish things bite-sized training simply can’t.

Sometimes you need to learn from an expert, someone who has been there before and who can explain things in a way that’s easy to understand. You also need opportunities to socialize, build culture, share experiences, and practice things you wouldn’t normally do. 

Go Generics in API Design
Go 1.18 has finally landed, and with it comes its own flavor of generics. In a previous post, we went over the accepted proposal and dove into the new syntax. For this post, I’ve taken the last example in the first post and turned it into a working library that uses generics to design a more type-safe API, giving a good look at how to use this new feature in a production setting. So grab yourself an update to Go 1.18, and settle in for how we can start to use our new generics to accomplish things the language couldn’t before.

A note on when to use generics

Before we discuss how we’re using generics in the library, I wanted to make a note: generics are just a tool that has been added to the language. Like many tools in the language, it’s not recommended to use all of them all of the time. For example, you should try to handle errors before using panic since the latter will end up exiting your program. However, if you’re completely unable to recover the program after an error, panic might be a perfectly fine option. Similarly, a sentiment has been circulating with the release of Go 1.18 about when to use generics. Ian Lance Taylor, whose name you may recognize from the accepted generics proposal, has a great quote in a talk of his:

Write Go by writing code, not by designing types.

This idea fits perfectly within the “simple” philosophy of Go: do the smallest, working thing to achieve our goal before evolving the solution to be more complex. For example, if you’ve ever found yourself writing similar functions to:

func InSlice(s string, ss []string) bool {
    for _, c := range ss {
        if s != c {
            continue
        }

        return true
    }

    return false
}

And then you duplicate this function for other types, like int, it may be time to start thinking about codifying the more abstract behavior the code is trying to show us:

// InSlice reports whether t is present in ts.
func InSlice[T comparable](t T, ts []T) bool {
    for _, c := range ts {
        if t != c {
            continue
        }

        return true
    }

    return false
}
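Call sites stay the same thanks to type inference; for example:

InSlice("b", []string{"a", "b", "c"}) // T inferred as string
InSlice(42, []int{1, 2, 3})           // T inferred as int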

Overall: don’t optimize for the problems you haven’t solved for yet. Wait to start designing generic types since your project will make abstractions become visible to you the more you work with it. A good rule of thumb here is to keep it simple until you can’t.

Designing Upfront

Although we just discussed how we shouldn’t try to design types before coding and learning the abstractions hidden in our project, there’s an area where I believe we cannot and should not get away from designing the types first: API-first design. After all, once our server starts to respond to and accepts request bodies from clients, careless changes to either one can result in an application no longer working. However, the way we currently write HTTP handlers in Go has a bit of a lack of types. Let’s go through all the ways this can subtly break or introduce issues to our server, starting with a pretty vanilla example:

func ExampleHandler(w http.ResponseWriter, r *http.Request) {
    var reqBody RequestBody
    if err := json.NewDecoder(r.Body).Decode(&reqBody); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }

    resp, err := MyDomainFunction(reqBody)
    if err != nil {
        // Write out an error to the client...
    }

    byts, err := json.Marshal(resp)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    w.Header().Set("Content-Type", "application/json")
    w.Write(byts)
    w.WriteHeader(http.StatusCreated)
}

Just to be clear on what this HTTP handler does: it ingests a body and decodes it from JSON, which can return an error. It then passes that decoded struct to MyDomainFunction, which gives us either a response or an error. Finally, we marshal the response back to JSON, set our headers, and write the response to the client.

Picking apart the function: Changing return types

Imagine a small change on the return type of the MyDomainFunction function. Say it was returning this struct:

type Response struct {
    Name string
    Age int
}

And now it returns this:

type Response struct {
    FirstName string
    LastName string
    Age int
}

Assuming that MyDomainFunction compiles, so, too, will our example function. It’s great that it still compiles, but this may not be a great thing since the response will change and a client may depend on a certain structure, e.g., there’s no longer a Name field in the new response. Maybe the developer wanted to massage the response so it would look the same despite the change to MyDomainFunction. Worse yet is that since this compiles, we won’t know this broke something until we deploy and get the bug report.
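Concretely, even though everything compiles, the JSON payload that clients parse changes shape. Assuming default encoding of the structs above (field values invented purely for illustration), a response goes from something like:

{ "Name": "Gopher", "Age": 11 }

to:

{ "FirstName": "Go", "LastName": "Gopher", "Age": 11 }

Any client still reading the Name field silently gets nothing.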

Picking apart the function: Forgetting to return

What happens if we forget to return after writing our error from unmarshaling the request body?

var reqBody RequestBody
if err := json.NewDecoder(r.Body).Decode(&reqBody); err != nil {
    http.Error(w, err.Error(), http.StatusBadRequest)
    return
}

Because http.Error is part of an imperative interface for dealing with responses back to HTTP clients, it does not cause the handler to exit. Instead, the client will get their response, and go about their merry way, while the handler function continues to feed a zero-value RequestBody struct to MyDomainFunction. This may not be a complete error, depending on what your server does, but this is likely an undesired behavior that our compiler won’t catch.

Picking apart the function: Ordering the headers

Finally, the most silent error is writing a header code at the wrong time or in the wrong order. For instance, I bet many readers didn’t notice that the example function will write back a 200 status code instead of the 201 that the last line of the example wanted to return. The http.ResponseWriter API has an implicit order that requires that you write the header code before you call Write, and while you can read some documentation to know this, it’s not something that is immediately called out when we push up or compile our code.
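For reference, the http.ResponseWriter API expects headers to be set first, the status code to be written next, and the body to be written last. A corrected ending for our example handler would be:

w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusCreated) // must happen before Write
w.Write(byts)                     // now the 201 actually goes out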

Being Upfront about it

Given all these (albeit minor) issues exist, how can generics help us to move away from silent or delayed failures toward compile-time avoidance of these issues? To answer that, I’ve written a small library called Upfront. It’s just a collection of functions and type signatures to apply generics to these weakly-typed APIs in HTTP handler code. We first have library consumers implement this function:

type BodyHandler[In, Out, E any] func(i BodyRequest[In]) Result[Out, E]

As a small review of the syntax, this function has three type parameters, each of which can be any type: In, the type produced by decoding the request body; Out, the type you want to return; and E, the error type you want to return to your client when something goes awry. Next, your function will accept an upfront.BodyRequest type, which is currently just a wrapper for the request and the JSON-decoded request body:

// BodyRequest is the decoded request with the associated body.
type BodyRequest[T any] struct {
    Request *http.Request
    Body    T
}

And finally, the Result type looks like this:

// Result holds the necessary fields that will be output for a response
type Result[T, E any] struct {
    StatusCode int // If not set, this will be a 200: http.StatusOK

    value      T
    err        *E
}

The above struct does most of the magic when it comes to fixing the subtle, unexpected pieces of vanilla HTTP handlers. Rewriting our function a bit, we can see the end result and work backward:

func ExampleHandler(in upfront.BodyRequest[RequestBody]) upfront.Result[Response, error] {
    resp, err := MyDomainFunction(in.Body)
    if err != nil {
        return upfront.ErrResult[Response, error](
            fmt.Errorf("error from MyDomainFunction: %w", err),
            http.StatusInternalServerError,
        )
    }

    return upfront.OKResult[Response, error](
        resp,
        http.StatusCreated,
    )
}

We’ve eliminated a lot of code, and hopefully we’ve also eliminated a few of the issues from the original example function. You’ll first notice that the JSON decoding and encoding are handled by the upfront package, so there are fewer places to forget a return. We also use our new Result type to exit the function, and it takes in a status code. The Result type we’re returning has a type parameter for what we want to send back from our handler. This means that if MyDomainFunction changes its return type, the handler will fail compilation, letting us know we broke our contract with our callers long before we git push. Finally, because the Result type also takes the status code, it can handle setting it at the right time (before writing the response).

And what’s with the two constructors, upfront.ErrResult and upfront.OKResult? These are used to set the package-private fields value and err inside the Result struct. Since they’re private, we can enforce that no constructor sets both value and err at the same time. In other languages, this would be similar (though definitely not the same) to an Either type.
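The library’s actual definitions may differ, but a minimal sketch of constructors with this behavior could look like:

// OKResult builds a Result carrying only a success value.
func OKResult[T, E any](value T, code int) Result[T, E] {
    return Result[T, E]{StatusCode: code, value: value}
}

// ErrResult builds a Result carrying only an error.
func ErrResult[T, E any](err E, code int) Result[T, E] {
    return Result[T, E]{StatusCode: code, err: &err}
}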

Final thoughts

This is a small example, but with this library, we can get feedback about silent issues at compile time, rather than when we redeploy the server and get bug reports from customers. And while this library is for HTTP handlers, this sort of thinking can apply to many areas of computer science and areas where we’ve been rather lax with our types in Go. With this blog and library, we’ve sort of reimplemented the idea of algebraic data types, which I don’t see being added to Go in the foreseeable future. But still, it’s a good concept to understand: it might open your mind to think about your current code differently.

Having worked with this library in a sample project, there are a few areas for improvement that I hope to see in future patches. The first is that we cannot use type parameters on type aliases. That would save a bunch of writing and allow library consumers to create their own Result type with an implicit error type instead of having to repeat it everywhere. The second is that the type inference is a little lackluster, which causes the resulting code to be very verbose about its type parameters. On the other hand, Go has never embraced the idea of being terse. If you’re interested in the library’s source code, you can find it here.

All that being said, generics are ultimately a really neat tool. They let us add some type safety to a really popular API in the standard library without getting too much in the way. But as with any tool, use them sparingly and where they apply. As always: keep things simple until you can’t.

Software Maintainability in New Technologies
Every decade at the longest, the software development industry undergoes significant technological shifts, and these shifts make it difficult to keep delivering maintainable software. New technologies solve a lot of problems for users and developers alike, whether it’s new platforms (the web, mobile, wearables, VR), new programming language paradigms (new static type systems, concurrency models, and runtimes), or new deployment options (virtualization, containerization, serverless functions). But there is one major problem that, far from solving, new technologies actually make worse: the problem of figuring out how to write maintainable software.

Specifically, how can you prevent the seemingly-inevitable descent of a codebase into problems like fragility (where changes cause unexpected things to break), rigidity (where small adjustments force much larger changes), or incomprehensibility (where you can’t understand the code to be able to change it)? How can you onboard new developers to your team due to growth or turnover, ensuring they understand the code well enough to be productive—especially when the technology is new to them or new to everyone?

Options for Maintainability

Now, it’s not that writing maintainable software is impossible in newer technologies—but there are forces that make it difficult. Whatever benefits a technology might bring, it’s always possible to make a mess; there hasn’t been a technology yet that can completely prevent that. And new technologies’ documentation generally doesn’t explain how to apply software design techniques comprehensively. Instead, documentation focuses on explaining (1) why developers should use this technology and (2) how they can get it working.

Technology creators’ priority isn’t, and can’t be, explaining software maintainability practices. Instead, it’s other people who create the resources that explain software design and architecture principles, and those resources are created within a specific historical context that doesn’t necessarily translate to new technologies. For example, what good are the classic object-oriented design principles when you’re working in a technology that doesn’t have objects? Some principles in classic software design materials do apply to newer technologies, but it can be challenging to identify which ones transfer and which don’t. It isn’t realistic to expect authors to have foreseen future technologies and taken them into account as they wrote; even if they had, they had no pressing need to emphasize which of their points were most likely to apply to an imagined future technology. And if you’re a newer developer attempting to read a software design book written for an older technology you haven’t used, you face a major obstacle: you would need to invest real effort in learning that book’s technology in the hope that some of its design principles transfer to your context. Few of us have the energy to expend that much effort with that much uncertainty about the payoff.

Evolutionary Design

So as new technologies steadily emerge, how can you keep writing maintainable software? This has been the main question I’ve asked myself for the past six years as I’ve studied, tried out, and worked within a number of different languages and platforms. As I’ve done so, one particular principle of maintainable software design has risen to the top. I’ve seen this principle work across so many technologies that I’m confident in adopting it as my default approach to any technology I work in. The reason this principle applies so broadly is that it’s squarely focused on addressing this universal need: keeping your software maintainable.

This universal software design principle goes by different names: incremental design, evolutionary design, emergent design, simple design. What the principle states is that you will achieve the best design if you:

  • Build the system with a software design that’s an excellent fit for only today’s requirements, and
  • When new requirements arrive, adjust the system’s design so that it is an excellent fit for those new requirements (that’s the “incremental”/”evolutionary” part)

How can you build a system that is flexible enough to handle such continual change, flexible enough that the software design itself is changing? By thoroughly covering it with automated tests so you can make changes safely, and by making changes in small refactoring steps that keep the system running the whole time.

Evolutionary design helps you avoid veering off the road into either of two ditches. On one side, your software can fall into under-design or no design: once you get the code working you immediately move on without further thought. If a new feature doesn’t fit into the existing code very well, you hack it in with complex conditional logic until it works. The problem with under-design is that its costs compound over time. Every time you put in a hack, it increases the likelihood that the next feature will not fit well either, necessitating an even bigger hack. The codebase turns into a “big ball of mud.” And if you had any hopes of adding tests to understand the behavior and prevent regressions, each hack makes writing tests harder as well.
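
To make that concrete, here is an invented Go sketch of the shape under-design takes: a pricing function where each new requirement was hacked in as one more conditional, so every rule has to thread around the existing ones. None of this comes from a real codebase:

package main

import "time"

type item struct {
	base     float64
	category string
}

type user struct{ vip bool }

// price shows under-design compounding: each requirement (all invented
// for illustration) was hacked in as one more conditional. Every new
// rule has to dodge the existing ones, and testing the interactions
// gets harder with each addition.
func price(it item, u user) float64 {
	p := it.base
	if u.vip {
		p *= 0.9
	}
	if it.category == "book" && !u.vip { // added later: book promo
		p *= 0.95
	}
	if time.Now().Month() == time.December && it.category != "book" { // later still: holiday discount
		p -= 5
	}
	return p
}

func main() {}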

If you try to avoid under-design, the other ditch your software can fall into is over-design, or premature design. You try to think through everything in the code that could change someday, and you make a configuration option or extension point for each. But you can’t predict the future perfectly, so some of your guesses will be wrong: some of your configuration points won’t be needed and will add indirection without benefit, while other changes will be needed that have no configuration point, so you’ll still need hacks.

Evolutionary design avoids the dilemma of having to choose between under-design and over-design. You build an excellent design for today, and you adjust it to new requirements tomorrow.

Something Missing?

Now, if you’re an experienced software designer you may be thinking I’m leaving out something essential. You may be asking yourself “isn’t object-oriented design necessary to accomplish this?” Or “doesn’t a good type system make this easier?” Or “aren’t you forgetting test-driven development?” Although all of these techniques and more can be helpful to achieving evolutionary design, none is essential. For example:

  • Functional and object-oriented paradigms optimize for two different types of change, and a given project may benefit from one, the other, or both.
  • Test-driven development is a great way to get the thorough test coverage necessary for evolutionary design. But it is less of a natural fit for some types of code and for some people’s wiring (as argued by Kent Beck and Martin Fowler). In those cases, you might choose to take alternative approaches to achieve thorough test coverage.
  • Good modern static type systems can provide tooling support to help you make changes safely. A tradeoff is that their rigidity may also cause friction, leading you to defer evolving your system until you’ve dug a hole so deep that it’s hard to dig back out.
  • Decoupling systems into separate services or serverless functions simplifies each piece, which makes it easier for each piece to evolve. But it also makes it harder to verify that the pieces keep interacting with one another correctly as they evolve.

If there is a particular collection of the above techniques that you consider essential, I’m not asking you to give them up; I’m encouraging you to shift how you think about them. If your goal is to practice evolutionary design, think of each technique as a means to an end. This mindset opens you up to the possibility that the ideal set of techniques might differ between individuals, teams, platforms, business domains, and points in time. Separating the end (evolutionary design) from the means allows us to find more common ground and learn from one another to drive the practice of evolutionary design forward.

Distinguishing the means from the end has been the bulk of my professional journey for the past six years. I first learned about evolutionary design in the Ruby world, where dynamic typing, object-oriented design, and test-driven development were paramount. The message I got, explicitly or implicitly, was that those techniques are an essential part of evolutionary design. But since then I’ve seen the question from additional perspectives: I’ve seen how static typing helps communicate APIs to large teams, how React.js’s function-based API provides its own kind of flexibility, and how test-driven development is more costly for some types of programs and some types of programmers. In those situations I found that the specific techniques weren’t the essential thing I was reaching for; the essential thing was “how can I evolve my code over time with confidence?”

Next Steps

Whenever a new software technology is introduced, there will be utopians who claim that it guarantees maintainable code without any design effort, and fatalists who argue that it prevents good software design altogether. (If these sound like exaggerations, take half an hour to read the comments on the tech news social media site of your choice!) Unsurprisingly, neither extreme is correct. Instead, new technologies challenge our conceptions of what is “essential” in software design, so that something we previously thought was “the whole thing” becomes just one possible tool. At their best, new technologies provide innovative software design tools that bend the curve of what’s possible in our code, giving us more benefits for less cost. But ultimately we developers are at the wheel of our projects, and we are responsible for steering. Will we veer off the road into under-design and get stuck in a big ball of mud? Will we veer into over-design and get stuck in a cumbersome structure? Or will we write code that is flexible enough to adjust to whatever the future brings?

If you’re interested in the practice of evolutionary design, where do you go from here? Just hearing about the concept of evolutionary design isn’t enough to equip you to do it–there is a lot more to learn and to unlearn. Unfortunately, as I mentioned earlier, most writing about evolutionary design also includes a lot more specifics beyond the essentials. At Big Nerd Ranch we’re exploring this topic and are considering developing more resources. If you’re interested in getting more resources from us, let us know!

In the meantime, classic books on evolutionary design are still your best bet—just don’t feel the pressure to accept all the specifics they advocate. I’d recommend starting with Refactoring, Second Edition by Martin Fowler—the first two chapters in particular are an excellent survey of and argument for evolutionary design. If you’re already familiar with the literature on evolutionary design, and you feel like you’re the only one in your current technology stack who is, don’t be discouraged. Instead, look for ways to apply these principles yourself, then show others—once you’ve put in the work, others may see the benefits and get interested. You might consider rereading those classic books, not so much to learn new things as to separate specific techniques you’ve used in one ecosystem from the general principles that can apply anywhere.

The post Software Maintainability in New Technologies appeared first on Big Nerd Ranch.

Top Five Things I Learned at SRECon22 Americas https://bignerdranch.com/blog/top-five-things-i-learned-at-srecon22-americas/ https://bignerdranch.com/blog/top-five-things-i-learned-at-srecon22-americas/#respond Thu, 24 Mar 2022 19:14:13 +0000 https://bignerdranch.com/?p=9352 As a full-stack web developer, I attended SRECon to expand my thinking about the reliability and observability of the services I develop. Here are my top 5 takeaways: 1. Evaluating Your Program – Reaction, Learning, Behavior, Results Casey Rosenthal’s talk titled “The success in SRE is silent” reminded us that while nobody thanks you for […]

As a full-stack web developer, I attended SRECon to expand my thinking about the reliability and observability of the services I develop. Here are my top 5 takeaways:

1. Evaluating Your Program – Reaction, Learning, Behavior, Results

Casey Rosenthal’s talk, titled “The success in SRE is silent,” reminded us that while nobody thanks you for the incident that didn’t happen, you can still evaluate how the people around you are learning. First, check their reaction to the changes: thumbs up or thumbs down. Eventually, they will be able to gauge whether they’ve learned something. After that, you may notice shifts in behavior, such as asking in Slack for help setting up a monitor (where before, they might not have added a monitor at all). Finally, look for results: new things making it to production, such as that new monitor.

2. Brownouts – Intentional Degradation to Avoid Blackout

Alper Selcuk shared Microsoft’s response to the massive expansion in the use of Microsoft Teams within education at the beginning of the pandemic. One of their techniques for avoiding service blackouts was brownouts, such as no longer displaying the cursor locations of other users on a shared document, preloading fewer events on the calendar, and decreasing the quality of videos on conference calls. This allowed Microsoft to keep the services online while increasing capacity and optimizing the service for the new load level. What brownouts could be applied to your service if it were to experience a sudden increase in demand?
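
As a minimal sketch of the idea (not Microsoft’s implementation, which wasn’t shared), here is a load-aware gate in Go that skips optional work, like other users’ cursor locations, when the service is under pressure. The threshold, the load metric, and the handler are all invented for illustration:

package main

import (
	"encoding/json"
	"net/http"
	"sync/atomic"
)

// inFlight counts concurrent requests; above the threshold, optional
// features are "browned out" so core functionality stays available.
var inFlight atomic.Int64

const brownoutThreshold = 1000

type document struct {
	Body    string   `json:"body"`
	Cursors []string `json:"cursors,omitempty"` // optional extra
}

func handleDocument(w http.ResponseWriter, r *http.Request) {
	inFlight.Add(1)
	defer inFlight.Add(-1)

	doc := document{Body: "core content"} // the part that must always work
	if inFlight.Load() <= brownoutThreshold {
		doc.Cursors = []string{"alice:12", "bob:40"} // skipped under load
	}
	json.NewEncoder(w).Encode(doc)
}

func main() {
	http.HandleFunc("/doc", handleDocument)
	http.ListenAndServe(":8080", nil)
}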

3. Skydiving and SRE – When to Stop Fixing and Fail to the Backup

Victor Lei applied his skydiving experience to disaster recovery. In skydiving, there is a specific altitude at which you stop trying to fix your main parachute and decide what’s next, and another altitude at which the skydiver automatically fails over to their backup parachute. Timeboxing is a technique for limiting the time spent testing a new idea or optimization, but it’s easy to lose track of time during a disaster. I’d like to see more guidelines for how long an on-call engineer should try to fix a problem before failing over to the backup or calling in additional support.

4. Emergent Organizational Failure – Trust

Mattie Toia discussed emergent organizational failure. One point was forgetting how hard prioritization is, which can be helped by collaborating on mental models and by making sharing and communication easy. Another was the danger of using incentives as a substitute for dedication when the organization instead needs to demonstrate trust through its actions. At the center of all five points was trust: how to build it, and how to recognize that each member of the organization is complex and has their own view of the world and of the organization.

5. Scientific Method for Resilience – Observe, Research, Hypothesis, Test, Analyze, Report

Christina Yakomin explained how to use the scientific method to test the resilience of systems.

  • First, consider your system and all its parts. Then, research all the ways the system might be able to fail. (Newer engineers are especially helpful with this since they are less likely to dismiss failure paths that long-time engineers might ignore.)
  • For each failure path, hypothesize about what will happen. (Make sure everyone can share their thoughts on what will happen rather than just agreeing with the first person to respond.)
  • Then, test the failure path and see what happens. (Note: if you’re planning to try something extreme, like taking the entire database offline, you might have to test in staging instead of production, but be sure to simulate real load during the test.)
  • Analyze your findings. Even if the results matched what was expected, is that the behavior you want your system to have?
  • Report the findings and document the test process since you will likely want to repeat this test in the future.
  • Finally, repeat this process regularly (perhaps quarterly or yearly).

Summary

I look forward to helping each project I’m on continue to grow in features, reliability, and observability to weather the good times and the bad.

SRECon is an open-access conference; videos of all the talks will be freely available from Usenix in the coming weeks.

The post Top Five Things I Learned at SRECon22 Americas appeared first on Big Nerd Ranch.

How I finally closed my Google Chrome browser windows on my Mac https://bignerdranch.com/blog/how-i-finally-closed-my-google-chrome-browser-windows-on-my-mac/ https://bignerdranch.com/blog/how-i-finally-closed-my-google-chrome-browser-windows-on-my-mac/#respond Wed, 03 Nov 2021 10:00:04 +0000 https://bignerdranch.com/?p=9135 Features like Google Chrome's tab groups allow you to organize your tabs within a window. Tab groups can also be expanded or collapsed as you work on different groups of tabs throughout the day. Learn more ways to organize your workflow with separate browser sessions that you can close and restore using a shell script from the command line.

Do you use Google Chrome on a Mac? Do you ever find yourself with multiple browser tabs open? Or multiple browser windows each with multiple tabs open? Do you ever wish you could close a particular browser window or group of browser windows and restore them later?

Features like Google Chrome’s tab groups allow you to organize your tabs within a window. Tab groups can also be expanded or collapsed as you work on different groups of tabs throughout the day.

Continue reading if you’d like to learn how to further organize your workflow with separate browser sessions that you can close and restore using a shell script from the command line.

At Big Nerd Ranch I often find myself working on multiple tasks throughout the week. I found myself with multiple browser windows open, each with multiple tabs open. Each browser window represented some unit of work I was working on. Sometimes a group of windows represented a unit of work. As the number of browser windows grew, it became hard to find the window or group of windows I needed when I needed them.

This challenge led me to explore options on how to start Google Chrome from the command line on my Mac. My goal was to be able to start, save, and restore named browser sessions that were easy to find.

TL;DR

The code for this script is available on GitHub at BNR-Developer-Sandbox/BNR-blog-chrome-management.

Here’s the script I landed on:

#!/bin/bash

WORKING_ON=$1; # get directory name from first command line argument

# Create working directory and error log if needed
mkdir -p ${WORKING_ON};
touch ${WORKING_ON}/error.log;

echo "Working on... ${WORKING_ON}";
echo "Google Chrome error log at ./${WORKING_ON}/error.log";
echo "Use ^C to close your browser session.";

# Create or Restore a Google Chrome session based on what you are working on
# Redirect Google Chrome output errors to error.log
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
  --user-data-dir="./${WORKING_ON}" --restore-last-session \
  2>> ./${WORKING_ON}/error.log;

GOOGLE_CHROME_EXIT=$?; # capture Google Chrome exit code

echo;

if [ $GOOGLE_CHROME_EXIT -eq 0 ]
then
  echo "Google Chrome Session saved: ${WORKING_ON}";
else
  echo "Google Chrome exited with error code: ${GOOGLE_CHROME_EXIT}.";
  echo "See ./${WORKING_ON}/error.log for error details.";
fi

echo "Run rm -r ./${WORKING_ON} to permanently delete this session."
echo "Re-run ./work-on.sh ${WORKING_ON} to restore this session.";

exit $GOOGLE_CHROME_EXIT;

To install the script, you can clone the repository:

$ git clone git@github.com:BNR-Developer-Sandbox/BNR-blog-chrome-management.git Browsers

Cloning into 'Browsers'...
remote: Enumerating objects: 26, done.
remote: Counting objects: 100% (26/26), done.
remote: Compressing objects: 100% (25/25), done.
Receiving objects: 100% (26/26), 4.24 KiB | 2.12 MiB/s, done.
remote: Total 26 (delta 8), reused 2 (delta 0), pack-reused 0
Resolving deltas: 100% (8/8), done.

Then, change into the Browsers directory where the work-on.sh script is found:

$ cd Browsers

Now you can execute work-on.sh to run the script and use ^C to close the script:

$ ./work-on.sh BNR   
Working on... BNR
Google Chrome error log at ./BNR/error.log
Use ^C to close your browser session.
^C
Google Chrome Session saved: BNR
Run rm -r ./BNR to permanently delete this session.
Re-run ./work-on.sh BNR to restore this session.

The Problem…

I had way too many browser windows open. Each with multiple tabs. Each window represented a different project I was working on. Sometimes I’d have multiple windows open for a project.

My current list of projects includes long-running projects like:

  • my day-to-day email, calendar, and time tracking = 1 browser window
  • my 1-on-1 with my manager and my career goals = 1 browser window
  • 1-on-1 meetings with my team – 1 window for each person = 4 browser windows
  • resources I use on the hiring team for code challenges and interviews = 1 browser window

That’s 7 browser windows so far that I need throughout the week or come back to periodically as needed.

In addition to those long-running projects, I also have more finite initiatives like:

  • 3 blog posts I’m working on including this one = 3 browser windows
  • plans for my next Professional Development Nerd Camp = 1 browser window
  • 13 different Tech Sales engagements = 13 browser windows

That’s 17 more browser windows to sort through! 😱

I was minimizing the browser windows for projects I wasn’t working on at the moment, and my dock began to look cluttered with browser windows.

I have a similar problem on my home computer as I have browser windows open for communication, art, music, and various other projects.

Command-line options for Google Chrome

I knew I could start Google Chrome from the command line and that there were command-line options I could pass to it, so I began my research by reading about Chrome’s command-line switches.

I searched through the list of command-line options and experimented until I found that --user-data-dir and --restore-last-session met my needs. --user-data-dir lets you specify a directory where session data is saved, and --restore-last-session restores the session saved in the directory specified by --user-data-dir.

I started from the command line and then encapsulated the workflow in a simple shell script. This example is implemented on a Mac. See the resources linked above if you’d like to alter this for another operating system. If you are using a different browser, you’ll need to change the path to your browser and update the command line options to match what your browser expects in the work-on.sh shell script.

Writing the Shell Script

The only variable needed was the project name. I didn’t want to remember the command line options I needed or type out the long command each time. So, I wrote the script above to encapsulate what I learned and provide some useful output.

The script starts by saving the first command-line argument, the name of the project you are working on, in a variable called WORKING_ON.

WORKING_ON=$1; # get directory name from first command line argument

The WORKING_ON variable is used to create the directory for the browser session data and an error.log.

# Create working directory and error log if needed
mkdir -p ${WORKING_ON};
touch ${WORKING_ON}/error.log;

Next, the script starts Google Chrome with the desired command-line options for saving and restoring your session.

# Create or Restore a Google Chrome session based on what you are working on
# Redirect Google Chrome output errors to error.log
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
  --user-data-dir="./${WORKING_ON}" --restore-last-session \
  2>> ./${WORKING_ON}/error.log;

The script also redirects error output from Google Chrome to error.log in your ${WORKING_ON} directory.

This was done to remove visual noise from recurring error messages that were being printed out to the terminal. If you examine error.log you might find messages about checking Bluetooth availability and default browser status.

When Google Chrome exits successfully, the script will print out that your session was saved. If Google Chrome exits with an error, the error code and log path will be printed out to the terminal.

GOOGLE_CHROME_EXIT=$?; # capture Google Chrome exit code

echo;

if [ $GOOGLE_CHROME_EXIT -eq 0 ]
then
  echo "Google Chrome Session saved: ${WORKING_ON}";
else
  echo "Google Chrome exited with error code: ${GOOGLE_CHROME_EXIT}.";
  echo "See ./${WORKING_ON}/error.log for error details.";
fi

Next, the script will print out some additional tips to remove your session data or restore your session.

echo "Run rm -r ./${WORKING_ON} to permanently delete this session."
echo "Re-run ./work-on.sh ${WORKING_ON} to restore this session.";

Finally, the script will exit with the exit code returned by Google Chrome.

exit $GOOGLE_CHROME_EXIT;

Installation

In your terminal, clone the BNR-Developer-Sandbox/BNR-blog-chrome-management git repository and specify the directory name where you’d like to save your browser sessions. In this example, you’ll clone the repository to a directory called Browsers.

git clone git@github.com:BNR-Developer-Sandbox/BNR-blog-chrome-management.git Browsers

Next, change into the Browsers directory.

cd Browsers/

From here you will execute the work-on.sh script.

Working on… BNR

Run ./work-on.sh BNR to create a new browser session in a directory called BNR.

The command will print out some information about what you are working on and how to close your browser session from the terminal.

 $ ./work-on.sh BNR
Working on... BNR
Google Chrome error log at ./BNR/error.log
Use ^C to close your browser session.

Google Chrome will start a new instance, and you’ll notice its icon bouncing in your dock.

Click on the new Google Chrome icon and you’ll see a welcome prompt. Choose your default browser and usage statistic settings and click the “Start Google Chrome” button.

You’ll now have a new browser window open at chrome://welcome.

Since we are working on BNR at the moment, open a tab to our homepage and forums.

Close and restore your browser session

When you are done with your browser session, you can quit Google Chrome from the menu (Chrome > Quit Google Chrome) or with the ⌘Q keyboard shortcut. You can also quit the browser session with ^C from your terminal, which is the approach I use.

^C
Google Chrome Session saved: BNR
Run rm -r ./BNR to permanently delete this session.
Re-run ./work-on.sh BNR to restore this session.

When you are ready to work on your BNR project again, use ./work-on.sh BNR to restore your browser session.

Google Chrome will pick up where you left off. If you used multiple browser windows in your session, they will all be restored. Your active tabs and scroll positions will also be restored.

Tab completion makes it easy to find and restore previous sessions

Since the only argument to the work-on.sh script is a directory name, you get tab completion out of the box. I suggest using meaningful prefixes to group related directories together; this complements tab completion well. For example, I have three different browser sessions for blog posts I’m working on, each prefixed with Blog- followed by a hyphen-separated project name.

Remove a project when you are done with it

When you are done with a project and no longer need to restore it, you can remove the session by deleting the related directory. In this example, you can run rm -r ./BNR from your terminal to recursively remove the session directory.

Implementing for other browsers

Maybe you’d like to use a different browser, or you aren’t working on a Mac. If that’s the case, you’ll need to modify the script to meet your needs. Research how to start your desired browser from the terminal and which command-line options you’ll need, then modify the work-on.sh script accordingly. Happy scripting! 😊

Conclusion

Using this small shell script helps me stay focused and pick up where I left off. If you are new to shell scripting, I hope this example served as a good introduction, and I hope you found this workflow useful.

The post How I finally closed my Google Chrome browser windows on my Mac appeared first on Big Nerd Ranch.

Embracing Cloud Native https://bignerdranch.com/blog/embracing-cloud-native/ https://bignerdranch.com/blog/embracing-cloud-native/#respond Sat, 02 Oct 2021 18:34:05 +0000 https://bignerdranch.com/?p=7781 Cloud infrastructure has pushed software towards abstracting the developer away from the operating hardware, making global networks and copious amounts of computing power available over API’s, and managing large swaths of lower tiers of the tech stack with autonomous software. Gone are the days of buying bulky servers to own and here are the times […]

Cloud infrastructure has pushed software toward abstracting the developer away from the operating hardware, making global networks and copious amounts of computing power available over APIs, and managing large swaths of the lower tiers of the tech stack with autonomous software. Gone are the days of buying bulky servers to own; these are the times of renting pieces of a data center to host applications. But how does designing for a cloud environment change your application? How do software teams take advantage of all the advancements coming with this new set of infrastructure? This article will go over three pillars of a “Cloud-Native” application and how you can embrace them in your own software.

Embracing Failure

One incredible paradigm the Cloud has brought forth is captured in the Pets vs. Cattle analogy. It differentiates how we treat our application servers: pets are things we love and care for and never want to die or be replaced, while cattle are numbered, and if one leaves, another can take its place. It may sound cold and disconnected, but it embraces failure and accepts it with the same methodology as “turning it off and on again.” This aligns with the Cloud mentality of adding more virtual machines and disposing of them at will, rather than the old ways of keeping a limited number of in-house servers running because you didn’t have a whole data center available to you.

To utilize this methodology, it must be easy for your app to be restarted. One way to reflect this in your app is to make your server stateless, meaning it doesn’t persist state on its own disk: it delegates state to a database or a managed service that handles state in a resilient way. For connections or stateful attachments to dependencies, don’t fight failure by trying to reconnect when something goes down: just restart the application and let the initialization logic connect again. In cases where that isn’t possible, the orchestration software will kill the application, thinking it’s unhealthy (which it is), and try to restart it again, giving you a faux-exponential-backoff loop.
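
Here is a minimal Go sketch of that fail-fast approach; the Postgres driver, connection string, and polling interval are all assumptions for illustration. The point is that startup is the only connection logic, and losing the dependency simply ends the process so the orchestrator can restart it:

package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq" // driver choice is an assumption
)

func main() {
	// Connect once at startup. If the dependency is unavailable, exit:
	// the orchestrator restarts us, so startup logic doubles as
	// reconnection logic.
	db, err := sql.Open("postgres", "postgres://app@db/app?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	if err := db.Ping(); err != nil {
		log.Fatalf("database unavailable at startup: %v", err)
	}

	// Periodically verify the dependency; on failure, crash rather than
	// running a bespoke reconnect loop, and let the platform restart us
	// with its own backoff.
	for {
		if err := db.Ping(); err != nil {
			log.Fatalf("lost database connection: %v", err)
		}
		time.Sleep(10 * time.Second)
	}
}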

The above treats failure as binary: either the application is working or it isn’t, and the orchestration software handles the unhealthy parts. But there’s another method to complement these failure states, and that’s handling degraded functionality. In this scenario, some of your servers are unhealthy, but not all of them. If you’re already using an orchestration layer, you’ll likely have something to handle this: the software managing your application sees that certain instances are down, reroutes traffic to healthy instances, and returns traffic when the instances are healthy again. But in the scenario where entire chunks of functionality are down, you can plan for that state and handle it. For example, you can return both data and errors in a GraphQL response:

{
  "data": {
    "user": {
      "name": "James",
      "favoriteFood": "omelettes"
    },
    "comments": null
  },
  "errors": [
    {
      "path": [
        "comments"
      ],
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "message": "Could not fetch comments for user"
    }
  ]
}

Here parts of the application were able to return user data, but comments weren’t available, so we return what we have, accepting that failure and working with it rather than returning no data. Just because parts of your application aren’t healthy doesn’t mean the user can’t still get things done with the other parts.

Embracing Agility

A more agile application is quicker to start and schedule when you need more instances of it. In scenarios where the system has determined it needs more clones of your app, you don’t want to wait 5 or more minutes for them to get going. After all, in the Cloud we’re no longer buying physical servers: we’re renting the space and computing power we need, so waiting for applications to reach a healthy state is wasting money. For your users, bulky, slow-to-schedule applications mean a delay in getting more resources and degraded performance or, in worse scenarios, an outage because servers are overloaded while they wait on reinforcements.

Whether you’re coming from an existing application or looking to make a Cloud-Native one from the start, the best way to make an application more agile is to think smaller. This means the server you’re constructing does less, reducing start time and becoming less bloated with features. If your application is large and has unwieldy dependencies on prerequisite software installed on the server, consider removing those dependencies by delegating them to a third party or splitting them into a service started elsewhere. If your application is still too large, consider microservices, where appropriately sized and cohesive pieces of the total application are deployed separately and communicate over a network. Microservices can increase the complexity of operating the total application, but they can also lessen the cognitive load required to manage any individual piece, since each piece is smaller and less coupled to the rest of the whole.

Embracing Elasticity

(Image credit: https://systeminterview.com)

Following the points above, if it’s easier to run instances of your application, it’s easier for software to autonomously manage how many are running. The infrastructure managing your app can monitor traffic or resource usage and add more instances to handle increased load; in times of less usage, it can scale your resources down to match. This is a huge departure from the traditional model of elasticity: previously, you bought servers and maintained them, so you couldn’t plan on just adding more on the fly. To compensate for dynamic load, you had to take the topmost estimate and add buffer room for extra-heavy traffic times. During normal operation, that capacity sat around unused. And to increase capacity, you likely tried to add more capacity to a single machine through upgrades and newer internals.

Again, to benefit from the elasticity the Cloud gives you, it’s best to make that benefit easy to enjoy. You can follow the tips on agility to make your application smaller, but before that, it might be important to make it possible to run many instances of your application in the first place. This can mean removing any logic that counts on a fixed number of instances running, like relying on a single server instance because you need locks around concurrent logic. For scenarios like that, you can use locks provided by your database or your caching solution, as sketched below. All in all, the idea is to look for logical factors that prevent you from running a second or third instance of your application in parallel. Ask yourself what the downsides or complications of adding one more instance of your app would be, and make a list of the barriers. Once you’ve removed them, you’ll find that running tens or hundreds of instances in parallel is now possible.
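
As one sketch of moving a lock out of the process, assuming Postgres and its advisory locks (other databases and caches offer their own equivalents), the helper below coordinates any number of instances through the database; the function name and lock key are invented:

package jobs

import (
	"context"
	"database/sql"
)

// withJobLock runs fn only if this instance wins a database-held lock,
// letting any number of app instances coordinate without an in-process
// mutex. The key 42 is arbitrary: pick one per job you want serialized.
func withJobLock(ctx context.Context, db *sql.DB, fn func() error) error {
	// Postgres advisory locks are per-session, so pin one connection for
	// the lock, the work, and the unlock.
	conn, err := db.Conn(ctx)
	if err != nil {
		return err
	}
	defer conn.Close()

	var got bool
	if err := conn.QueryRowContext(ctx, "SELECT pg_try_advisory_lock(42)").Scan(&got); err != nil {
		return err
	}
	if !got {
		return nil // another instance holds the lock; skip this run
	}
	defer conn.ExecContext(ctx, "SELECT pg_advisory_unlock(42)")

	return fn()
}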

Conclusion

The Cloud has changed the way we think about and run software, and your existing application may need to change to make the best use of it. With so much of the infrastructure managed by autonomous software, new tooling has made it easier than ever to manage entire fleets of applications, further removing the developer from the gritty details. It has pushed software deployments to be more agile, to embrace failure as normal, and to scale by adding instances instead of building faster machines. If you’re not already running with all the Cloud has to offer, give it another look and see if it aligns with your future needs, both for your business and your application.

The post Embracing Cloud Native appeared first on Big Nerd Ranch.

The 5 Steps of the Digital Customer Journey https://bignerdranch.com/blog/the-5-steps-of-the-digital-customer-journey/ https://bignerdranch.com/blog/the-5-steps-of-the-digital-customer-journey/#respond Wed, 22 Sep 2021 20:43:58 +0000 https://bignerdranch.com/?p=7759 A customer’s journey with a business can make all the difference, whether it’s physical or digital. Read on to learn more about a digital customer journey and how you can utilize it for your business.

Picture it: Your coworker tells you about this great new coffee shop in town. Suddenly you start to see flyers and stickers for this coffee shop everywhere on your commute to and from work. You think to yourself that maybe it’s worth giving this coffee shop a chance, so the next day you stop by. Not only is the coffee amazing, but the staff is super friendly. They even give you a free cookie with your purchase. 

You had an excellent experience. This is now your favorite coffee shop.

When we consider the digital side of things, there’s a process that runs from someone first learning of your company all the way to their becoming a loyal customer.

The customer’s first point of contact with your brand, all the way to their final purchase, subscription, or outreach on your website, is known as the digital customer journey.

What is the digital customer journey?

The digital customer journey is the path an online user takes to find your business. It’s a journey that’s made up of every single interaction a customer has with your company, from social media to your actual website.

There are five parts to the digital customer journey experience:

  1. Awareness
  2. Consideration
  3. Fulfillment – commitment (email signup, form fill, something where they interact)
  4. Experience
  5. Loyalty

1. Awareness 

This is the point in the customer journey where you’re on the radar and your next customer first notices your brand. This can happen through many different doors, from word of mouth to social media or ads. This is when first impressions are made and where you can reach your customers. One of the most significant differences between the digital customer journey and the physical one is that you have stronger control over your message in the digital journey. You control where your digital ads are displayed: through social media, your website, and ad placement.

2. Consideration

Your customer has heard of you and is now thinking about how you might be able to help them. This is the step in the customer journey map where they visit your website and actually start to engage with your company and brand, whether directly on the site, through reviews on the internet, or on social media. Once again, you have complete control here. Do you want the customer to truly consider your offer? Having a great website will help with that, and excellent copy with a clear call to action doesn’t hurt either.

3. Fulfillment 

In our coffee shop example, the fulfillment stage means an actual purchase. In the digital world, this could be a purchase, but it could also be a newsletter signup, a follow, or even a page view. Because fulfillment can mean many different things, it’s essential to define exactly what fulfillment looks like for you and your company. As part of your digital strategy, make sure you define your goals. This helps ensure that the fulfillment you’re getting is what you want. Page views are great, but if you really want newsletter signups, make sure that’s what you’re tracking, so you don’t get lost in a sea of data.

No matter what action takes place, fulfillment is the place in your customer journey map where your customer commits. They’ve done their research and have decided that your product or service can solve their problem.

4. Experience

Does your product or service live up to your promises? This is where the customer will determine if what was expected is what they received and determine if they are satisfied. It is also where the first impression of your customer service takes place and can play a huge part in overall customer satisfaction. This is also why it’s essential to know the principles of customer experience for your business. 

So, if you promised a newsletter but didn’t deliver, that’s a bad experience. Or, if they fill out the “Contact Us” form and someone is in touch with them the same day? That’s a fantastic experience. 

Experience is arguably the most important step because this is where retaining a customer and creating loyalty takes place. Without great experience, the first three steps don’t mean much because a bad experience will deter a customer from making it to the next and final step: loyalty.

5. Loyalty

If your product or service delivers exactly what your customer expects, congratulations, you just gained a new loyal customer. Customer loyalty is all about whether or not your customer actually continues to use your service or product.

Although the physical customer experience journey is linear, the digital customer experience takes place as a big ole circle where all the touch points are connected. For example, the loyalty step feeds directly back to the awareness step because if you gain a loyal customer, they are more likely to pass on the good word about your product or service to their friends and family and even share on social media.

The circular digital customer journey tends to get just a bit complex, with certain steps either skipped altogether or combined with another step. For example, let’s say your favorite YouTuber highly recommends a mobile app that, until now, you had never heard of, but after their raving review you suddenly feel compelled to download.

In one fell swoop, you were made aware of this product, considered it, and have followed through with fulfillment.

How The Digital Customer Experience Relates to CX

The digital customer journey is also directly related to Customer Experience because a great Customer Experience also starts with awareness and ends with loyalty. Customer Experience is all about listening to your customers’ wants and needs. If customers feel heard, they are more likely to not only continue purchasing your product but are also likely to share with others how great your product or service is.

Great Customer Experience can also help you figure out why customers are getting stuck at certain stages of the digital customer journey. Maybe customers are making it to the Consideration stage and visiting your website but not actually making a purchase. By conducting exercises such as surveys, you can figure out exactly why customers are not acting the way you expect them to, and how to make the right changes.

Understanding the digital customer journey and how it works for a business is essential to reaching a potential customer and keeping a loyal one. Customer behavior, interaction, and experience, whether digital or physical, make all the difference for a business. Gaining customer loyalty and engagement can build your business community.

It’s Time to Turn Visitors into Loyal Customers

Here at the Ranch, we take the Digital Customer Journey seriously. We know that a great Customer Experience and amazing digital product can turn customers into super fans and help you hit your goals. Since a customer can often make multiple touch points with your brand, we use an iterative approach of proven CX strategies to help you validate your assumptions or offer a new path to success. 

Ready to get started? Get in touch (and we promise an actual human will call you).  

The post The 5 Steps of the Digital Customer Journey appeared first on Big Nerd Ranch.
