Full-Stack Web - Big Nerd Ranch

What is Tech Debt and How Can You Solve for It?

Tech debt (or technical debt) is the build-up of refactoring work that becomes necessary because of choices that prioritized quick delivery in the short term over ideal code quality.

As much as we hate to admit it, we’ve all had to deal with the ever-increasing laundry pile. You rush through your day, from work to dinner to being just flat out too tired to deal with it. Pretty soon, the pile is a mountain and you need an outfit. 

You have two choices: the quick and easy one, where you grab some of the cleanest clothes you can find and spritz them with Febreze, or the longer option of actually washing, drying, and folding the entire pile.

One will save you time but isn’t sustainable for the long run while the other takes longer but sets you up for future success (at least in the sartorial sense). Now, imagine that dirty pile of laundry is your codebase and you’re pretty close to understanding the term technical debt, or tech debt. 

So, what exactly is Tech Debt?

Tech debt (or code debt) refers to the amount of refactoring work you’ll need to do in the future because of corners cut in the name of speed over quality. Don’t take the word debt literally: it has nothing to do with financial or monetary debt. The metaphor simply means that shortcuts create a backlog of future work, and that accumulated workload is what we call tech debt.

Though tech debt is often accumulated with the best of intentions—think the pressure of meeting a strict deadline—these shortcuts will often create problems in the future and restrain your progress when it comes time to implement new features and enhance functionality.

Tech debt can come about in a number of ways. 

Not all tech debt is created equal, and it’s sometimes necessary to accrue some debt to keep projects moving. More often than not, technical debt grows as a result of choices made by the development team, such as:

Development without established design

This is a case of bad up-front definition. While coding before the design stage is complete will get you started earlier, much of that work will have to be refactored later to match designs and requirements. That’s why Design Thinking is critical in your development process and can help you avoid tech or design debt.

Manual testing instead of automated testing 

Tech debt can rear its ugly head as countless hours are spent testing your code manually. Combat this with automated testing instead, which verifies that your existing code still functions properly without you having to run through every possible case by hand.
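
For instance, even one small automated test documents the expected behavior and re-verifies it on every run. Here’s a minimal sketch in JavaScript using Jest; the applyDiscount function under test is hypothetical:

// pricing.test.js: a minimal sketch; applyDiscount is a hypothetical function under test
const { applyDiscount } = require('./pricing');

test('applies a 10% discount to the order total', () => {
  // 10% off a $100 order should come to $90
  expect(applyDiscount(100, 0.1)).toBe(90);
});

Run it once with npx jest, and from then on every change to the codebase gets checked against this expectation for free.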

Merging different branches of code that were developed separately 

Parallel development will ultimately require merging back into a single codebase. Every change made in one branch adds to the work needed to merge the branches.

Consequences of Tech Debt

Tech debt can plague a business and its goals in a few ways. The first is time. Just like the youngest child of any family, tech debt loves itself some attention. And since all code changes add to the existing debt, the more changes and implementations you make, the more time your devs will have to spend paying it down. All of this is time your team could spend advancing the project.

Technical debt also preys on businesses financially. Let’s look at an example of a project with more debt than there is time to handle it. At this point, you really only have two options. 

Option one, you get everything running smoothly at a high level, but the extra time you spent handling the debt caused you to miss your deadline. Now you have an upset client or missed your window to launch. 

Option two? Release the code with its persistent bugs and defects and deal with unhappy users, system outages, and a draining of your resources.  

Neither option is good for business. But since you’re here, you’re on track to handle your company’s tech debt before you’re ever faced with this decision.

Unfortunately, the only “real” way to solve for tech debt is to, well, solve for it. Paying it down is usually a tiresome process with plenty of code refactoring; there’s really no way around existing tech debt other than rewriting the code and doing it better than before.

But if you’re starting a project, how can you best avoid it in the first place?

How to Avoid Tech Debt 

Avoiding or managing technical debt is important for every business. Tech debt does a lot of things to a business, but one thing it doesn’t do is discriminate. Whether you’re a startup eager to reach the market and acquire users or an industry giant trying to add new features to old code, tech debt will be looming behind every decision you make.

A company would rarely need to completely pay off all its tech debt, but you do need a plan in place to keep it from reaching damaging levels. From avoiding bad code to evaluating code quality to managing the debt that remains, there are ways to keep technical debt from becoming significant.

Here’s how: 

Evaluate the current scope of your sprints.

Factor in the current debt, if it already exists, and decide if the velocity of each sprint should be pulled back to allow more time to chip away at existing debt, handle new debt, or some combination of the two. Depending on what works best for your team, either set aside some portion of each sprint to handle that sprint’s debt or devote an entire sprint to handling tech debt.

Establish a clear definition of “done” for your work and a baseline for code quality.

When evaluating code quality, several methods can help, including peer code reviews, documentation rules, debugger tools, and automated tests. If you’re interested in improving code quality and could use some help, check out our blogs on How We Make Sure Our Code Meets Our High Standards and Why ‘Good Enough’ Isn’t Good Enough for Our Clients.

Consider implementing code audits.

At Big Nerd Ranch, we often partner with companies to assist with digital product development. At times, we are expected to enter seamlessly into the flow of an ongoing project, and we can’t let technical debt prevent us from progressing. 

We achieve this seamless transition through code audits, which serve as our “tech debt dogs,” dedicated to sniffing out pre-existing issues. A code audit is a detailed, comprehensive breakdown of source code, usually following a three-step process: a front-end code review, then the backend, and finally the infrastructure. Audits unveil bugs and bad code while familiarizing our team with the project’s logic and documentation, saving an estimated 33 hours of work for every hour spent in an audit.

Technical debt can harm the efficiency of your team and the quality of your product. Like other problems, though, it has a solution. Start incorporating technical debt into your sprint planning and implementing code audits to get a grip on tech debt before it costs you any more unnecessary work or money during development.

The Nerds Can Squash Your Tech Debt  

Need some help getting ahead of your project’s technical debt? Here at the ranch, we provide detailed code audits, excellent developers, experienced project strategists, and the highest quality code to ensure your project is never hindered by technical debt. We’d love to help you get out in front of your debt as soon as possible. Reach out today and you can kiss your project’s tech debt goodbye.

Testing Webpacker Apps with RSpec System Tests

Rails 5.1 was a big step forward in terms of building rich JavaScript frontends in Rails. It included Webpacker, a build pipeline that allows using the latest and greatest JavaScript language features and frameworks, as well as system tests, which can simulate user interaction in a real browser. Many Rails developers use RSpec for testing, and RSpec also includes support for system tests—but the information on how to use them is a bit dispersed.

To help with this, let’s walk through an example of using RSpec system tests. We’ll set up a Rails app using Webpacker, then set up an RSpec system test that exercises our app, including our JavaScript code, in Chrome.

Setting Up Webpacker

Create a new Rails project excluding Minitest and including Webpacker configured for React. Webpacker can be preconfigured for a number of different JavaScript frameworks and you can pick whichever you like, or even vanilla JavaScript. For the sake of this tutorial, we’ll use React; we won’t be touching any React code, but this demonstrates that this testing approach works with React or any other frontend framework.

$ rails new --skip-test --webpack=react rspec_system_tests

Webpacker organizes your code around the concept of “packs,” root files that are bundled into separate output JavaScript files. Webpacker will set up a sample pack for us in app/javascript/packs/hello_react.jsx, but it isn’t used in our app automatically. To use it, we need to create a route that’s not the default Rails home page, then add it to our layout.

Add a root route in config/routes.rb:

 Rails.application.routes.draw do
   # For details on the DSL available within this file, see https://guides.rubyonrails.org/routing.html
+  root to: 'pages#home'
 end

Create the corresponding app/controllers/pages_controller.rb:

class PagesController < ApplicationController
end

We don’t need to define a #home action on that controller, because when Rails attempts to access an action that isn’t defined, the default behavior will be to render the corresponding view. So let’s just create the view, app/views/pages/home.html.erb and put some text in it:

Hello Rails!

Now, to get our hello_react pack running, let’s add it to the head of app/views/layouts/application.html.erb:

   <%= stylesheet_link_tag 'application', media: 'all', 'data-turbolinks-track': 'reload' %>
   <%= javascript_pack_tag 'application', 'data-turbolinks-track': 'reload' %>
+  <%= javascript_pack_tag 'hello_react' %>
 </head>

Run your server:

$ rails s

Load http://localhost:3000 in a browser and you should see:

Hello Rails!
Hello React!

So our React pack is working. Great!

Setting Up RSpec

Now let’s get it tested. Add rspec-rails and a few other gems to your Gemfile:

 group :development, :test do
   # Call 'byebug' anywhere in the code to stop execution and get a debugger console
   gem 'byebug', platforms: [:mri, :mingw, :x64_mingw]
+  gem 'rspec-rails'
 end

+group :test do
+  gem 'capybara'
+  gem 'selenium-webdriver'
+end

rspec-rails should be added to both the :development and :test groups so its generators can be run from the command line. capybara provides test methods for us to simulate users interacting with our app, and selenium-webdriver lets us interact with real browsers to run the tests.
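
To give a sense of what Capybara provides, here’s a minimal sketch of the kinds of interactions it can simulate (the page, field names, and content here are hypothetical, not part of our app):

# A hypothetical Capybara interaction, for illustration only
it 'signs the user in' do
  visit '/login'
  fill_in 'Email', with: 'user@example.com'
  click_button 'Sign in'
  expect(page).to have_content('Welcome back')
end

We’ll only need visit and have_content for this tutorial, but the same API scales up to filling in forms and clicking through multi-step flows.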

Ask Rails to set up the necessary RSpec configuration files:

$ rails generate rspec:install

You’ll also need to install chromedriver, a tool for running Google Chrome in tests. Download ChromeDriver directly or install it with Homebrew:

$ brew install --cask chromedriver

Getting Our Test Working

Now we’re ready to write our test! In older versions of RSpec, tests that simulated user interaction with Capybara were called feature tests; now that Rails has built-in system testing functionality, it’s recommended to write RSpec system tests instead, which use that same testing infrastructure under the hood.

Generate a system test:

$ rails generate rspec:system hello_react

This creates the humorously-pluralized hello_reacts_spec.rb with the following contents:

require 'rails_helper'

RSpec.describe "HelloReact", type: :system do
  before do
    driven_by(:rack_test)
  end

  pending "add some scenarios (or delete) #{__FILE__}"
end

Replace the pending line with our test:

-  pending "add some scenarios (or delete) #{__FILE__}"
+  it 'should render a React component' do
+    visit '/'
+    expect(page).to have_content('Hello React!')
+  end
 end

Run the test:

$ bundle exec rspec

Oh no, it fails! Here’s the error:

Failures:

  1) HelloReact should render a React component
     Failure/Error: expect(page).to have_content('Hello React!')
       expected to find text "Hello React!" in "Hello Rails!"

It looks like our test is seeing the “Hello Rails!” content rendered on the server in our ERB file, but not the “Hello React!” content rendered on the client by our JavaScript pack.

The reason for this is found in our test here:

RSpec.describe "HelloReact", type: :system do
  before do
    driven_by(:rack_test)
  end

By default, when we generate an RSpec system test, the test specifies that it should be driven_by(:rack_test). Rack::Test is a testing API that allows you to simulate using a browser. It’s extremely fast, and that’s why it’s the default for RSpec system tests.

The downside of Rack::Test is that because it doesn’t use a real browser, it doesn’t execute JavaScript code. So when we want our tests to exercise Webpacker packs, we need to use a different driver. Luckily this is as easy as removing the before block:

 RSpec.describe "HelloReact", type: :system do
-  before do
-    driven_by(:rack_test)
-  end
-
   it 'should render a React component' do

Rails’ system test functionality uses selenium-webdriver by default, which connects to real browsers such as Google Chrome. When we don’t specify the driver in our test, selenium-webdriver is used instead.

Run the test again. You should see Google Chrome popping up and automatically navigating to your app. Our test passes! We’re relying on Chrome to execute our JavaScript, so we should get maximum realism in terms of ensuring our JavaScript code is browser-compatible.

One more useful option for a driver is “headless Chrome.” This runs Chrome in the background so a browser window won’t pop up. This is a bit less distracting and can run more reliably on CI servers. To run headless Chrome, add the #driven_by call back in with a new driver:

 RSpec.describe "HelloReact", type: :system do
+  before do
+    driven_by(:selenium_chrome_headless)
+  end
+
   it 'should render a React component' do

When you rerun the test, you’ll see a Chrome instance launch, but you should not see a browser window appear.
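
If you’d rather not repeat that before block in every spec file, you can set the driver once for all system specs. Here’s a minimal sketch of an RSpec support file (the file path is just a convention; make sure rails_helper.rb requires your support files):

# spec/support/system_driver.rb (a conventional, hypothetical path)
RSpec.configure do |config|
  config.before(:each, type: :system) do
    driven_by(:selenium_chrome_headless)
  end
end

Individual spec files can still override the driver with their own before block when needed.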

Alternatives

System tests are Rails’ built-in mechanism for end-to-end testing. An alternative end-to-end testing tool you may want to consider is Cypress. It’s framework agnostic and built from the ground up for rich frontend applications. One of the main benefits of Cypress is a GUI that shows your executing tests. It allows you to step back in time to see exactly what was happening at each interaction, even using Chrome Developer Tools to inspect the state of your frontend app in the browser.

But Rails’ system tests have a few benefits over Cypress as well. For experienced Rails developers, it’s helpful to write your system tests with the familiar RSpec and Capybara API and run them as part of the same test suite as your other tests. You can also directly access your Rails models to create test data in the test itself. In the past, doing so required something like the database_cleaner gem because the server was running in a separate process, but Rails system tests handle wrapping both the test and server in the same database transaction. Because Cypress has no knowledge of Rails, setting up that data there takes some custom work.

Whether you go with Rails system tests or Cypress, you’ll have the tooling you need to apply your testing skills to rich JavaScript applications.

Completing the CircleCI

Test automation and continuous integration can streamline your workflow. Thankfully, cloud and SaaS solutions have turned a bare-metal headache into a configuration file and a few command-line tools. CircleCI is one of these solutions, but it takes some debugging and tinkering to get it running. Let’s explore.

It’s nice to have the sense of security and confidence in your code that testing can provide, but running tests manually before every push can feel like a real chore. Thankfully there are some very handy cloud-based SaaS solutions that can outsource this repetitive task.

The solution we’ll be working with is CircleCI. If you are unfamiliar with continuous integration and deployment, take a look at Lev Lazinskiy’s explanation.

Before We Start

Get or Make the App

This repository has some starter code. Alternatively, you can install vue-cli and set up a new project. After running create, remember to choose a configuration that includes E2E testing with Cypress, since the pipeline below depends on it.

Cool. You should be good to go. Take a minute to cd into your new app, spin up the local dev server, run the tests, and familiarize yourself with the project. While you’re there, check that you have a CircleCI config file at the right path. If you don’t, add it:

$ touch .circleci/config.yml

Link Everything Up

Sign in to your CircleCI account and navigate to the Add Projects tab. Search for your Vue Hello World repo and select Add Project. Preflight checklist done. Let’s set up our CI/CD.

CircleCI Configuration

Orbs and Workflows

Orbs are certified or third-party configurations that streamline an action while still letting you access the underlying configuration options. We are going to use two orbs: the Heroku orb and the Cypress orb. The Heroku orb is in CircleCI’s registry. For the Cypress orb to work correctly, you’ll need to change the security settings in CircleCI to allow uncertified and third-party orbs.

Workflows are a collection of jobs run in parallel or sequence. We’ll define those as we build our configuration file.

Config Setup, Simple

Orbs are only available in version 2.1 of CircleCI, so we need to specify that version on the first line of our config file. After that CircleCI is going to be looking for some defaults. Let’s add those in as well.

# .circleci/config.yml
version: 2.1
jobs:
  build:
    docker: 
      - image: circleci/node:12 # the primary container, where your job's commands are run
    steps:
      - checkout # check out the code in the project directory
      - run: echo "hello world" # print a message to verify action

workflows:
  test_then_build:
    jobs:
      - build

As we begin to add lines to our YAML, we can use the CircleCI command-line tool to validate the config and confirm each step incrementally.

$ circleci config validate
Config file at .circleci/config.yml is valid.

Then you can pack it up and push it to GitHub. The build won’t take too long for this one. Take a look at the logs in CircleCI; you should see your echo run in the console.

Adding Orbs

We could build out each step in turn by picking the container, establishing each process and testing it, but we have orbs! Someone has already done that for us. Let’s scratch some of that original file and add our e2e test runner.

# .circleci/config.yml
version: 2.1
- jobs:
-   build:
-     docker: 
-       - image: circleci/node:12 # the primary container, where your job's commands are run
-     steps:
-       - checkout # check out the code in the project directory
-       - run: echo "hello world" # print a message to verify action
+ orbs:
+   cypress: cypress-io/cypress@1 # Versions are changing a lot, leave off specifics to get the latest

workflows:
  test_then_build:
    jobs:
-     - build
+     - cypress/run:
+         start: npm run serve # We need our server running
+         wait-on: 'http://localhost:8080'

Wait! Before you push that… cypress/run is going to run our tests inside the container and needs some direction to find its way. Let’s add a baseUrl to ./cypress.json:

{
  "pluginsFile": "tests/e2e/plugins/index.js",
  "baseUrl": "http://localhost:8080"
}

Now, when our container spins up and runs Cypress, instead of looking for local files it will look for a base URL and run our e2e tests. Add it, commit it, and open up your workflows panel in CircleCI. Passed, with flying colors. Now let’s add the next step:

version: 2.1
orbs:
  cypress: cypress-io/cypress@1 
+ heroku: circleci/heroku@0 # Heroku orb for deployment

workflows:
  test_then_build:
    jobs:
      - cypress/run:
          start: npm run serve # We need our server running
          wait-on: 'http://localhost:8080'
+     - heroku/deploy-via-git:
+          requires: # Makes the above command wait until the required command is complete
+            - cypress/run

Reading down to the jobs section of the Heroku orb, at the deploy-via-git command, you’ll see that CircleCI needs an API key and an app name to complete its remote URI. We can set those up as project environment variables, as sketched below.
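
As a sketch, the variables the Heroku orb conventionally reads look like this (the key comes from heroku auth:token; set both under your project’s Environment Variables settings in CircleCI rather than committing them anywhere):

HEROKU_API_KEY=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
HEROKU_APP_NAME=your-heroku-app-name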

Push it to GitHub and watch it build. Now, as you start to build tests and code, CircleCI will check all your tests and only deploy when they pass. From here you can customize even further: add unit tests, run jobs in parallel, deploy different branches (sketched below), or deploy to different hosting services. There are a lot of possibilities. Take some time and keep exploring.
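
As one example of that customization, here’s a minimal sketch of restricting deployment to a single branch using CircleCI’s branch filters (assuming your deploy branch is master, as in this tutorial):

workflows:
  test_then_build:
    jobs:
      - cypress/run:
          start: npm run serve
          wait-on: 'http://localhost:8080'
      - heroku/deploy-via-git:
          requires:
            - cypress/run
          filters:
            branches:
              only: master # pushes to other branches still run tests, but skip the deploy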

 

Live Updates With Queues, WebSockets, and Push Notifications. Part 6: Push Notifications with Expo

Live updates can get your users the information they need sooner and prevent them from operating on outdated information. To explore the topic, we’re building an app that receives notifications from services like GitHub, Netlify, and Heroku. In part 6, we’ll add push notifications to our Expo app on iOS and Android.

In this series, we’ve managed to build a mobile app that will show us live notifications of events on services like GitHub using WebSockets. It works like this:

[Diagram: a worker pushes data to the client app over a WebSocket]

This is pretty great when we’re running the app. But what about when it’s in the background? In that case, we can use device push notifications to alert the user to events. Here’s the complete architectural diagram of the system, with push notifications added:

Push notifications are available on the web as well (but, as of the time of this writing, not on Safari). However, users are frequently less willing to enable push notifications in web apps than in native mobile apps. Because of this, we chose to build our client app in React Native using Expo. Expo has great support for push notifications across both iOS and Android—let’s give them a try!

If you like, you can download the completed server project and the completed client project for the series.

Asking for Push Permission

Before we can send a push notification to a user, we need to request permission from them. In our Expo app, in src/MainScreen.js, add a new PushPermissionRequester component:

 import React from 'react';
 import { View } from 'react-native';
+import PushPermissionRequester from './PushPermissionRequester';
 import MessageList from './MessageList';

 export default function MainScreen() {
   return (
     <View style={{ flex: 1 }}>
+      <PushPermissionRequester />
       <MessageList />
     </View>
   );
 }

Now let’s implement PushPermissionRequester. Create a src/PushPermissionRequester.js file and enter the following:

import React, { useState } from 'react';
import { View } from 'react-native';
import { Notifications } from 'expo';
import * as Permissions from 'expo-permissions';
import { Button, Input } from 'react-native-elements';

const askForPushPermission = setToken => async () => {

};

export default function PushPermissionRequester() {
  const [token, setToken] = useState('(token not requested yet)');

  return (
    <View>
      <Input value={token} />
      <Button
        title="Ask Me for Push Permissions"
        onPress={askForPushPermission(setToken)}
      />
    </View>
  );
}

This component tracks a push notification token that can be requested, then is displayed afterward. Now let’s fill in askForPushPermission to request it:

const askForPushPermission = setToken => async () => {
  const { status: existingStatus } = await Permissions.getAsync(
    Permissions.NOTIFICATIONS,
  );
  let finalStatus = existingStatus;

  if (existingStatus !== 'granted') {
    const { status } = await Permissions.askAsync(Permissions.NOTIFICATIONS);
    finalStatus = status;
  }

  console.log('push notification status ', finalStatus);
  if (finalStatus !== 'granted') {
    setToken(`(token ${finalStatus})`);
  }

  let token = await Notifications.getExpoPushTokenAsync();
  setToken(token);
};

This is boilerplate code from the Expo Push Notification docs; what’s happening is:

  • We retrieve the existing permission status for push notifications.
  • If permission is not yet granted, we attempt to ask for permission.
  • Either way, if permission is not granted in the end, we display the status we got. If permission is granted, we request the token and set it in the component state to display it.

Reload the Expo app on your virtual device, and tap on “Ask Me for Push Permissions.” You should see the message “(token undetermined),” and a yellow box error at the bottom of the screen. The error says “Error: Must be on a physical device to get an Expo Push Token.”

Time to take this app to your real phone!

Running on Device

On Android, there are a few different ways to get the app running on your physical device. On iOS things are a bit more locked down. Let’s look at the approach that will work for both iOS and Android.

On your phone, search for “Expo” in the App Store or Google Play Store, respectively. This is a client app from Expo that allows you to run your app before going through the whole app publishing process, which is a nice speed boost. Download the Expo app. If you haven’t already created a free Expo account, create one. Then log in to the Expo app.

Now we need to get our React Native application published to Expo so we can load its JavaScript assets into the Expo app. Open the Metro Bundler browser tab that Expo opens when you run it. In the sidebar, click “Publish or republish project…”:

Choose a unique “URL Slug” for your app, then click “Publish project.”

Expo will take a minute or two to bundle up your JavaScript and upload it. Ultimately you should get a box at the bottom-right of the browser window saying “Successfully published to…”

Reopen the Expo app on your phone, go to the Profile tab, and in the “Published Projects” list you should see your app. Tap on it, and it should open and display the initial data from Heroku.

Getting and Testing a Token

Now, tap “Ask Me for Push Permissions” again, and give permission. This time, on a physical device, it should work!

You should see a token that looks like ExponentPushToken[…], with a string of letters and numbers in between the square brackets. This is a token that uniquely identifies your app running in Expo on your device. You can use this to hit Expo’s API to send a push notification.

Select the whole token, copy it, and transfer it to your development computer somehow. Emailing yourself is always an option if nothing else!

Before we code anything, we can test this push notification out through Expo’s Push Notifications Tool. Make sure Expo is in the background on your phone. Then, on your development machine, go to the Push Notifications Tool.

Paste your full token including the string ExponentPushToken into the “Expo Push Token” field. For “Message Title,” type something.

Scroll to the bottom of the page and click “Send a notification”. A push notification should appear on your phone from the Expo app, displaying the title you entered.

Feel free to play around with other fields in the push notification tool as well.

Adding an Expo Module

Now that we have a token, we can provide it to our backend. In a production application, you would set up a way for each user to send that token up to the server and store it with their user account. Since user accounts aren’t the focus of our tutorial, we’re just going to set that token via an environment variable instead.
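
For reference, in a production app that registration endpoint might look roughly like this. This is only a sketch: the route, the userId field, and the repo.savePushToken method are all hypothetical, written in the style of our existing Express routes:

// web/pushTokens.js: a hypothetical sketch, not part of this tutorial's app
const express = require('express');
const bodyParser = require('body-parser');
const repo = require('../lib/repo'); // assumes a savePushToken method we haven't written

const router = express.Router();

router.post('/', bodyParser.json(), (req, res) => {
  const { userId, token } = req.body;
  repo
    .savePushToken(userId, token)
    .then(() => res.end('Token saved'))
    .catch(err => {
      console.error(err);
      res.status(500).end();
    });
});

module.exports = router;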

In our node app, in .env.sample, add the following line:

 CLOUDAMQP_URL=fake_cloudamqp_url
+EXPO_PUSH_TOKEN=fake_expo_push_token
 MONGODB_URI=fake_mongodb_uri

In .env add your token, filling in the real value. This is the value we wanted to keep out of our git repo; you don’t want me to find your push token and send spam to you!

 CLOUDAMQP_URL=amqp://localhost
+EXPO_PUSH_TOKEN=ExponentPushToken[...]
 MONGODB_URI=mongodb://localhost:27017/notifier

Next, add Expo’s SDK as a dependency to your Node app:

$ yarn add expo-server-sdk

As we did with MongoDB and RabbitMQ, let’s wrap Expo’s SDK in a module of our own, to hide it from the rest of our app. Create a lib/expo.js file and add the following:

const Expo = require('expo-server-sdk').default;

const token = process.env.EXPO_PUSH_TOKEN;

const expo = new Expo();

async function push({ text }) {
  if (!Expo.isExpoPushToken(token)) {
    console.error(`Push token ${token} is not a valid Expo push token`);
    return;
  }

  const messages = [
    {
      to: token,
      title: text,
    },
  ];

  console.log('sending to expo push', messages);
  const chunks = expo.chunkPushNotifications(messages);

  for (let chunk of chunks) {
    try {
      let ticketChunk = await expo.sendPushNotificationsAsync(chunk);
      console.log(ticketChunk);
    } catch (error) {
      console.error(error);
    }
  }
}

module.exports = { push };

We export a push function that our app can use to send a push notification. We only use the text field of the message. First, we get the Expo push token from the environment variable and confirm it’s valid. Then we construct a message object with the structure Expo’s Push Notification SDK expects. The SDK is set up to allow sending push notifications in batches, which is a bit overkill in our case, but we work with it. We log the success or error message just in case.

Setting Up a Worker

Now let’s send out a push notification from our worker. In an earlier part we mentioned that you could conceivably separate different webhook endpoints into different microservices or lambda functions for scalability. You could do the same thing with workers. But since we’re hosting on Heroku, which will give us one web dyno and one worker dyno for free, we’ll keep our worker code in a single worker process that is watching multiple queues.

How should we organize this one worker service with multiple concerns? Currently our worker is very small, so adding code to monitor a second queue to the same file wouldn’t clutter it up much. But for the sake of illustrating how to separate concerns, let’s refactor our worker into separate modules.

Create a workers/incoming.js file, and copy and paste the require and handleIncoming code from workers/index.js into it. Then export the handler:

const queue = require('../lib/queue');
const repo = require('../lib/repo');

const handleIncoming = message =>
  repo
    .create(message)
    .then(record => {
      console.log('Saved ' + JSON.stringify(record));
      return queue.send('socket', record);
    });

module.exports = handleIncoming;

Update workers/index.js to import that function instead of duplicating it:

 if (process.env.NODE_ENV !== 'production') {
   require('dotenv').config();
 }

 const queue = require('../lib/queue');
-const repo = require('../lib/repo');
-
-const handleIncoming = message =>
-  repo
-    .create(message)
-    .then(record => {
-      console.log('Saved ' + JSON.stringify(record));
-      return queue.send('socket', record);
-    });
+const handleIncoming = require('./incoming');

 queue
   .receive('incoming', handleIncoming)
   .catch(console.error);

Now, where should we call our push function? In this case, we could probably do it directly in handleIncoming. But when you’re using a queue-based architecture it can be valuable to separate units of work into small pieces; that way if one part fails it can be retried without retrying the entire process. For example, if we can’t reach Expo’s push notification service, we don’t want a retry to inadvertently insert a duplicate record into our database.

So instead, let’s create a new push queue that will receive messages each time we have a push notification to send. In workers/incoming.js, just like we send a message to the socket queue, we’ll send one to the push queue as well:

const handleIncoming = message =>
   repo
     .create(message)
     .then(record => {
       console.log('Saved ' + JSON.stringify(record));
-      return queue.send('socket', record);
+      return Promise.all([
+        queue.send('socket', record),
+        queue.send('push', record),
+      ]);
     });

Note that we wrap our two queue.send calls in a Promise.all() and return the result; that way if either of the sends fails, the rejection will be propagated up and eventually logged with console.error.

Next, add a new workers/push.js file with the following contents:

const expo = require('../lib/expo');

const handlePush = message => {
  console.log('handling push', message);
  return expo.push(message);
};

module.exports = handlePush;

An extremely simple worker, this just forwards the received message along to our Expo module. Connect it in workers/index.js:

 const queue = require('../lib/queue');
 const handleIncoming = require('./incoming');
+const handlePush = require('./push');

 queue
   .receive('incoming', handleIncoming)
   .catch(console.error);
+queue
+  .receive('push', handlePush)
+  .catch(console.error);

With this, we should be set up to send push notifications. Run your two node processes locally:

$ node web
$ node workers

Send a test notification:

$ curl http://localhost:3000/webhooks/test -d "this should be pushed"

You should see the push notification show up on your phone. Note that although your Expo app is pointing to your production server, it still receives the push notification from your local server. This is because we’re using your device’s Expo push token, and it doesn’t know or care about any other backing servers.

Going to Production

Our final step is to get push notifications working in production. Whereas our previous Heroku environment variables were provided for us by add-ons, we need to set our EXPO_PUSH_TOKEN variable manually. There are two ways we can do this:

  • If you’d like to use the CLI, run heroku config:set "EXPO_PUSH_TOKEN=ExponentPushToken[...]" (entering your full token as usual)
  • If you’d like to use Heroku’s web dashboard, pull up your app, then go to “Settings”, then click “Reveal Config Vars”. In the row that has the “Add” button, fill in EXPO_PUSH_TOKEN for the KEY and your token for the VALUE, then click Add.

Commit your latest changes then push them to Heroku:

$ git add .
$ git commit -m "updated for push tokens"
$ git push heroku master

When your app finishes deploying, try sending it a webhook, filling in your app’s URL instead of mine:

$ curl https://murmuring-garden-42327.herokuapp.com/webhooks/test -d "push from production"

You should receive a push notification on your phone.

You can also try toggling your GitHub PR to see that your other webhooks also deliver push notifications now too.

Where We’ve Been

With that, our app is complete! Let’s review what we’ve built one last time:

We’ve been able to hook up to live updates coming from services like GitHub, Heroku, and Netlify. We set up a queue-based architecture so that, on real systems with far more load than this one, each piece of the process can still perform well. We push data to running apps over WebSockets, and to apps in the background using push notifications.

Adding live updates to your mobile or web applications using approaches such as these can be a big boost to your app’s usefulness to your users. If you’re a developer, give these technologies a try. And if Big Nerd Ranch could help train you in these technologies or help build the foundation of a new live-updating application for you, let us know!

Live Updates With Queues, WebSockets, and Push Notifications. Part 5: Deploying to Heroku

Live updates can get your users the information they need sooner and prevent them from operating on outdated information. To explore the topic, we’re building an app that receives notifications from services like GitHub, Netlify, and Heroku. In part 5, we’ll deploy our app to Heroku.

In this series we’ve built a React Native and Node app that receives and passes along notifications from services like GitHub and Netlify. It works great locally, but we can’t keep it running on our machine forever, and every time we restart ngrok the URL changes. We need to get our app deployed somewhere permanent.

If we weren’t using WebSockets, then functions-as-a-service would be a great option for deployment. But since WebSockets are stateful, you need to run a service like Amazon API Gateway in front of the functions to provide WebSocket statefulness, and setting that up can be tricky. Instead, we’ll deploy our app on Heroku, an easy hosting platform that allows us to continue to use WebSockets in the normal way.

If you like, you can download the completed server project and the completed client project for part 5.

Heroku

If you don’t already have a Heroku account, create one for free. (You may need to add a credit card, but it won’t be charged.) Then install and log in to the Heroku CLI.

Go into our node app’s directory in the terminal. If you aren’t already tracking your app in git for version control, initialize a git repo now:

$ git init .
$ echo node_modules > .gitignore

Heroku’s main functionality can be accessed either through the web interface or through the CLI. In this post we’ll be using both, depending on which is easiest for any given step.

To begin, create a new Heroku app for our backend using the CLI:

$ heroku create

This will create a new app and assign it a random name—in my case, murmuring-garden-42327. It will also add a git remote named heroku to your repo alongside any other remotes you may have. You can see this by running the following command:

$ git remote -v
heroku  https://git.heroku.com/murmuring-garden-42327.git (fetch)
heroku  https://git.heroku.com/murmuring-garden-42327.git (push)

We aren’t quite ready to deploy our app to Heroku yet, but we can go ahead and set up our database and queue services. We’ll do this step in the Heroku dashboard. Go to the dashboard, then click on your new app, then the “Resources” tab.

Under Add-ons, search for “mongo”, then click “mLab MongoDB”.

A modal will appear allowing you to choose a plan. The “Sandbox – Free” plan will work fine for us. Click “Provision.”

Next, search for “cloud,” then click “CloudAMQP.”

The default plan works here too: “Little Lemur – Free,” so click “Provision” again.

This has set up our database and queue server. How can we access them? The services provide URLs to our app via environment variables. To see them, click “Settings,” then “Reveal Config Vars.”

From the CLI, you can run heroku config to show the environment variables.

Here’s what they’re for:

  • CLOUDAMQP_APIKEY: we won’t need this for our tutorial app
  • CLOUDAMQP_URL: our RabbitMQ access URL
  • MONGODB_URI: our MongoDB access URL

Using Environment Variables

We need to update our application code to use these environment variables to access the backing services, but this raises a question: how can we set up analogous environment variables in our local environment? The dotenv library is a popular approach: it allows us to set up variables in a .env file in our app. Let’s refactor our app to use dotenv.

First, add dotenv as a dependency:

$ yarn add dotenv

Create two files, .env and .env.sample. It’s a good practice to not commit your .env file to version control; so far our connection strings don’t have any secure info, but later we’ll add a variable that does. But if you create and commit a .env.sample file with example data, this helps other users of your app see which environment variables your app uses. If you’re using git for version control, make sure .env is in your .gitignore file so it won’t be committed.

Add the following to .env.sample:

CLOUDAMQP_URL=fake_cloudamqp_url
MONGODB_URI=fake_mongodb_uri

This just documents for other users that these are the values needed.

Now let’s add the real values we’re using to .env:

CLOUDAMQP_URL=amqp://localhost
MONGODB_URI=mongodb://localhost:27017/notifier

Note that the name CLOUDAMQP_URL is a bit misleading because we aren’t using CloudAMQP locally, just a general RabbitMQ server. But since that’s the name of the environment variable CloudAMQP sets up for us on Heroku, it’ll be easiest for us to use the same one locally. And since CloudAMQP is giving us a free queue server, we shouldn’t begrudge them a little marketing!

The values we set in the .env file are the values from our lib/queue.js and lib/repo.js files respectively. Let’s replace the hard-coded values in those files with the environment variables. In lib/queue.js:

-const queueUrl = 'amqp://localhost';
+const queueUrl = process.env.CLOUDAMQP_URL;

And in lib/repo.js:

-const dbUrl = 'mongodb://localhost:27017/notifier';
+const dbUrl = process.env.MONGODB_URI;

Now, how can we load these environment variables? Add the following to the very top of both web/index.js and workers/index.js, even above any require() calls:

if (process.env.NODE_ENV !== 'production') {
  require('dotenv').config();
}

When we are not in the production environment, this will load dotenv and instruct it to load the configuration. When we are in the production environment, the environment variables will be provided by Heroku automatically, and we won’t have a .env file, so we don’t need dotenv to run.

To make sure we haven’t broken our app for running locally, stop any node processes you have running, then start node web and node workers, run the Expo app, and post a test webhook:

$ curl http://localhost:3000/webhooks/test -d "this is with envvars"

The message should show up in Expo as usual.

Configuring and Deploying

Heroku is smart enough to automatically detect that we have a node app and provision an appropriate environment. But we need to tell Heroku what processes to run. We do this by creating a Procfile at the root of our app and adding the following:

web: node web
worker: node workers

This tells Heroku that it should run two processes, web and worker, and tells it the command to run for each.

Now we’re finally ready to deploy. The simplest way to do this is via a git push. Make sure all your changes are committed to git:

$ git add .
$ git commit -m "preparing for heroku deploy"

Then push:

$ git push heroku master

This pushes our local master branch to the master branch on the heroku remote. When Heroku sees changes to its master branch, it triggers a deployment. We’ll be able to see the deployment process as it runs in the output of the git push command.

Deployment will take a minute or two due to provisioning the server and downloading dependencies. In the end, we’ll get a message like:

remote:        Released v7
remote:        https://murmuring-garden-42327.herokuapp.com/ deployed to Heroku

We have one more step to do. Heroku will start the process named web by default, but we need to start any other processes ourselves. In our case, that’s the worker process. We can do this a few different ways:

  • If you want to use the CLI, run heroku ps:scale worker=1. This scales the process named worker to run on a single “dyno” (kind of like the Heroku equivalent of a server)
  • If you want to use the web dashboard instead, go to your app, then to “Resources.” Next to the worker row, click the pencil icon to edit it, then set the slider to on, and click Confirm.

Testing

Now let’s update our Expo client app to point to our production servers. We could set up environment variables there as well, but for the sake of this tutorial let’s just change the URLs by hand. Make the following changes in MessageList.js, putting in your Heroku app name in place of mine:

 import React, { useState, useEffect, useCallback } from 'react';
-import { FlatList, Linking, Platform, View } from 'react-native';
+import { FlatList, Linking, View } from 'react-native';
 import { ListItem } from 'react-native-elements';
...
-const httpUrl = Platform.select({
-  ios: 'http://localhost:3000',
-  android: 'http://10.0.2.2:3000',
-});
-const wsUrl = Platform.select({
-  ios: 'ws://localhost:3000',
-  android: 'ws://10.0.2.2:3000',
-});
+const httpUrl = 'https://murmuring-garden-42327.herokuapp.com';
+const wsUrl = 'wss://murmuring-garden-42327.herokuapp.com';

Note that the WebSocket URL uses the wss protocol instead of ws; this is the secure protocol, which Heroku makes available for us.

Reload your Expo app. It should start out blank because our production server doesn’t have any data in it yet. Let’s send a test webhook, again substituting your app’s name for mine:

$ curl https://murmuring-garden-42327.herokuapp.com/webhooks/test -d "this is heroku"

You should see your message show up. We’ve got a real production server!

Next, let’s set up a GitHub webhook pointing to our Heroku server. In the testing GitHub repo you created, go to Settings > Webhooks. Add a new webhook and leave the existing one unchanged; that way you can continue receiving events on your development server as well.

  • Payload URL: your Heroku app URL, with /webhooks/github appended.
  • Content type: change this to application/json
  • Secret: leave blank
  • SSL verification: leave as “Enable SSL verification”
  • Which events would you like to trigger this webhook? Choose “Let me select individual events”

  • Scroll down and uncheck “Pushes” and anything else if it’s checked by default, and check “Pull requests.”
  • Active: leave this checked

Now head back to your test PR and toggle it open and closed a few times. You should see the messages show up in your Expo app.

Congratulations — now you have a webhooks-and-WebSockets app running in production!

Heroku Webhooks

Now that we have a Heroku app running, maybe we can set up webhooks for Heroku deployments as well!

Well, there’s one problem with that: it can be hard for an app that is being deployed to report on its deployment.

Surprisingly, if you set up a webhook for your Node app, you will get a message that the build started. It’s able to do this because Heroku leaves the existing app running until the build completes, then swaps out the running version. You won’t get a message over the WebSocket that the build completed, however; by that time the app has been restarted and your WebSocket connection is lost. The success message has been stored to the database, though, so if you reload the Expo app it will appear.

With that caveat in place — or if you have another Heroku app that you want to set up notifications for — here are a few pointers for how to do that.

To configure webhooks, open your site in the Heroku dashboard, then click “More > View webhooks.” Click “Create Webhook.” Choose the api:build event type: that will allow you to receive webhook events when builds both start and complete.

The webhook route itself should be very similar to the GitHub one. The following code can be used to construct a message from the request body:

const {
  data: {
    app: { name },
    status,
  },
} = req.body;

const message = {
  text: `Build ${status} for app ${name}`,
  url: null,
};

Note that the Heroku webhook doesn’t appear to send the URL of your app; if you want it to be clickable, you would need to use the Heroku Platform API to retrieve that info via another call.
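
Here’s a minimal sketch of what that extra call could look like, using the Platform API’s apps endpoint. It assumes a HEROKU_API_TOKEN environment variable and the node-fetch package, neither of which is part of our app:

// A hypothetical helper: look up an app's URL via the Heroku Platform API
const fetch = require('node-fetch');

async function appUrl(appName) {
  const response = await fetch(`https://api.heroku.com/apps/${appName}`, {
    headers: {
      Accept: 'application/vnd.heroku+json; version=3', // selects Platform API version 3
      Authorization: `Bearer ${process.env.HEROKU_API_TOKEN}`,
    },
  });
  const app = await response.json();
  return app.web_url; // e.g. https://your-app.herokuapp.com/
}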

What’s Next?

Now that we’ve gotten our application deployed to production, there’s one more piece we can add to make a fully-featured mobile app: push notifications to alert us in the background. To get push notifications, we’ll need to deploy our Expo app to a real hardware device.

Meeting Customers Where They Are With a Multiexperience Development Strategy

Users expect to interact with your company when and how they choose. This can include via the web, mobile applications, chatbots, voice assistants, and wearable devices. To maximize your company’s potential customer base and customer satisfaction, you need a multiexperience development strategy: a plan for delivering consistent, high-quality apps across a multitude of platforms.

Facing users’ expectations to interact on a variety of platforms, many companies struggle to provide a high-quality customer experience across all of them. It can be a challenge even to keep feature parity between the web and a single mobile platform. As a result, companies will often pick either iOS or Android to launch on and defer their entry onto the other mobile platform until later, limiting their potential customer base. As each of these apps evolves over time, development speed slows down. The functionality of the apps on different platforms drifts as new features don’t make it onto every platform. Bugs appear as different platforms use slightly different logic to interpret data. When it is this challenging just to target web and mobile platforms, adding even more platforms like chat, voice assistants, and wearables is a non-starter.

Failing to provide a high-quality experience for users across different platforms risks losing users on those platforms. Users perceive the apps as out-of-date and unreliable. Large segments of users who prefer to use other platforms will reach for competitors that have offerings on them. The result is the loss of users to competitors with broader reach across platforms.

To avoid this problem, you can use a multiexperience development strategy to satisfy and retain those customers. A multiexperience development strategy is a plan for delivering consistent, high-quality apps across a multitude of platforms. There is no off-the-shelf solution to this that meets the quality demands of the market. Instead, companies will need to apply their application engineering capabilities in an intentional way to come up with their own approach.

When you are thinking about creating a multiexperience development strategy internally or finding a partner to work with you, here are some important areas to plan:

  • Technology Selection: First, you need to decide which platforms are the most important for your company to target and which technologies you should use to build for those platforms. Should you use cross-platform solutions like Flutter, Kotlin Multiplatform, or React Native to help keep your mobile apps in sync while reducing costs, or is traditional native development the best path to the quality you need? For the web, what new browser technologies are available to make your users’ web experience more app-like? Should your company consider building apps for chat, voice assistant, and wearable platforms, and if so, when is the time right to build them? Teams who have experience building for these platforms can provide valuable information to help make these decisions.
  • API-First Development: It’s important to ensure each frontend app gets the data it needs in a consistent way, and one of the best ways to do this is to design your backend API before development begins. Mocking tools such as those provided by SwaggerHub and Apollo GraphQL allow frontend development to begin in parallel with backend development, so frontend apps can uncover unanticipated needs before backend work starts on them, preventing wasted effort. With a consistent API, your company will be ready to spin up additional platforms in the future as soon as the need arises.
  • Cross-Team Collaboration: For a consistent user experience, projects across multiple frontend and backend platforms need to be coordinated. The best way to facilitate this communication is with small, fast, cost-efficient teams. Features should be delivered on a regular basis so they can be rolled out quickly without being blocked by one platform that is moving more slowly. Designers with expertise in mobile and web application design should be integrated with the teams to ensure your apps deliver a world-class experience to your users.
  • Speed of Delivery: Delivery automation technologies are important to ensure that development teams stay focused on building features, not performing repetitive deployment tasks. Continuous Integration servers can run automated tests across all platforms to ensure your apps stay stable for your users as new features are added. Test and production builds can be automatically deployed to low-maintenance Platform-as-a-Service hosting and app distribution tools such as TestFlight and Microsoft App Center, allowing business users to quickly test and provide feedback.

Technology selection, API-first development, cross-team collaboration, and speed of delivery will all significantly impact your company’s ability to implement an effective multiexperience development strategy.

As you are evaluating your internal resources and potential partners, Big Nerd Ranch would be happy to help. We have a breadth of experience developing for the web, native mobile, cross-platform frameworks, and platforms such as chat, voice, and wearables. Our API-first strategies, team collaboration, and speed-of-delivery tooling can help give you momentum developing world-class apps for the right platforms.

Live Updates With Queues, WebSockets, and Push Notifications. Part 4: Webhooks

Live updates can get your users the information they need sooner and prevent them from operating on outdated information. To explore the topic, we’re building an app that receives notifications from services like GitHub, Netlify, and Heroku. In part 4, we’ll write the code to receive webhooks from GitHub.

So far in this series we’ve set up a React Native frontend and Node backend to receive notifications from external services and deliver live updates to our client app. But we haven’t done any real service integrations yet. Let’s fix that! Our app can be used to notify us of anything; in this post we’ll hook it up to GitHub. We’ll also share some tips in case you want to hook it up to Netlify. In a future post, after we’ve deployed the backend to Heroku, we’ll provide tips for hooking up to Heroku as well.

If you like, you can download the completed server project and the completed client project for part 4.

Setup

Before we create any of these webhook integrations, let’s refactor how our webhook code is set up to make it easy to add additional integrations. We’ll keep our existing “test” webhook for easy experimentation; we’ll just add webhooks for real services alongside it.

We could set up multiple webhook endpoints in a few different ways. If we were worried about having too much traffic for one application server to handle, we could run each webhook as a separate microservice so that they could be scaled independently. Alternatively, we could run each webhook as a separate function on a function-as-a-service platform like AWS Lambda (see the sketch below). Each microservice or function would need to be able to send messages to the same queue, but other than that they could be totally independent.
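
For illustration only, a webhook as a standalone function could look something like this hypothetical AWS Lambda handler. It assumes an API Gateway proxy event, and that the queue library from part 1 were packaged so it could reach a shared RabbitMQ instance; none of this is part of the actual project:

// Hypothetical FaaS variant of a webhook endpoint.
const queue = require('./lib/queue');

exports.handler = async event => {
  // event.body carries the raw webhook payload from API Gateway.
  await queue.send('incoming', { text: event.body });
  return { statusCode: 200, body: 'Received' };
};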

In our case, we’re going to deploy our app on Heroku. That platform only allows us to expose a single service to HTTP traffic, so let’s make each webhook a separate route within the same Node server.

Create a web/webhooks folder. Move web/webhook.js to web/webhooks/test.js. Make the following changes so the file only exports the route and no longer sets up a router:

-const express = require('express');
-const bodyParser = require('body-parser');
-const queue = require('../lib/queue');
+const queue = require('../../lib/queue');

 const webhookRoute = (req, res) => {
...
 };

-const router = express.Router();
-router.post('/', bodyParser.text({ type: '*/*' }), webhookRoute);
-
-module.exports = router;
+module.exports = webhookRoute;

We’ll define the router in a new web/webhooks/index.js file instead. Create it and add the following:

const express = require('express');
const bodyParser = require('body-parser');
const testRoute = require('./test');

const router = express.Router();
router.post('/test', bodyParser.text({ type: '*/*' }), testRoute);

module.exports = router;

Now we just need to make a tiny change to web/index.js to account for the fact that we’ve pluralized “webhooks”:

 const express = require('express');
-const webhookRouter = require('./webhook');
+const webhookRouter = require('./webhooks');
 const listRouter = require('./list');
...
 app.use('/list', listRouter);
-app.use('/webhook', webhookRouter);
+app.use('/webhooks', webhookRouter);

 const server = http.createServer(app);

This moves our webhook endpoint from /webhook to /webhooks/test. Now any future webhooks we add can be at other paths under /webhooks/.

If your node web process is running, stop and restart it. Make sure node workers is running as well. You’ll then need to reload your Expo app to re-establish the WebSocket connection.

Now you can send a message to the new path and confirm our test webhook still works:

$ curl http://localhost:3000/webhooks/test -d "this is the new endpoint"

That message should show up in the Expo app as usual.

Making Our Local Server Accessible

We need to do another preparatory step as well. Because we’ve been sending webhooks from our local machine, we’ve been able to connect to localhost. But external services don’t have access to our localhost. One way around this is ngrok, a great free tool that gives you a publicly-accessible URL to your local development machine. Create an ngrok account if you don’t already have one, then sign in.

Install ngrok by following the instructions on the dashboard to download it, or, if you’re on a Mac and use Homebrew, you can run brew install --cask ngrok (older Homebrew versions used brew cask install ngrok). Provide ngrok with your auth token as instructed on the ngrok web dashboard.

Now you can open a public tunnel to your local server. With node web running, in another terminal run:

$ ngrok http 3000

You should see output something like the following (a representative ngrok 2.x session; your subdomain and details will differ):
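
Session Status                online
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://abcd1234.ngrok.io -> http://localhost:3000
Forwarding                    https://abcd1234.ngrok.io -> http://localhost:3000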

In the output, look for the lines that start with “Forwarding”; these show the .ngrok.io subdomain that has been temporarily set up to access your service. Note that there are both HTTP and HTTPS URLs; you may as well use the HTTPS one.

To confirm it works, send a POST to your test webhook using the ngrok URL instead of localhost. Be sure to fill in your domain name instead of the sample one I’m using here:

$ curl https://abcd1234.ngrok.io/webhooks/test -d "this is via ngrok"

The message should appear in the client as usual.

Building the GitHub Webhook

Now that we’ve got a subdomain that can be accessed from third-party services, we’re ready to build out the webhook endpoint for GitHub to hit. Create a web/webhooks/github.js file and add the following:

const queue = require('../../lib/queue');

const webhookRoute = (req, res) => {
  console.log(JSON.stringify(req.body));

  const {
    repository: { name: repoName },
    pull_request: { title: prTitle, html_url: prUrl },
    action,
  } = req.body;

  const message = {
    text: `PR ${action} for repo ${repoName}: ${prTitle}`,
    url: prUrl,
  };

  console.log(message);
  queue
    .send('incoming', message)
    .then(() => {
      res.end('Received ' + JSON.stringify(message));
    })
    .catch(e => {
      console.error(e);
      res.status(500);
      res.end(e.message);
    });
};

module.exports = webhookRoute;

In our route, we do a few things:

  • We log out the webhook request body we received as a JSON string, in case it’s useful.
  • We pull the pertinent fields out of the request body: the repository name, the pull request title and URL, and the action that was taken (opened, closed, etc.); a trimmed sample payload appears after this list.
  • We construct a message object in the standard format our app uses: with a text field describing it and a related url the user can visit.
  • As in our test webhook, we send this message to our incoming queue to be processed.
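
For context, here’s a trimmed sketch of the pull_request payload shape this destructuring expects (the real payload contains many more fields; the values here are made up):

{
  "action": "opened",
  "repository": {
    "name": "notifier-test-repo"
  },
  "pull_request": {
    "title": "Update README.md",
    "html_url": "https://github.com/your-username/notifier-test-repo/pull/1"
  }
}

One caveat: payloads without a pull_request field (such as the “ping” event GitHub sends when you first add a webhook) will make this destructuring throw, and Express will respond with a 500. That’s acceptable for this tutorial, but worth guarding against in real code.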

Connect this new route in web/webhooks/index.js:

 const testRoute = require('./test');
+const githubRoute = require('./github');

 const router = express.Router();
 router.post('/test', bodyParser.text({ type: '*/*' }), testRoute);
+router.post('/github', express.json(), githubRoute);

 module.exports = router;

Note that in this case we aren’t using the bodyParser.text() middleware, but instead Express’s built-in express.json() middleware. This is because we’ll be receiving JSON data instead of plain text.

Restart node web to pick up these changes. You don’t need to restart ngrok.

Testing the Integration

Now let’s create a new repo to use for testing. Go to github.com and create a new repo; you could call it something like notifier-test-repo. We don’t care about the contents of this repo; we just need to be able to open PRs. So choose the option to “Initialize this repository with a README”.

When the repo is created, go to Settings > Webhooks, then click “Add webhook”. Choose the following options:

  • Payload URL: your ngrok domain, with /webhooks/github appended.
  • Content type: change this to application/json
  • Secret: leave this blank. We aren’t using it for this tutorial, but you can use this field to confirm your webhook traffic is coming from a trusted source (see the verification sketch after this list).
  • SSL verification: leave as “Enable SSL verification”
  • Which events would you like to trigger this webhook? Choose “Let me select individual events”, then scroll down, uncheck “Pushes” and anything else checked by default, and check “Pull requests”.
  • Active: leave this checked.
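
For reference, here’s a minimal sketch of what verifying that secret could look like on our server. GitHub signs each delivery with an HMAC of the payload (the X-Hub-Signature-256 header on current GitHub). The sketch assumes a hypothetical GITHUB_WEBHOOK_SECRET environment variable holding the same value you’d enter in the Secret field, and it needs the raw (unparsed) request body, which you could capture with the verify option of express.json():

const crypto = require('crypto');

// rawBody is the unparsed request body; GITHUB_WEBHOOK_SECRET is a
// hypothetical environment variable matching the webhook's Secret field.
const verifyGithubSignature = (req, rawBody) => {
  const signature = req.get('X-Hub-Signature-256');
  if (!signature) {
    return false;
  }

  const expected =
    'sha256=' +
    crypto
      .createHmac('sha256', process.env.GITHUB_WEBHOOK_SECRET)
      .update(rawBody)
      .digest('hex');

  // timingSafeEqual resists timing attacks, but throws if the buffer
  // lengths differ, so check lengths first.
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  return a.length === b.length && crypto.timingSafeEqual(a, b);
};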

Note that your ngrok URL will change every time you restart ngrok. You will need to update any testing webhook configuration in GitHub and other services to continue receiving webhooks.

Now we just need to create a pull request to test out this webhook. The easiest way is to click the edit icon at the top right of our readme on GitHub’s site. Add some text to the readme, then at the bottom choose “Create a new branch for this commit and start a pull request,” and click “Commit changes,” then click “Create pull request.”

In your client app you should see a new message: “PR opened for repo notifier-test-repo: Update README.md”.

If you want to see more messages, or if something went wrong and you need to troubleshoot, you can repeatedly click “Close pull request” then “Reopen pull request;” each one will send a new event to your webhook.

Our test webhook didn’t pass along any URLs. Now that we have messages from GitHub with URLs attached, let’s update our client app to allow tapping on an item to visit its URL. Open src/MessageList.js and make the following change:

 import React, { useState, useEffect, useCallback } from 'react';
-import { FlatList, Platform, View } from 'react-native';
+import { FlatList, Linking, Platform, View } from 'react-native';
 import { ListItem } from 'react-native-elements';
...
       <FlatList
         data={messages}
         keyExtractor={item => item._id}
         renderItem={({ item }) => (
           <ListItem
             title={item.text}
             bottomDivider
+            onPress={() => item.url && Linking.openURL(item.url)}
           />
         )}
       />

Reload the client app, tap on one of the GitHub notifications, and you’ll be taken to the PR in the device’s browser. Pretty nice!

Heroku and Netlify

Now we’ve got a working GitHub webhook integration. We’ll wait a bit to set up the webhook integration with Heroku; first we’ll deploy our app to Heroku. That way we’ll be sure we have a Heroku app to receive webhooks for!

Netlify is another deployment service with webhook support; it’s extremely popular for frontend apps. We won’t walk through setting up Netlify webhooks in detail, but here are a few pointers if you use that service and would like to try integrating.

To configure webhooks, open your site in the Netlify dashboard, then click Settings > Build & deploy > Deploy notifications. Click Add notification > Outgoing webhook. Netlify requires you to set up a separate hook for each event you want to monitor. You may be interested in “Deploy started,” “Deploy succeeded,” and “Deploy failed.”

The webhook route code itself should be very similar to the GitHub one. The following lines can be used to construct a message from the request body:

const { state, name, ssl_url: url } = req.body;

const message = {
  text: `Deployment ${state} for site ${name}`,
  url,
};
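
As with the GitHub route, you’d register this in web/webhooks/index.js with JSON body parsing (a sketch, assuming the route lives in a new web/webhooks/netlify.js file):

const netlifyRoute = require('./netlify');

router.post('/netlify', express.json(), netlifyRoute);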

What’s Next?

Now we’ve got our first real service sending notifications to our app. But the fact that we’re dependent on a changeable ngrok URL feels a bit fragile. To get this running in a stable way, in our next post we’ll deploy our app to production on a free Heroku account.

Live Updates With Queues, WebSockets, and Push Notifications. Part 3: WebSockets
https://bignerdranch.com/blog/live-updates-with-queues-websockets-and-push-notifications-part-3-websockets/ (Mon, 23 Dec 2019)

Live updates can get your users the information they need sooner, and prevent them from operating off of outdated information. To explore the topic, we'll create an app that allows us to receive notifications from services like GitHub, Netlify, and Heroku. In part 3, we'll build out WebSockets functionality to accomplish these live updates.

In parts one and two of this series, we set up a frontend and backend to view notifications from third-party services like GitHub, Netlify, and Heroku. It works like this:

[Diagram: HTTP endpoints send data to a queue, which a worker reads from]

Now our client is set up to view our messages, but we need to quit and restart the app to get any updates. We could add pull-to-refresh functionality, but it’d be much nicer if we could automatically receive updates from the server when a new message is received. Let’s build out WebSockets functionality to accomplish these live updates. Here’s an illustration of how the flow of data will work:

[Diagram: the worker pushes data to the client via a WebSocket]

If you like, you can download the completed server project and the completed client project for part 3.

Adding WebSockets to the Server

There are a few different libraries that can provide WebSocket functionality to Node apps. For the sake of this tutorial, we’ll use websocket:

$ yarn add websocket

In our worker, after we handle a message on the incoming queue and save the message to the database, we’ll send a message out on another queue indicating that we should deliver that message over the WebSocket. We’ll call that new queue socket. Make the following change in workers/index.js:

 const handleIncoming = message =>
   repo
     .create(message)
     .then(record => {
       console.log('Saved ' + JSON.stringify(record));
+      return queue.send('socket', record);
     });

 queue
   .receive('incoming', handleIncoming)

Note the following sequence:

  • We receive a message on the incoming queue;
  • Then, we save the record to the database;
  • And finally, we send another message out on the socket queue.

Note that we haven’t implemented the WebSocket code that sends the message to the client yet; we’ll do that next. So far, we’ve just sent a message to a new queue that the WebSocket code will watch.

Now let’s implement the WebSocket code. In the web folder, create a file socket.js and add the following:

const WebSocketServer = require('websocket').server;

const configureWebSockets = httpServer => {
  const wsServer = new WebSocketServer({ httpServer });
};

module.exports = configureWebSockets;

We create a function configureWebSockets that allows us to pass in a Node httpServer and creates a WebSocketServer from it.

Next, let’s add some boilerplate code to allow a client to establish a WebSocket connection:

 const configureWebSockets = httpServer => {
   const wsServer = new WebSocketServer({ httpServer });
+
+  let connection;
+
+  wsServer.on('request', function(request) {
+    connection = request.accept(null, request.origin);
+    console.log('accepted connection');
+
+    connection.on('close', function() {
+      console.log('closing connection');
+      connection = null;
+    });
+  });
 };

All we do is save the connection in a variable and add a little logging to indicate when we’ve connected and disconnected. Note that our server is only allowing one connection; if a new one comes in, it’ll be overwritten. In a production application you would want to structure your code to handle multiple connections. Some WebSocket libraries will handle multiple connections for you.
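
If you do want to support multiple clients with this same library, a minimal sketch (using the same websocket package APIs as above) could track connections in a Set instead of a single variable:

const connections = new Set();

wsServer.on('request', function(request) {
  const connection = request.accept(null, request.origin);
  connections.add(connection);
  console.log('accepted connection');

  connection.on('close', function() {
    console.log('closing connection');
    connections.delete(connection);
  });
});

// Send each queue message to every connected client.
const broadcast = message => {
  const payload = JSON.stringify(message);
  connections.forEach(connection => connection.sendUTF(payload));
};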

Next, we want to listen on the socket queue we set up before, and send an outgoing message on our WebSocket connection when we get one:

 const WebSocketServer = require('websocket').server;
+const queue = require('../lib/queue');

 const configureWebSockets = httpServer => {
...
   wsServer.on('request', function(request) {
...
   });
+
+  queue
+    .receive('socket', message => {
+      if (!connection) {
+        console.log('no WebSocket connection');
+        return;
+      }
+      connection.sendUTF(JSON.stringify(message));
+    })
+    .catch(console.error);
 }

When a message comes in on the socket queue, we check whether there’s a WebSocket client connection. If there isn’t one, we do nothing; if there is, we send the message out over it.

Now, we just need to call our configureWebSockets function, passing our HTTP server to it. Open web/index.js and add the following:

 const listRouter = require('./list');
+const configureWebSockets = require('./socket');

 const app = express();
...
 const server = http.createServer(app);
+configureWebSockets(server);

By calling our function, which in turn calls new WebSocketServer(), we enable our server to accept requests for WebSocket connections.

Adding WebSockets to the Client

Now we need to update our Expo client to make that WebSocket connection to the backend and accept messages it sends, updating the screen in the process. On the frontend we don’t need to add a dependency to handle WebSockets; the WebSocket API is built in to React Native’s JavaScript runtime.

Open src/MessageList.js and add the following:

 const httpUrl = Platform.select({
   ios: 'http://localhost:3000',
   android: 'http://10.0.2.2:3000',
 });
+const wsUrl = Platform.select({
+  ios: 'ws://localhost:3000',
+  android: 'ws://10.0.2.2:3000',
+});
+
+let socket;
+
+const setUpWebSocket = addMessage => {
+  if (!socket) {
+    socket = new WebSocket(wsUrl);
+    console.log('Attempting Connection...');
+
+    socket.onopen = () => {
+      console.log('Successfully Connected');
+    };
+
+    socket.onclose = event => {
+      console.log('Socket Closed Connection: ', event);
+      socket = null;
+    };
+
+    socket.onerror = error => {
+      console.log('Socket Error: ', error);
+    };
+  }
+
+  socket.onmessage = event => {
+    addMessage(JSON.parse(event.data));
+  };
+};

 const loadInitialData = async setMessages => {

This creates a function setUpWebSocket that ensures our WebSocket is ready to go. If the WebSocket is not already opened, it opens it and hooks up some logging. Whether or not it was already open, we configure the WebSocket to pass any message it receives along to the passed-in addMessage function.

Now, let’s call setUpWebSocket from our component function:

   useEffect(() => {
     loadInitialData(setMessages);
   }, []);

+  useEffect(() => {
+    setUpWebSocket(newMessage => {
+      setMessages([newMessage, ...messages]);
+    });
+  }, [messages]);
+
   return (
     <View style={{ flex: 1 }}>

We call setUpWebSocket in a useEffect hook, passing it a function that adds a new message to the front of the messages state. This effect depends on the messages state.

Because of that dependency, whenever messages changes we create a new addMessage callback that prepends to the updated messages array, and we call setUpWebSocket again with that updated callback. This is why we wrote setUpWebSocket to work whether or not the WebSocket is already established; it will be called multiple times.
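
An alternative worth knowing (not used in this tutorial): a functional state update removes the dependency on messages entirely, so the effect runs only once on mount:

useEffect(() => {
  setUpWebSocket(newMessage => {
    // The callback form of setMessages always sees the latest list,
    // so the effect no longer needs `messages` as a dependency.
    setMessages(current => [newMessage, ...current]);
  });
}, []);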

With this, we’re ready to give our WebSockets a try! Make sure you have both Node services running in different terminals:

$ node web
$ node workers

Then reload our Expo app:

  • In the iOS Simulator, press Command-Control-Z to bring up the developer menu, then tap “Reload JS Bundle”
  • In the Android Emulator, press Command-M to bring up the developer menu, then tap “Reload”

In yet another terminal, send in a new message:

$ curl http://localhost:3000/webhook -d "this is for WebSocketyness"

You should see the message appear in the Expo app right away, without any action needed by the user. We’ve got live updates!
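
One caveat before we move on: our client never tries to reconnect, so if the server restarts, live updates silently stop until the app is reloaded. A minimal sketch of one approach is to extend the onclose handler inside setUpWebSocket with a fixed-delay retry (the three-second delay is arbitrary; a real app would likely want backoff and a retry limit):

socket.onclose = event => {
  console.log('Socket Closed Connection: ', event);
  socket = null;
  // Hypothetical hardening: retry after a short delay so a brief
  // server restart doesn't permanently stop live updates.
  setTimeout(() => setUpWebSocket(addMessage), 3000);
};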

What’s Next?

Now that we’ve proven out that we can get live updates to our app, we should move beyond our simple webhook and get data from real third-party services. In the next part, we’ll set up a webhook to get notifications from GitHub about pull request events.

Kotlin Multiplatform in 2020
https://bignerdranch.com/blog/kotlin-multiplatform-in-2020/ (Tue, 17 Dec 2019)

Kotlin Multiplatform is quickly becoming a solid contender in the multiplatform solution space. This article explains what makes the platform stand out against the competition and what to expect for Kotlin Multiplatform in 2020 and beyond.

This month over 1,700 engineers, entrepreneurs, and business leaders from around the world flew to Copenhagen, Denmark to attend KotlinConf, a two-day conference run by JetBrains, the company behind the Kotlin programming language and IntelliJ IDEA. Attendees came to learn more about the language that has now been adopted by over 2.2 million users and powers 60% of the top 1,000 apps in the Google Play Store.

Some notable themes emerged this year, including several upcoming performance improvements, language stabilizations, and feature improvements to coroutines, Kotlin’s approach to concurrency. The star of the show, however, was clearly Kotlin Multiplatform, with seven sessions dedicated to the subject, including two from engineers at Square and Careem sharing their own experiences with it.

Kotlin Multiplatform is moving from an early-stage, engineering-focused experiment into a solid contender in the multiplatform solution space.

What is Kotlin Multiplatform?

Kotlin Multiplatform is a mechanism for sharing code across multiple platforms. This means you can share common data, state, and business logic across Windows, Linux, macOS, the web, iOS, and Android, as well as any Java Virtual Machine (JVM) based platform not covered by the aforementioned operating systems.

This is not a new concept. In the mobile application space today there are other popular solutions for sharing code between mobile platforms, such as Flutter and React Native, and many others that came before them: Cordova, PhoneGap, Kony, and Titanium, to name a few. Even outside of mobile, this has been attempted before: languages like C#, which compiles to intermediate language for the Common Language Runtime (CLR), and Java, which compiles to bytecode for the Java Virtual Machine (JVM), were designed with portability and sharing in mind.

Kotlin Multiplatform has the same goal as its predecessors but accomplishes it in a different way. Whereas Java, C#, React Native, and many others compile to or target an intermediary language that requires a bridging adapter or virtual machine layer to interpret at runtime, Kotlin Multiplatform code compiles to the same native output as the rest of the code running on the platform. In this way, it reaches platforms that were previously out of reach for a single language. To put it in practical terms: Java needs a JVM, C# needs a CLR, and React Native needs a JavaScript bridge to interpret the logic and invoke the appropriate native UI framework code; Kotlin Multiplatform does not. It compiles down to JVM bytecode, JavaScript, or native machine code as needed, based on the platform target.

The Case for Kotlin Multiplatform

The pitch for Kotlin Multiplatform is put simply by Kotlin’s lead language designer, Andrey Breslav: “Code Sharing, Skill Sharing, plus 100% access to the native platforms.” The idea is that sharing code across all of the platforms your applications support, using a common programming language, leads to fewer bugs, shorter development times, and lower maintenance costs, since your team is maintaining a smaller code base.

Having a common language also means that developer skillsets are heavily transferable between teams.

Answers to Commonly Expressed Concerns

Many of the concerns common to multiplatform solutions, such as the fear of adopting (and then supporting) a new language, degraded runtime performance, or losing the native look and feel of your applications, fall by the wayside with Kotlin Multiplatform.

Team Support

Kotlin is already used by over 53% of professional Android developers today and that growth is accelerating. It’s being used in server-side development too at large tech companies like Intuit, Expedia, and Pivotal.

Kotlin is also easy to learn, with a language syntax that is remarkably similar to other popular languages like Swift, Scala, Groovy, and Java. You may find that your team already has the right skill sets.

Performance

Since Kotlin Multiplatform code compiles to the exact same format as the target platform, it is just as performant as its native counterparts. Your users (and engineers) won’t be able to tell the difference.

Native UI/UX

Kotlin Multiplatform encourages sharing only where it makes sense. As an example, you may wish to share your core business logic and state, but allow each platform to decide exactly how to consume and present that data. This gives your customers a fully native experience while reaping the benefits of code sharing.

Considerations & Risks

The biggest risk to Kotlin Multiplatform is timing and, consequently, the sparseness of third-party library support. Some basic libraries, like HTTP clients and data serialization, exist already, and more are promised for 2020 with the release of Kotlin 1.4, including a DateTime library. But for the moment, it’s likely that you will need to implement some things yourself.

In addition, Kotlin Multiplatform is designed to share code, not to be a drop-in replacement for all of the APIs of each platform you target. So it should be noted that you will need at least some knowledge of each platform you support. This is different from a multiplatform solution like Flutter, which comes with its own UI componentry and is very much a drop-in replacement for the native UI stack.

The Future

In 2020, expect Kotlin Multiplatform adoption to accelerate. While it has not quite reached 1.0 stable, major tech companies are already starting to use it.

At Big Nerd Ranch, we see a lot of potential in it and are excited to see where it goes!

Live Updates With Queues, WebSockets, and Push Notifications. Part 2: React Native Apps with Expo
https://bignerdranch.com/blog/live-updates-with-queues-websockets-and-push-notifications-part-2-react-native-apps-with-expo/ (Tue, 10 Dec 2019)

Live updates can get your users the information they need sooner, and prevent them from operating off of outdated information. To explore the topic, we'll create an app that allows us to receive notifications from services like GitHub, Netlify, and Heroku. In part 2, we'll create a React Native client app using Expo.

In part 1 of our series, we created a Node.js backend for our Notifier app that receives messages via a webhook and sends them over a RabbitMQ queue to a worker process, which then saves them to a MongoDB database. This is a good foundation for our Node backend upon which we’ll be able to add live updates in future posts.

Before we do any more work on the backend, let’s create a React Native client app using Expo so we’ll have a frontend that’s ready for a great live-update experience as well. One of Expo’s features is great cross-platform push notification support, so we’ll be able to benefit from that in part 6 of the series.

If you like, you can download the completed client project for part 2.

Setting Up the Project

If you haven’t built an app with Expo before, this tutorial will walk you through running the app on a virtual device on either Android or iOS. You will need one of the following installed on your development machine:

  • Xcode, for the iOS Simulator (macOS only)
  • Android Studio, for the Android Emulator

Next, install the Expo CLI globally:

$ npm install -g expo-cli

Then create a new project:

$ expo init notifier-client

You’ll be prompted to answer a few questions; choose the following answers:

  • Choose a template: blank
  • Name: Notifier
  • Slug: notifier-client
  • Use Yarn to install dependencies? Y

After the project setup completes, go into the project directory and add a few more dependencies:

$ cd notifier-client
$ yarn add axios react-native-elements

Here’s what they’re for:

  • axios is a popular HTTP client.
  • react-native-elements is a UI library that will make our super-simple app look a bit nicer.

Next, let’s start the Expo development server:

$ yarn start

This should open Expo’s dev server in your browser. It looks something like this:

[Image: the Expo dev server (Metro Bundler) open in the browser]

If you want to run on Android, make sure you’ve followed Expo’s instructions to start an Android virtual device. If you want to run on iOS, Expo will start the virtual device for you.

Now, in the browser window click either “Run on Android device/emulator” or “Run on iOS Simulator.” In the appropriate virtual device you should see a build progress bar and, when it completes, the message “Open up App.js to start working on your app!”.

[Image: the default Expo app screen on Android and iOS]

Let’s do that thing they just said!

Loading Data From the Server

Replace the contents of App.js with the following:

import React, { Fragment } from 'react';
import { SafeAreaView, StatusBar } from 'react-native';
import { ThemeProvider } from 'react-native-elements';
import MainScreen from './src/MainScreen';

export default function App() {
  return (
    <ThemeProvider>
      <Fragment>
        <StatusBar barStyle="dark-content" />
        <SafeAreaView style={{ flex: 1 }}>
          <MainScreen />
        </SafeAreaView>
      </Fragment>
    </ThemeProvider>
  );
}

Note that at this point the React Native app won’t build; it will build again once we finish the next few steps.

The new App.js does the following:

  • Hooks up React Native Elements’ ThemeProvider so we can use Elements.
  • Sets up a top status bar.
  • Confines our content to the safe area of the screen, so we don’t overlap hardware features such as the iPhone X notch.
  • Delegates the rest of the UI to a MainScreen component we haven’t created yet.

Now let’s create that MainScreen component. Create a src folder, then a MainScreen.js inside it, and add the following contents:

import React from 'react';
import { View } from 'react-native';
import MessageList from './MessageList';

export default function MainScreen() {
  return (
    <View style={{ flex: 1 }}>
      <MessageList />
    </View>
  );
}

This file doesn’t do much yet; we’ll add more to it in a future post. Right now it just displays a MessageList we haven’t created yet. On to that component!

Create src/MessageList.js and add the following:

import React, { useState, useEffect } from 'react';
import { FlatList, Platform, View } from 'react-native';
import { ListItem } from 'react-native-elements';
import axios from 'axios';

const httpUrl = Platform.select({
  ios: 'http://localhost:3000',
  android: 'http://10.0.2.2:3000',
});

const loadInitialData = async setMessages => {
  const messages = await axios.get(`${httpUrl}/list`);
  setMessages(messages.data);
};

export default function MessageList() {
  const [messages, setMessages] = useState([]);

  useEffect(() => {
    loadInitialData(setMessages);
  }, []);

  return (
    <View style={{ flex: 1 }}>
      <FlatList
        data={messages}
        keyExtractor={item => item._id}
        renderItem={({ item }) => (
          <ListItem
            title={item.text}
            bottomDivider
          />
        )}
      />
    </View>
  );
}

Here’s what’s going on here:

  • In our component function, we set up a messages state item.
  • We set up an effect to call a loadInitialData function the first time the component mounts. We pass it the setMessages function so it can update the state.
  • loadInitialData makes a web service request and stores the response data in state. The way to make HTTP requests to your local development machine differs between the iOS Simulator (http://localhost) and the Android Emulator (http://10.0.2.2), so we use React Native’s Platform.select() function to return the appropriate value for the device we’re on.
  • We render a FlatList which is React Native’s performant scrollable list. The list contains React Native Elements ListItems. For now we just display the text of the message.

Run the following command in the Node app folder to make sure our notifier Node app from part 1 is up:

$ node web
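
If nothing shows up in the app, you can sanity-check the endpoint directly; the /list route from part 1 should return a JSON array of the saved messages:

$ curl http://localhost:3000/list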

Reload our Expo app on the virtual device:

  • In the Android Emulator, press Command-M to bring up the developer menu, then tap “Reload”.
  • In the iOS Simulator, press Command-Control-Z to bring up the developer menu, then tap “Reload JS Bundle”.

When the app reloads, you should see a list of the test messages you entered on your server:

[Image: the list of test messages displayed in the app]

What’s Next?

With this, the basics of our client app are in place, and we’re set to begin adding live updates across our stack. In the next part we’ll introduce WebSockets that allow us to push updates to the client.
