Live Updates With Queues, WebSockets, and Push Notifications. Part 6: Push Notifications with Expo

Live updates can get your users information they need sooner, and prevent them from operating off of outdated information. To explore the topic, we'll create an app that allows us to receive notifications from services like GitHub, Netlify, and Heroku. In part 6, we'll add push notifications to our Expo app on iOS and Android.

In this series, we've managed to build a mobile app that will show us live notifications of events on services like GitHub using WebSockets. It works like this:

diagram of worker pushing data via a WebSocket

This is pretty great when we’re running the app. But what about when it’s in the background? In that case, we can use device push notifications to alert the user to events. Here’s the complete architectural diagram of the system, with push notifications added:

Push notifications are available on the web as well (but, as of the time of this writing, not on Safari). However, users are frequently less willing to enable push notifications in web apps than in native mobile apps. Because of this, we chose to build our client app in React Native using Expo. Expo has great support for push notifications across both iOS and Android—let’s give them a try!

If you like, you can download the completed server project and the completed client project for the series.

Asking for Push Permission

Before we can send a push notification to a user, we need to request permission from them. In our Expo app, in src/MainScreen.js, add a new PushPermissionRequester component:

 import React from 'react';
 import { View } from 'react-native';
+import PushPermissionRequester from './PushPermissionRequester';
 import MessageList from './MessageList';

 export default function MainScreen() {
   return (
     <View style={{ flex: 1 }}>
+      <PushPermissionRequester />
       <MessageList />
     </View>
   );
 }

Now let’s implement PushPermissionRequester. Create a src/PushPermissionRequester.js file and enter the following:

import React, { useState } from 'react';
import { View } from 'react-native';
import { Notifications } from 'expo';
import * as Permissions from 'expo-permissions';
import { Button, Input } from 'react-native-elements';

const askForPushPermission = setToken => async () => {

};

export default function PushPermissionRequester() {
  const [token, setToken] = useState('(token not requested yet)');

  return (
    <View>
      <Input value={token} />
      <Button
        title="Ask Me for Push Permissions"
        onPress={askForPushPermission(setToken)}
      />
    </View>
  );
}

This component tracks a push notification token that can be requested, then is displayed afterward. Now let’s fill in askForPushPermission to request it:

const askForPushPermission = setToken => async () => {
  const { status: existingStatus } = await Permissions.getAsync(
    Permissions.NOTIFICATIONS,
  );
  let finalStatus = existingStatus;

  if (existingStatus !== 'granted') {
    const { status } = await Permissions.askAsync(Permissions.NOTIFICATIONS);
    finalStatus = status;
  }

  console.log('push notification status ', finalStatus);
  if (finalStatus !== 'granted') {
    setToken(`(token ${finalStatus})`);
  }

  let token = await Notifications.getExpoPushTokenAsync();
  setToken(token);
};

This is boilerplate code from the Expo Push Notification docs; what’s happening is:

  • We retrieve the existing permission status for push notifications.
  • If permission is not yet granted, we attempt to ask for permission.
  • If permission is still not granted at that point, we display the status we got. We then request the token and set it in the component state to display it.

Reload the Expo app on your virtual device, and tap on “Ask Me for Push Permissions.” You should see the message “(token undetermined),” and a yellow box error at the bottom of the screen. The error says “Error: Must be on a physical device to get an Expo Push Token.”

Time to take this app to your real phone!

Running on Device

On Android, there are a few different ways to get the app running on your physical device. On iOS things are a bit more locked down. Let’s look at the approach that will work for both iOS and Android.

On your phone, search for “Expo” in the App Store (iOS) or the Google Play Store (Android). This is a client app from Expo that allows you to run your app before going through the whole app publishing process, which is a nice speed boost. Download the Expo app. If you haven’t already created a free Expo account, create one. Then log in to the Expo app.

Now we need to get our React Native application published to Expo so we can load its JavaScript assets into the Expo app. Open the Metro Bundler browser tab that Expo opens when you run it. In the sidebar, click “Publish or republish project…”:

Choose a unique “URL Slug” for your app, then click “Publish project.”

Expo will take a minute or two to bundle up your JavaScript and upload it. Ultimately you should get a box at the bottom-right of the browser window saying “Successfully published to…”

Reopen the Expo app on your phone, go to the Profile tab, and in the “Published Projects” list you should see your app. Tap on it, and it should open and display the initial data from Heroku.

Getting and Testing a Token

Now, tap “Ask Me for Push Permissions” again, and give permission. This time, on a physical device, it should work!

You should see a token that looks like ExponentPushToken[…], with a string of letters and numbers in between the square brackets. This is a token that uniquely identifies your app running in Expo on your device. You can use this to hit Expo’s API to send a push notification.

Select the whole token, copy it, and transfer it to your development computer somehow. Emailing yourself is always an option if nothing else!

Before we code anything, we can test this push notification out through Expo’s Push Notifications Tool. Make sure Expo is in the background on your phone. Then, on your development machine, go to the Push Notifications Tool.

Paste your full token including the string ExponentPushToken into the “Expo Push Token” field. For “Message Title,” type something.

Scroll to the bottom of the page and click “Send a notification”. A push notification should appear on your phone from the Expo app, displaying the title you entered.

Feel free to play around with other fields in the push notification tool as well.
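
If you prefer the command line, you can also send a test notification by POSTing directly to Expo's push API with curl. Here's a minimal sketch; substitute your real token, and double-check the supported fields against Expo's push API documentation:

$ curl https://exp.host/--/api/v2/push/send \
    -H "Content-Type: application/json" \
    -d '{"to": "ExponentPushToken[...]", "title": "Hello from curl", "body": "Sent straight to the push API"}'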

Adding an Expo Module

Now that we have a token, we can provide it to our backend. In a production application, you would set up a way for each user to send that token up to the server and store it with their user account. Since user accounts aren’t the focus of our tutorial, we’re just going to set that token via an environment variable instead.
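
For example, a production app might expose a small authenticated endpoint that the mobile client calls with its token once permission is granted. Here's a rough sketch of what that could look like; the /push-tokens path, the saveTokenForUser helper, and the req.user property (set by some authentication middleware) are all hypothetical and not part of our tutorial app:

// web/pushTokens.js (hypothetical)
const express = require('express');

// saveTokenForUser is a stand-in for however your app persists user data
module.exports = saveTokenForUser => {
  const router = express.Router();

  router.post('/push-tokens', express.json(), async (req, res) => {
    const { token } = req.body;
    // req.user assumes an auth middleware has already identified the user
    await saveTokenForUser(req.user, token);
    res.status(204).end();
  });

  return router;
};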

In our node app, in .env.sample, add the following line:

 CLOUDAMQP_URL=fake_cloudamqp_url
+EXPO_PUSH_TOKEN=fake_expo_push_token
 MONGODB_URI=fake_mongodb_uri

In .env add your token, filling in the real value. This is the value we wanted to keep out of our git repo; you don’t want me to find your push token and send spam to you!

 CLOUDAMQP_URL=amqp://localhost
+EXPO_PUSH_TOKEN=ExponentPushToken[...]
 MONGODB_URI=mongodb://localhost:27017/notifier

Next, add Expo’s SDK as a dependency to your Node app:

$ yarn add expo-server-sdk

As we did with MongoDB and RabbitMQ, let’s wrap Expo’s SDK in a module of our own, to hide it from the rest of our app. Create a lib/expo.js file and add the following:

const Expo = require('expo-server-sdk').default;

const token = process.env.EXPO_PUSH_TOKEN;

const expo = new Expo();

async function push({ text }) {
  if (!Expo.isExpoPushToken(token)) {
    console.error(`Push token ${token} is not a valid Expo push token`);
    return;
  }

  const messages = [
    {
      to: token,
      title: text,
    },
  ];

  console.log('sending to expo push', messages);
  const chunks = expo.chunkPushNotifications(messages);

  for (let chunk of chunks) {
    try {
      let ticketChunk = await expo.sendPushNotificationsAsync(chunk);
      console.log(ticketChunk);
    } catch (error) {
      console.error(error);
    }
  }
}

module.exports = { push };

We export a push function that our app can use to send a push notification. We only use the text field of the message. First, we get the Expo push token from the environment variable and confirm it’s valid. Then we construct a message object with the structure Expo’s Push Notification SDK expects. The SDK is set up to allow sending push notifications in batches, which is a bit overkill in our case, but we work with it. We log the success or error message just in case.
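
Once the module is in place, sending a notification from anywhere in the app is a one-liner. For example (adjusting the require path to wherever you call it from):

const expo = require('./lib/expo');

// Sends a push notification with the given text as its title;
// errors are logged inside push() rather than thrown.
expo.push({ text: 'PR opened for repo notifier-test-repo: Update README.md' });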

Setting Up a Worker

Now let’s send out a push notification from our worker. In an earlier part we mentioned that you could conceivably separate different webhook endpoints into different microservices or lambda functions for scalability. You could do the same thing with workers. But since we’re hosting on Heroku, which will give us one web dyno and one worker dyno for free, we’ll keep our worker code in a single worker process that is watching multiple queues.

How should we organize this one worker service with multiple concerns? Currently our worker is very small, so adding code to monitor a second queue to the same file wouldn’t clutter it up much. But for the sake of illustrating how to separate concerns, let’s refactor our worker into separate modules.

Create a workers/incoming.js file, and copy and paste the require and handleIncoming code from workers/index.js into it. Then export the handler:

const queue = require('../lib/queue');
const repo = require('../lib/repo');

const handleIncoming = message =>
  repo
    .create(message)
    .then(record => {
      console.log('Saved ' + JSON.stringify(record));
      return queue.send('socket', record);
    });

module.exports = handleIncoming;

Update workers/index.js to import that function instead of duplicating it:

 if (process.env.NODE_ENV !== 'production') {
   require('dotenv').config();
 }

 const queue = require('../lib/queue');
-const repo = require('../lib/repo');
-
-const handleIncoming = message =>
-  repo
-    .create(message)
-    .then(record => {
-      console.log('Saved ' + JSON.stringify(record));
-      return queue.send('socket', record);
-    });
+const handleIncoming = require('./incoming');

 queue
   .receive('incoming', handleIncoming)

Now, where should we call our push function? In this case, we could probably do it directly in handleIncoming. But when you’re using a queue-based architecture it can be valuable to separate units of work into small pieces; that way if one part fails it can be retried without retrying the entire process. For example, if we can’t reach Expo’s push notification service, we don’t want a retry to inadvertently insert a duplicate record into our database.

So instead, let’s create a new push queue that will receive messages each time we have a push notification to send. In workers/incoming.js, just like we send a message to the socket queue, we’ll send one to the push queue as well:

const handleIncoming = message =>
   repo
     .create(message)
     .then(record => {
       console.log('Saved ' + JSON.stringify(record));
-      return queue.send('socket', record);
+      return Promise.all([
+        queue.send('socket', record),
+        queue.send('push', record),
+      ]);
     });

Note that we wrap our two queue.send calls in a Promise.all() and return the result; that way if either of the sends fails, the rejection will be propagated up and eventually logged with console.error.

Next, add a new workers/push.js file with the following contents:

const expo = require('../lib/expo');

const handlePush = message => {
  console.log('handling push', message);
  return expo.push(message);
};

module.exports = handlePush;

An extremely simple worker, this just forwards the received message along to our Expo module. Connect it in workers/index.js:

 const queue = require('../lib/queue');
 const handleIncoming = require('./incoming');
+const handlePush = require('./push');

 queue
   .receive('incoming', handleIncoming)
   .catch(console.error);
+queue
+  .receive('push', handlePush)
+  .catch(console.error);

With this, we should be set up to send push notifications. Run your two node processes locally:

$ node web
$ node workers

Send a test notification:

$ curl http://localhost:3000/webhooks/test -d "this should be pushed"

You should see the push notification show up on your phone. Note that although your Expo app is pointing at your production server, it still receives the push notification triggered by your local server. This is because the notification is addressed with your device's Expo push token and delivered through Expo's push service; any server that holds that token can send to your device, regardless of which backend the app is connected to.

Going to Production

Our final step is to get push notifications working in production. Whereas our previous Heroku environment variables were provided for us by add-ons, we need to set our EXPO_PUSH_TOKEN variable manually. There are two ways we can do this:

  • If you’d like to use the CLI, run heroku config:set "EXPO_PUSH_TOKEN=ExponentPushToken[...]" (entering your full token as usual)
  • If you’d like to use Heroku’s web dashboard, pull up your app, then go to “Settings”, then click “Reveal Config Vars”. In the row that has the “Add” button, fill in EXPO_PUSH_TOKEN for the KEY and your token for the VALUE, then click Add.

Commit your latest changes then push them to Heroku:

$ git add .
$ git commit -m "updated for push tokens"
$ git push heroku master

When your app finishes deploying, try sending it a webhook, filling in your app’s URL instead of mine:

$ curl https://murmuring-garden-42327.herokuapp.com/webhooks/test -d "push from production"

You should receive a push notification on your phone.

You can also try toggling your GitHub PR to see that your other webhooks also deliver push notifications now too.

Where We’ve Been

With that, our app is complete! Let’s review what we’ve built one last time:

We’ve been able to hook up to live updates coming from services like GitHub, Heroku, and Netlify. We set up a queue-based architecture so that, on real systems with far more load than this one, each piece of the process can run performantly. We push data to running apps over WebSockets, and to apps in the background using push notifications.

Adding live updates to your mobile or web applications using approaches such as these can be a big boost to your app’s usefulness to your users. If you’re a developer, give these technologies a try. And if Big Nerd Ranch could help train you in these technologies or help build the foundation of a new live-updating application for you, let us know!

Live Updates With Queues, WebSockets, and Push Notifications. Part 5: Deploying to Heroku

Live updates can get your users information they need sooner, and prevent them from operating off of outdated information. To explore the topic, we'll create an app that allows us to receive notifications from services like GitHub, Netlify, and Heroku. In part 5, we'll deploy our app to Heroku.

In this series we’ve built a React Native and Node app that receives and passes along notifications from services like GitHub and Netlify. It works great locally, but we can’t keep it running on our machine forever, and every time we restart ngrok the URL changes. We need to get our app deployed somewhere permanent.

If we weren’t using WebSockets, then functions-as-a-service would be a great option for deployment. But since WebSockets are stateful, you need to run a service like Amazon API Gateway in front of the functions to provide WebSocket statefulness, and setting that up can be tricky. Instead, we’ll deploy our app on Heroku, an easy hosting platform that allows us to continue to use WebSockets in the normal way.

If you like, you can download the completed server project and the completed client project for part 5.

Heroku

If you don’t already have a Heroku account, create one for free. (You may need to add a credit card, but it won’t be charged.) Then install and log in to the Heroku CLI.

Go into our node app’s directory in the terminal. If you aren’t already tracking your app in git for version control, initialize a git repo now:

$ git init .
$ echo node_modules > .gitignore

Heroku’s main functionality can be accessed either through the web interface or through the CLI. In this post we’ll be using both, depending on which is easiest for any given step.

To begin, create a new Heroku app for our backend using the CLI:

$ heroku create

This will create a new app and assign it a random name—in my case, murmuring-garden-42327. It will also add a git remote named heroku to your repo alongside any other remotes you may have. You can see this by running the following command:

$ git remote -v
heroku  https://git.heroku.com/murmuring-garden-42327.git (fetch)
heroku  https://git.heroku.com/murmuring-garden-42327.git (push)

We aren’t quite ready to deploy our app to Heroku yet, but we can go ahead and set up our database and queue services. We’ll do this step in the Heroku dashboard. Go to the dashboard, then click on your new app, then the “Resources” tab.

Under Add-ons, search for “mongo”, then click “mLab MongoDB”.

A modal will appear allowing you to choose a plan. The “Sandbox – Free” plan will work fine for us. Click “Provision.”

Next, search for “cloud,” then click “CloudAMQP.”

The default plan works here too: “Little Lemur – Free,” so click “Provision” again.

This has set up our database and queue server. How can we access them? The services provide URLs to our app via environment variables. To see them, click “Settings,” then “Reveal Config Vars.”

From the CLI, you can run heroku config to show the environment variables.

Here’s what they’re for:

  • CLOUDAMQP_APIKEY: we won’t need this for our tutorial app
  • CLOUDAMQP_URL: our RabbitMQ access URL
  • MONGODB_URI: our MongoDB access URL

Using Environment Variables

We need to update our application code to use these environment variables to access the backing services, but this raises a question: how can we set up analogous environment variables in our local environment? The dotenv library is a popular approach: it allows us to set up variables in a .env file in our app. Let’s refactor our app to use dotenv.

First, add dotenv as a dependency:

$ yarn add dotenv

Create two files, .env and .env.sample. It’s good practice not to commit your .env file to version control; so far our connection strings don’t contain any secrets, but later we’ll add a variable that does. Committing a .env.sample file with example data, on the other hand, helps other users of your app see which environment variables it needs. If you’re using git for version control, make sure .env is in your .gitignore file so it won’t be committed.

Add the following to .env.sample:

CLOUDAMQP_URL=fake_cloudamqp_url
MONGODB_URI=fake_mongodb_uri

This just documents for other users that these are the values needed.

Now let’s add the real values we’re using to .env:

CLOUDAMQP_URL=amqp://localhost
MONGODB_URI=mongodb://localhost:27017/notifier

Note that the name CLOUDAMQP_URL is a bit misleading because we aren’t using CloudAMQP locally, just a general RabbitMQ server. But since that’s the name of the environment variable CloudAMQP sets up for us on Heroku, it’ll be easiest for us to use the same one locally. And since CloudAMQP is giving us a free queue server, we shouldn’t begrudge them a little marketing!

The values we set in the .env file are the values from our lib/queue.js and lib/repo.js files respectively. Let’s replace the hard-coded values in those files with the environment variables. In lib/queue.js:

-const queueUrl = 'amqp://localhost';
+const queueUrl = process.env.CLOUDAMQP_URL;

And in lib/repo.js:

-const dbUrl = 'mongodb://localhost:27017/notifier';
+const dbUrl = process.env.MONGODB_URI;

Now, how can we load these environment variables? Add the following to the very top of both web/index.js and workers/index.js, even above any require() calls:

if (process.env.NODE_ENV !== 'production') {
  require('dotenv').config();
}

When we are not in the production environment, this will load dotenv and instruct it to load the configuration. When we are in the production environment, the environment variables will be provided by Heroku automatically, and we won’t have a .env file, so we don’t need dotenv to run.

To make sure we haven’t broken our app for running locally, stop any node processes you have running, then start node web and node workers, run the Expo app, and post a test webhook:

$ curl http://localhost:3000/webhooks/test -d "this is with envvars"

The message should show up in Expo as usual.

Configuring and Deploying

Heroku is smart enough to automatically detect that we have a node app and provision an appropriate environment. But we need to tell Heroku what processes to run. We do this by creating a Procfile at the root of our app and adding the following:

web: node web
worker: node workers

This tells Heroku that it should run two processes, web and worker, and tells it the command to run for each.

Now we’re finally ready to deploy. The simplest way to do this is via a git push. Make sure all your changes are committed to git:

$ git add .
$ git commit -m "preparing for heroku deploy"

Then push:

$ git push heroku master

This pushes our local master branch to the master branch on the heroku remote. When Heroku sees changes to its master branch, it triggers a deployment. We’ll be able to see the deployment process as it runs in the output of the git push command.

Deployment will take a minute or two due to provisioning the server and downloading dependencies. In the end, we’ll get a message like:

remote:        Released v7
remote:        https://murmuring-garden-42327.herokuapp.com/ deployed to Heroku

We have one more step to do. Heroku starts the process named web by default, but any other processes we need to start ourselves. In our case, that’s the worker process. We can do this in a few different ways:

  • If you want to use the CLI, run heroku ps:scale worker=1. This scales the process named worker to run on a single “dyno” (kind of like the Heroku equivalent of a server)
  • If you want to use the web dashboard instead, go to your app, then to “Resources.” Next to the worker row, click the pencil icon to edit it, then set the slider to on, and click Confirm.

Testing

Now let’s update our Expo client app to point to our production servers. We could set up environment variables there as well, but for the sake of this tutorial let’s just change the URLs by hand. Make the following changes in MessageList.js, putting in your Heroku app name in place of mine:

 import React, { useState, useEffect, useCallback } from 'react';
-import { FlatList, Linking, Platform, View } from 'react-native';
+import { FlatList, Linking, View } from 'react-native';
 import { ListItem } from 'react-native-elements';
...
-const httpUrl = Platform.select({
-  ios: 'http://localhost:3000',
-  android: 'http://10.0.2.2:3000',
-});
-const wsUrl = Platform.select({
-  ios: 'ws://localhost:3000',
-  android: 'ws://10.0.2.2:3000',
-});
+const httpUrl = 'https://murmuring-garden-42327.herokuapp.com';
+const wsUrl = 'wss://murmuring-garden-42327.herokuapp.com';

Note that the WebSocket URL uses the wss protocol instead of ws; this is the secure protocol, which Heroku makes available for us.

Reload your Expo app. It should start out blank because our production server doesn’t have any data in it yet. Let’s send a test webhook, again substituting your app’s name for mine:

$ curl https://murmuring-garden-42327.herokuapp.com/webhooks/test -d "this is heroku"

You should see your message show up. We’ve got a real production server!

Next, let’s set up a GitHub webhook pointing to our Heroku server. In the testing GitHub repo you created, go to Settings > Webhooks. Add a new webhook and leave the existing one unchanged; that way you can continue receiving events on your development server as well.

  • Payload URL: your Heroku app URL, with /webhooks/github appended.
  • Content type: change this to application/json
  • Secret: leave blank
  • SSL verification: leave as “Enable SSL verification”
  • Which events would you like to trigger this webhook? Choose “Let me select individual events”

  • Scroll down and uncheck “Pushes” and anything else if it’s checked by default, and check “Pull requests.”
  • Active: leave this checked

Now head back to your test PR and toggle it open and closed a few times. You should see the messages show up in your Expo app.

Congratulations — now you have a webhooks-and-WebSockets app running in production!

Heroku Webhooks

Now that we have a Heroku app running, maybe we can set up webhooks for Heroku deployments as well!

Well, there’s one problem with that: it can be hard for an app that is being deployed to report on its deployment.

Surprisingly, if you set up a webhook for your Node app, you will get a message that the build started. It’s able to do this because Heroku leaves the existing app running until the build completes, then swaps in the new version. You won’t get a message over the WebSocket that the build completed, however; by that time the app has been restarted and your WebSocket connection has been lost. The success message has still been stored in the database, though, so if you reload the Expo app it will appear.

With that caveat in place — or if you have another Heroku app that you want to set up notifications for — here are a few pointers for how to do that.

To configure webhooks, open your site in the Heroku dashboard, then click “More > View webhooks.” Click “Create Webhook.” Choose the api:build event type: that will allow you to receive webhook events when builds both start and complete.

The webhook route itself should be very similar to the GitHub one. The following code can be used to construct a message from the request body:

const {
  data: {
    app: { name },
    status,
  },
} = req.body;

const message = {
  text: `Build ${status} for app ${name}`,
  url: null,
};

Note that the Heroku webhook doesn’t appear to send the URL of your app; if you want it to be clickable, you would need to use the Heroku Platform API to retrieve that info via another call.
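
Putting those pieces together, a full route for the Heroku webhook might look roughly like the sketch below, following the same shape as web/webhooks/github.js; the web/webhooks/heroku.js file name and the /heroku path are our own choices rather than anything Heroku requires:

// web/webhooks/heroku.js
const queue = require('../../lib/queue');

const webhookRoute = (req, res) => {
  const {
    data: {
      app: { name },
      status,
    },
  } = req.body;

  const message = {
    text: `Build ${status} for app ${name}`,
    url: null, // the payload doesn't include a convenient app URL
  };

  queue
    .send('incoming', message)
    .then(() => {
      res.end('Received ' + JSON.stringify(message));
    })
    .catch(e => {
      console.error(e);
      res.status(500);
      res.end(e.message);
    });
};

module.exports = webhookRoute;

As with the other webhooks, you would register it in web/webhooks/index.js, for example with router.post('/heroku', express.json(), herokuRoute);.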

What’s Next?

Now that we’ve gotten our application deployed to production, there’s one more piece we can add to make a fully-featured mobile app: push notifications to alert us in the background. To get push notifications, we’ll need to deploy our Expo app to a real hardware device.

Live Updates With Queues, WebSockets, and Push Notifications. Part 4: Webhooks

Live updates can get your users information they need sooner, and prevent them from operating off of outdated information. To explore the topic, we'll create an app that allows us to receive notifications from services like GitHub, Netlify, and Heroku. In part 4, we'll write code to receive webhooks from GitHub.

So far in this series we’ve set up a React Native frontend and Node backend to receive notifications from external services and deliver live updates to our client app. But we haven’t done any real service integrations yet. Let’s fix that! Our app can be used to notify us of anything; in this post we’ll hook it up to GitHub. We’ll also share some tips in case you want to hook it up to Netlify. In a future post, after we’ve deployed the backend to run on Heroku, we’ll provide tips for hooking it up to Heroku as well.

If you like, you can download the completed server project and the completed client project for part 4.

Setup

Before we create any of these webhook integrations, let’s refactor how our webhook code is set up to make it easy to add additional integrations. We’ll keep our existing “test” webhook for easy experimentation; we’ll just add webhooks for real services alongside it.

We could set up multiple webhook endpoints in a few different ways. If we were worried about having too much traffic for one application server to handle we could run each webhook as a separate microservice so that they could be scaled independently. Alternatively, we could run each webhook as a separate function on a function-as-a-service platform like AWS Lambda. Each microservice or function would need to have access to send messages to the same queue, but other than that they could be totally independent.

In our case, we’re going to deploy our app on Heroku. That platform only allows us to expose a single service to HTTP traffic, so let’s make each webhook a separate route within the same Node server.

Create a web/webhooks folder. Move web/webhook.js to web/webhooks/test.js. Make the following changes so that it only exports the route and no longer sets up a router:

-const express = require('express');
-const bodyParser = require('body-parser');
-const queue = require('../lib/queue');
+const queue = require('../../lib/queue');

 const webhookRoute = (req, res) => {
...
 };

-const router = express.Router();
-router.post('/', bodyParser.text({ type: '*/*' }), webhookRoute);
-
-module.exports = router;
+module.exports = webhookRoute;

We’ll define the router in a new web/webhooks/index.js file instead. Create it and add the following:

const express = require('express');
const bodyParser = require('body-parser');
const testRoute = require('./test');

const router = express.Router();
router.post('/test', bodyParser.text({ type: '*/*' }), testRoute);

module.exports = router;

Now we just need to make a tiny change to web/index.js to account for the fact that we’ve pluralized “webhooks”:

 const express = require('express');
-const webhookRouter = require('./webhook');
+const webhookRouter = require('./webhooks');
 const listRouter = require('./list');
...
 app.use('/list', listRouter);
-app.use('/webhook', webhookRouter);
+app.use('/webhooks', webhookRouter);

 const server = http.createServer(app);

This moves our webhook endpoint from /webhook to /webhooks/test. Now any future webhooks we add can be at other paths under /webhooks/.

If your node web process is running, stop and restart it. Make sure node workers is running as well. You’ll then need to reload your Expo app to re-establish the WebSocket connection.

Now you can send a message to the new path and confirm our test webhook still works:

$ curl http://localhost:3000/webhooks/test -d "this is the new endpoint"

That message should show up in the Expo app as usual.

Making Our Local Server Accessible

We need to do another preparatory step as well. Because we’ve been sending webhooks from our local machine, we’ve been able to connect to localhost. But external services don’t have access to our localhost. One way to get around this problem is ngrok, a great free tool to give you a publicly-accessible URL to your local development machine. Create an ngrok account if you don’t already have one, then sign in.

Install ngrok by following the instructions on the dashboard to download it, or, if you’re on a Mac and use Homebrew, you can run brew cask install ngrok. Provide ngrok with your auth token as instructed on the ngrok web dashboard.

Now you can open a public tunnel to your local server. With node web running, in another terminal run:

$ ngrok http 3000

You should see output like the following:

In the output, look for the lines that start with “Forwarding” – these show the .ngrok.io subdomain that has been temporarily set up to access your service. Note that there is an HTTP and HTTPS one; you may as well use the HTTPS one.

To confirm it works, send a POST to your test webhook using the ngrok URL instead of localhost. Be sure to fill in your domain name instead of the sample one I’m using here:

$ curl https://abcd1234.ngrok.io/webhooks/test -d "this is via ngrok"

The message should appear in the client as usual.

Building the GitHub Webhook

Now that we’ve got a subdomain that can be accessed from third-party services, we’re ready to build out the webhook endpoint for GitHub to hit. Create a web/webhooks/github.js file and add the following:

const queue = require('../../lib/queue');

const webhookRoute = (req, res) => {
  console.log(JSON.stringify(req.body));

  const {
    repository: { name: repoName },
    pull_request: { title: prTitle, html_url: prUrl },
    action,
  } = req.body;

  const message = {
    text: `PR ${action} for repo ${repoName}: ${prTitle}`,
    url: prUrl,
  };

  console.log(message);
  queue
    .send('incoming', message)
    .then(() => {
      res.end('Received ' + JSON.stringify(message));
    })
    .catch(e => {
      console.error(e);
      res.status(500);
      res.end(e.message);
    });
};

module.exports = webhookRoute;

In our route, we do a few things:

  • We log out the webhook request body we received as a JSON string, in case it’s useful.
  • We pull the pertinent fields out of the request body: the repository name, pull request title and URL, and the action that was taken (opened, closed, etc.)
  • We construct a message object in the standard format our app uses: with a text field describing it and a related url the user can visit.
  • As in our test webhook, we send this message to our incoming queue to be processed.

Connect this new route in web/webhooks/index.js:

 const testRoute = require('./test');
+const githubRoute = require('./github');

 const router = express.Router();
 router.post('/test', bodyParser.text({ type: '*/*' }), testRoute);
+router.post('/github', express.json(), githubRoute);

 module.exports = router;

Note that in this case we aren’t using the bodyParser.text() middleware, but instead Express’s built-in express.json() middleware. This is because we’ll be receiving JSON data instead of plain text.

Restart node web to pick up these changes. You don’t need to restart ngrok.

Testing the Integration

Now let’s create a new repo to use for testing. Go to github.com and create a new repo; you could call it something like notifier-test-repo. We don’t care about the contents of this repo; we just need to be able to open PRs. So choose the option to “Initialize this repository with a README”.

When the repo is created, go to Settings > Webhooks, then click “Add webhook”. Choose the following options:

  • Payload URL: your ngrok domain, with /webhooks/github appended.
  • Content type: change this to application/json
  • Secret: leave this blank. We aren’t using it for this tutorial, but you can use this field to confirm your webhook traffic is coming from a trusted source
  • SSL verification: leave as “Enable SSL verification”
  • Which events would you like to trigger this webhook? Choose “Let me select individual events”

  • Scroll down and uncheck “Pushes” and anything else if it’s checked by default, and check “Pull requests”.
  • Active: leave this checked

Note that your ngrok URL will change every time you restart ngrok. You will need to update any testing webhook configuration in GitHub and other services to continue receiving webhooks.

Now we just need to create a pull request to test out this webhook. The easiest way is to click the edit icon at the top right of our readme on GitHub’s site. Add some text to the readme, then at the bottom choose “Create a new branch for this commit and start a pull request,” and click “Commit changes,” then click “Create pull request.”

In your client app you should see a new message: “PR opened for repo notifier-test-repo: Update README.md”.

If you want to see more messages, or if something went wrong and you need to troubleshoot, you can repeatedly click “Close pull request” then “Reopen pull request;” each one will send a new event to your webhook.

Our test webhook didn’t pass along any URLs. Now that we have messages from GitHub with URLs attached, let’s update our client app to allow tapping on an item to visit its URL. Open src/MessageList.js and make the following change:

 import React, { useState, useEffect, useCallback } from 'react';
-import { FlatList, Platform, View } from 'react-native';
+import { FlatList, Linking, Platform, View } from 'react-native';
 import { ListItem } from 'react-native-elements';
...
       <FlatList
         data={messages}
         keyExtractor={item => item._id}
         renderItem={({ item }) => (
           <ListItem
             title={item.text}
             bottomDivider
+            onPress={() => item.url && Linking.openURL(item.url)}
           />
         )}
       />

Reload the client app, tap on one of the GitHub notifications, and you’ll be taken to the PR in your browser. Pretty nice!

Heroku and Netlify

Now we’ve got a working GitHub webhook integration. We’ll wait a bit to set up the webhook integration with Heroku; first we’ll deploy our app to Heroku. That way we’ll be sure we have a Heroku app to receive webhooks for!

Netlify is another deployment service with webhook support; it’s extremely popular for frontend apps. We won’t walk through setting up Netlify webhooks in detail, but here are a few pointers if you use that service and would like to try integrating.

To configure webhooks, open your site in the Netlify dashboard, then click Settings > Build & deploy > Deploy notifications. Click Add notification > Outgoing webhook. Netlify requires you to set up a separate hook for each event you want to monitor. You may be interested in “Deploy started,” “Deploy succeeded,” and “Deploy failed.”

The webhook route code itself should be very similar to the GitHub one. The following lines can be used to construct a message from the request body:

const { state, name, ssl_url: url } = req.body;

const message = {
  text: `Deployment ${state} for site ${name}`,
  url,
};
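
Filling in the rest of the route around that snippet, a sketch of web/webhooks/netlify.js (a file name and /netlify path of our own choosing) could look like this:

// web/webhooks/netlify.js
const queue = require('../../lib/queue');

const webhookRoute = (req, res) => {
  const { state, name, ssl_url: url } = req.body;

  const message = {
    text: `Deployment ${state} for site ${name}`,
    url,
  };

  queue
    .send('incoming', message)
    .then(() => {
      res.end('Received ' + JSON.stringify(message));
    })
    .catch(e => {
      console.error(e);
      res.status(500);
      res.end(e.message);
    });
};

module.exports = webhookRoute;

You would then register it in web/webhooks/index.js alongside the others, for example with router.post('/netlify', express.json(), netlifyRoute);.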

What’s Next?

Now we’ve got our first real service sending notifications to our app. But the fact that we’re dependent on a changeable ngrok URL feels a bit fragile. To get this running in a stable way, in our next post we’ll deploy our app to production on a free Heroku account.

Live Updates With Queues, WebSockets, and Push Notifications. Part 3: WebSockets

Live updates can get your users information they need sooner, and prevent them from operating off of outdated information. To explore the topic, we'll create an app that allows us to receive notifications from services like GitHub, Netlify, and Heroku. In part 3, we'll build out WebSockets functionality to accomplish these live updates.

In parts one and two of this series, we set up a frontend and backend to view notifications from third-party services like GitHub, Netlify, and Heroku. It works like this:

diagram of HTTP endpoints sending data to a queue, which a worker reads from

Now our client is set up to view our messages, but we need to quit and restart the app to get any updates. We could add pull-to-refresh functionality, but it’d be much nicer if we could automatically receive updates from the server when a new message is received. Let’s build out WebSockets functionality to accomplish these live updates. Here’s an illustration of how the flow of data will work:

diagram of worker pushing data via a WebSocket

If you like, you can download the completed server project and the completed client project for part 3.

Adding WebSockets to the Server

There are a few different libraries that can provide WebSocket functionality to Node apps. For the sake of this tutorial, we’ll use websocket:

$ yarn add websocket

In our worker, after we handle a message on the incoming queue and save the message to the database, we’ll send a message out on another queue indicating that we should deliver that message over the WebSocket. We’ll call that new queue socket. Make the following change in workers/index.js:

const handleIncoming = message =>
  repo
    .create(message)
    .then(record => {
      console.log('Saved ' + JSON.stringify(record));
+     return queue.send('socket', record);
    });

 queue
   .receive('incoming', handleIncoming)

Note the following sequence:

  • We receive a message on the incoming queue;
  • Then, we save the record to the database;
  • And finally, we send another message out on the socket queue.

Note that we haven’t implemented the WebSocket code to send the message to the client yet; we’ll do that next. So far, we’ve just sent a message to a new queue that the WebSocket code will watch.

Now let’s implement the WebSocket code. In the web folder, create a file socket.js and add the following:

const WebSocketServer = require('websocket').server;

const configureWebSockets = httpServer => {
  const wsServer = new WebSocketServer({ httpServer });
};

module.exports = configureWebSockets;

We create a function configureWebSockets that allows us to pass in a Node httpServer and creates a WebSocketServer from it.

Next, let’s add some boilerplate code to allow a client to establish a WebSocket connection:

 const configureWebSockets = httpServer => {
   const wsServer = new WebSocketServer({ httpServer });
+
+  let connection;
+
+  wsServer.on('request', function(request) {
+    connection = request.accept(null, request.origin);
+    console.log('accepted connection');
+
+    connection.on('close', function() {
+      console.log('closing connection');
+      connection = null;
+    });
+  });
 };

All we do is save the connection in a variable and add a little logging to indicate when we’ve connected and disconnected. Note that our server is only allowing one connection; if a new one comes in, it’ll be overwritten. In a production application you would want to structure your code to handle multiple connections. Some WebSocket libraries will handle multiple connections for you.
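
If you did want to handle multiple clients with this same library, one sketch (purely illustrative, not part of our tutorial app) is to keep a set of open connections and broadcast to all of them:

const connections = new Set();

wsServer.on('request', function(request) {
  const connection = request.accept(null, request.origin);
  connections.add(connection);

  connection.on('close', function() {
    connections.delete(connection);
  });
});

// When a message needs to go out, broadcast to every open connection:
const broadcast = message => {
  for (const connection of connections) {
    connection.sendUTF(JSON.stringify(message));
  }
};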

Next, we want to listen on the socket queue we set up before, and send an outgoing message on our WebSocket connection when we get one:

 const WebSocketServer = require('websocket').server;
+const queue = require('../lib/queue');

 const configureWebSockets = httpServer => {
...
   wsServer.on('request', function(request) {
...
   });
+
+  queue
+    .receive('socket', message => {
+      if (!connection) {
+        console.log('no WebSocket connection');
+        return;
+      }
+      connection.sendUTF(JSON.stringify(message));
+    })
+    .catch(console.error);
 }

When a message arrives on the socket queue, if there is no WebSocket client connection we do nothing; if there is one, we send the message out over it.

Now, we just need to call our configureWebSockets function, passing our HTTP server to it. Open web/index.js and add the following:

 const listRouter = require('./list');
+const configureWebSockets = require('./socket');

 const app = express();
...
 const server = http.createServer(app);
+configureWebSockets(server);

By calling our function, which in turn calls new WebSocketServer(), we enable our server to accept requests for WebSocket connections.
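
If you'd like to sanity-check the server before wiring up the client, a command-line WebSocket client such as wscat can connect to the endpoint (this assumes you're happy to run it via npx; it isn't part of the tutorial):

$ npx wscat -c ws://localhost:3000

With that connection open, and both node web and node workers running, sending a test webhook with curl in another terminal should print the saved record in the wscat session.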

Adding WebSockets to the Client

Now we need to update our Expo client to make that WebSocket connection to the backend and accept the messages it sends, updating the screen in the process. On the frontend we don’t need to add a dependency to handle WebSockets; the WebSocket API is built into React Native’s JavaScript runtime.

Open src/MessageList.js and add the following:

 const httpUrl = Platform.select({
   ios: 'http://localhost:3000',
   android: 'http://10.0.2.2:3000',
 });
+const wsUrl = Platform.select({
+  ios: 'ws://localhost:3000',
+  android: 'ws://10.0.2.2:3000',
+});
+
+let socket;
+
+const setUpWebSocket = addMessage => {
+  if (!socket) {
+    socket = new WebSocket(wsUrl);
+    console.log('Attempting Connection...');
+
+    socket.onopen = () => {
+      console.log('Successfully Connected');
+    };
+
+    socket.onclose = event => {
+      console.log('Socket Closed Connection: ', event);
+      socket = null;
+    };
+
+    socket.onerror = error => {
+      console.log('Socket Error: ', error);
+    };
+  }
+
+  socket.onmessage = event => {
+    addMessage(JSON.parse(event.data));
+  };
+};

 const loadInitialData = async setMessages => {

This creates a function setUpWebSocket that ensures our WebSocket is ready to go. If the WebSocket is not already opened, it opens it and hooks up some logging. Whether or not it was already open, we configure the WebSocket to pass any message it receives along to the passed-in addMessage function.

Now, let’s call setUpWebSocket from our component function:

   useEffect(() => {
     loadInitialData(setMessages);
   }, []);

+  useEffect(() => {
+    setUpWebSocket(newMessage => {
+      setMessages([newMessage, ...messages]);
+    });
+  }, [messages]);
+
   return (
     <View style={{ flex: 1 }}>

We call setUpWebSocket in a useEffect hook. We pass it a function allowing it to append a new message to the state. This effect depends on the messages state.

As a result of these dependencies, when the messages change, we create a new addMessage callback that appends the message to the updated messages array, and then we call setUpWebSocket again with that updated callback. This is why we wrote setUpWebSocket to work whether or not the WebSocket is already established; it will be called multiple times.
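
As an aside, one variation (not what we use in this tutorial) is to pass a functional update to setMessages; React then supplies the latest messages array itself, the effect no longer needs to depend on messages, and it can run just once on mount:

useEffect(() => {
  setUpWebSocket(newMessage => {
    // Functional update: React passes the current messages, so no dependency is needed
    setMessages(existingMessages => [newMessage, ...existingMessages]);
  });
}, []);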

With this, we’re ready to give our WebSockets a try! Make sure you have both Node services running in different terminals:

$ node web
$ node workers

Then reload our Expo app:

  • In the iOS Simulator, press Command-Control-Z to bring up the developer menu, then tap “Reload JS Bundle”
  • In the Android Emulator, press Command-M to bring up the developer menu, then tap “Reload”

In yet another terminal, send in a new message:

$ curl http://localhost:3000/webhook -d "this is for WebSocketyness"

You should see the message appear in the Expo app right away, without any action needed by the user. We’ve got live updates!

What’s Next?

Now that we’ve proven out that we can get live updates to our app, we should move beyond our simple webhook and get data from real third-party services. In the next part, we’ll set up a webhook to get notifications from GitHub about pull request events.

Live Updates With Queues, WebSockets, and Push Notifications. Part 2: React Native Apps with Expo

Live updates can get your users information they need sooner, and prevent them from operating off of outdated information. To explore the topic, we'll create an app that allows us to receive notifications from services like GitHub, Netlify, and Heroku. In part 2, we'll create a React Native client app using Expo.

In part 1 of our series, we created a Node.js backend for our Notifier app that receives messages via a webhook and sends them over a RabbitMQ queue to a worker process, which then saves them to a MongoDB database. This is a good foundation for our Node backend upon which we’ll be able to add live updates in future posts.

Before we do any more work on the backend, let’s create a React Native client app using Expo so we’ll have a frontend that’s ready for a great live-update experience as well. One of Expo’s features is great cross-platform push notification support, so we’ll be able to benefit from that in part 6 of the series.

If you like, you can download the completed client project for part 2.

Setting Up the Project

If you haven’t built an app with Expo before, this tutorial will walk you through running the app on a virtual device on either Android or iOS. You will need one of the following installed on your development machine:

  • Xcode, for the iOS Simulator
  • Android Studio, with an Android virtual device set up, for the Android Emulator

Next, install the Expo CLI globally:

$ npm install -g expo-cli

Then create a new project:

$ expo init notifier-client

You’ll be prompted to answer a few questions; choose the following answers:

  • Choose a template: blank
  • Name: Notifier
  • Slug: notifier-client
  • Use Yarn to install dependencies? Y

After the project setup completes, go into the project directory and add a few more dependencies:

$ cd notifier-client
$ yarn add axios react-native-elements

Here’s what they’re for:

  • axios is a popular HTTP client.
  • react-native-elements is a UI library that will make our super-simple app look a bit nicer.

Next, let’s start the Expo development server:

$ yarn start

 

This should open Expo’s dev server in your browser. It looks something like this:

metro bundler

If you want to run on Android, make sure you’ve followed Expo’s instructions to start an Android virtual device. If you want to run on iOS, Expo will start the virtual device for you.

Now, in the browser window click either “Run on Android device/emulator” or “Run on iOS Simulator.” In the appropriate virtual device you should see a build progress bar and, when it completes, the message “Open up App.js to start working on your app!”.

default screen android and ios

Let’s do that thing they just said!

Loading Data From the Server

Replace the contents of App.js with the following:

import React, { Fragment } from 'react';
import { SafeAreaView, StatusBar } from 'react-native';
import { ThemeProvider } from 'react-native-elements';
import MainScreen from './src/MainScreen';

export default function App() {
  return (
    <ThemeProvider>
      <Fragment>
        <StatusBar barStyle="dark-content" />
        <SafeAreaView style={{ flex: 1 }}>
          <MainScreen />
        </SafeAreaView>
      </Fragment>
    </ThemeProvider>
  );
}

Note that at this point the React Native app won’t build; it will again once we’ve completed the next few steps.

The changes to App.js will do the following:

  • Hooks up React Native Elements’ ThemeProvider so we can use Elements.
  • Sets up a top status bar.
  • Confines our content to the safe area of the screen, so we don’t overlap hardware features such as the iPhone X notch.
  • Delegates the rest of the UI to a MainScreen component we haven’t created yet.

Now let’s create that MainScreen component. Create a src folder, then a MainScreen.js inside it, and add the following contents:

import React from 'react';
import { View } from 'react-native';
import MessageList from './MessageList';

export default function MainScreen() {
  return (
    <View style={{ flex: 1 }}>
      <MessageList />
    </View>
  );
}

This file doesn’t do much yet; we’ll add more to it in a future post. Right now it just displays a MessageList we haven’t created yet. On to that component!

Create src/MessageList.js and add the following:

import React, { useState, useEffect } from 'react';
import { FlatList, Platform, View } from 'react-native';
import { ListItem } from 'react-native-elements';
import axios from 'axios';

const httpUrl = Platform.select({
  ios: 'http://localhost:3000',
  android: 'http://10.0.2.2:3000',
});

const loadInitialData = async setMessages => {
  const messages = await axios.get(`${httpUrl}/list`);
  setMessages(messages.data);
};

export default function MessageList() {
  const [messages, setMessages] = useState([]);

  useEffect(() => {
    loadInitialData(setMessages);
  }, []);

  return (
    <View style={{ flex: 1 }}>
      <FlatList
        data={messages}
        keyExtractor={item => item._id}
        renderItem={({ item }) => (
          <ListItem
            title={item.text}
            bottomDivider
          />
        )}
      />
    </View>
  );
}

Here’s what’s going on here:

  • In our component function, we set up a messages state item.
  • We set up an effect to call a loadInitialData function the first time the component mounts. We pass it the setMessages function so it can update the state.
  • loadInitialData makes a web service request and stores the data in the response. The way to make HTTP requests to your local development machine differs between the iOS Simulator (http://localhost) and Android Emulator (http://10.0.2.2), so we use React Native’s Platform.select() function to return the appropriate value for the device we’re on.
  • We render a FlatList which is React Native’s performant scrollable list. The list contains React Native Elements ListItems. For now we just display the text of the message.

Run the following command in the Node app folder to make sure our notifier Node app from part 1 is up:

$ node web

Reload our Expo app on the virtual device:

  • In the Android Emulator, press Command-M to bring up the developer menu, then tap “Reload”.
  • In the iOS Simulator, press Command-Control-Z to bring up the developer menu, then tap “Reload JS Bundle”.

When the app reloads, you should see a list of the test messages you entered on your server:

messages on phone

What’s Next?

With this, the basics of our client app are in place, and we’re set to begin adding live updates across our stack. In the next part we’ll introduce WebSockets that allow us to push updates to the client.

Live Updates With Queues, WebSockets, and Push Notifications. Part 1: RabbitMQ Queues and Workers

Live updates can get your users information they need sooner, and prevent them from operating off of outdated information. To explore the topic, we'll create an app that allows us to receive notifications from services like GitHub, Netlify, and Heroku. For our first post, we'll create the Node backend that will store and route our messages.

Traditional web and native applications load data when the user requests it, but there are now a variety of well-established technologies to deliver live updates to your users. Live updates can get your users information they need sooner and prevent them from operating on outdated information. This can be a major advantage over competitors’ apps that do not use live updates.

To explore the topic of live-updating applications, let’s create an app called Notifier that allows us to receive notifications about things that happen in various services, like GitHub pull requests or deployments on Netlify or Heroku. We’ll receive these notifications via webhooks, then send them back to a mobile application using WebSockets and push notifications. This architecture could work for any web or mobile application, but in our case we’ll use React Native so we can get push notifications in a really straightforward way.

The final project will work like this:

For our first post, let’s create the Node backend that will store and route our messages. Rather than a typical monolithic backend, we’ll use RabbitMQ to communicate between separate components. For this part, the backend will only serve messages when the client requests them; in future parts we’ll add the live-update functionality.

If you like, you can download the completed server project for part 1.

Setting Up the Project

Before we get fancy, we’ll create a simple Express app that connects directly to a database. This is much simpler than where our project will end up, and it works like so:

There are good reasons to be cautious about using MongoDB in production. One of the benefits of MongoDB, and many other NoSQL databases, is that they offer better support for horizontal scaling than traditional SQL databases. This could be useful for a live-updating system like ours if the traffic ever got extremely high. But be careful to weigh the pros and cons of SQL and NoSQL databases before making a choice for your production database.

Install the following on your development machine:

  • Node.js and Yarn
  • MongoDB
  • RabbitMQ

Be sure to start Mongo and RabbitMQ using the appropriate mechanism for how you installed them.

Create a new folder and initialize it as a Node project:

$ mkdir notifier-server
$ cd notifier-server
$ yarn init -y

Add a few runtime dependencies:

$ yarn add express body-parser mongoose

Here’s what they’re for:

  • express is a lightweight web server.
  • body-parser provides a middleware to handle plain-text request bodies to complement Express’s built-in JSON-handling middleware.
  • mongoose is a database client for MongoDB.

Connecting to MongoDB

Rather than interspersing database code throughout our app we’ll use what’s called the “repository pattern”: we’ll create a module that hides the database implementation and just allows the rest of the app to read and write data. Create a lib folder, and a lib/repo.js file inside it.

In that file, first, connect to the Mongo database:

const mongoose = require('mongoose');

const dbUrl = 'mongodb://localhost:27017/notifier';

mongoose.connect(dbUrl, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
});

mongoose.connection.on('error', console.error);

Next, we need to define the core data type of our application. Let’s call it a “message”: something we receive from an external service letting us know that something happened.

Let’s define a Mongoose schema and model for it:

const messageSchema = new mongoose.Schema({
  text: String,
  url: String,
});

const Message = mongoose.model('Message', messageSchema);

Finally, we’ll create the functions we want to expose to the rest of the app:

const create = attrs => new Message(attrs).save();

const list = () => Message.find().then(messages => messages.slice().reverse());

module.exports = { create, list };

  • create saves a new message record with the attributes we pass it.
  • list returns all the message records in the database in the reverse order they were created (see the usage sketch below).
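
Here’s that quick usage sketch, run from the project root; the message attributes are just example values, not part of the app:

const repo = require('./lib/repo');

// Save a message, then read everything back, newest first.
repo
  .create({ text: 'Build succeeded', url: 'https://example.com/builds/1' })
  .then(() => repo.list())
  .then(messages => console.log(messages))
  .catch(console.error);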

Connecting to the Web

With our repository defined we can create our web service to provide access to it.

Create a web folder and an index.js file inside it. In that file create an Express app and listen on the configured port:

const http = require('http');
const express = require('express');

const app = express();

const server = http.createServer(app);

const { PORT = 3000 } = process.env;
server.listen(PORT);
console.log(`listening on port ${PORT}`);

Note that instead of using Express’s app.listen() shorthand, we are using Node’s built-in http module to create a server based on the Express app and then calling server.listen(). Setting up the HTTP server explicitly like this will allow us to add WebSockets in a future blog post.
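
As a preview of why that matters, here’s a rough sketch of how a WebSocket server could later be attached to that same server object. This uses the ws package purely for illustration; it isn’t part of this post’s code:

// Sketch only: attach a WebSocket server to the existing HTTP server.
const WebSocket = require('ws');

const wss = new WebSocket.Server({ server });
wss.on('connection', socket => {
  socket.send(JSON.stringify({ text: 'connected' }));
});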

Next, import two routers that we will define shortly and add them to the app:

 const http = require('http');
 const express = require('express');
+const webhookRouter = require('./webhook');
+const listRouter = require('./list');

 const app = express();

+app.use('/webhook', webhookRouter);
+app.use('/list', listRouter);

 const server = http.createServer(app);

Now let’s define those routers.

First, the webhook. Webhooks are a mechanism for one web application to inform another of an event via an HTTP request. Many services offer webhook integrations. Over the course of this series we’ll integrate with GitHub, Netlify, and Heroku. To start out, we’ll create the simplest possible webhook that allows us to POST any text content we like. This will allow us to easily test out the architecture that we’ll build future webhooks on.

In the web folder create a webhook.js file and add the following:

const express = require('express');
const bodyParser = require('body-parser');
const repo = require('../lib/repo');

const webhookRoute = (req, res) => {
  const message = {
    text: req.body,
  };
  repo
    .create(message)
    .then(record => {
      res.end('Saved ' + JSON.stringify(record));
    })
    .catch(e => {
      console.error(e);
      res.status(500);
      res.end(e.message);
    });
};

const router = express.Router();
router.post('/', bodyParser.text({ type: '*/*' }), webhookRoute);

module.exports = router;

This will receive any text posted to it, save it to our repo as the text attribute of a message, and return an affirmative response.

Next, let’s provide a way to read the messages in the database. In the web folder, create list.js and add the following:

const express = require('express');
const repo = require('../lib/repo');

const listRoute = (req, res) => {
  repo
    .list()
    .then(messages => {
      res.setHeader('content-type', 'application/json');
      res.end(JSON.stringify(messages));
    })
    .catch(e => {
      console.error(e);
      res.status(500);
      res.setHeader('content-type', 'application/json');
      res.end(JSON.stringify({ error: e.message }));
    });
};

const router = express.Router();
router.get('/', listRoute);

module.exports = router;

With this, we’re ready to try our app. Start the web process:

$ node web

In another terminal, POST a few messages to the app using curl:

$ curl http://localhost:3000/webhook -d "this is a message"
$ curl http://localhost:3000/webhook -d "this is another message"

The -d flag provides an HTTP body for the request; it also makes curl send a POST request by default, which is what we want here.

Now, request the list of data:

$ curl http://localhost:3000/list

You should receive back the messages you posted in JSON format (formatting added here for clarity):

[
  {
    "_id":"5dae1d549a0ade04888a1ac6",
    "text":"this is another message",
    "__v":0
  },
  {
    "_id":"5dae1d499a0ade04888a1ac5",
    "text":"this is a message",
    "__v":0
  }
]

The _id values automatically assigned by MongoDB will be different from these, and __v is the document version key Mongoose adds by default.

Our app runs fine so far, but what if saving the data to the database were slow? Requests from the third-party service could time out while waiting for the save. We don’t need to address this pre-emptively, but suppose we decided it was important for our app to do so. How could we avoid this timeout?
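
To picture the problem, imagine the repo’s create were artificially slow. This is purely a hypothetical sketch, not part of the app; every webhook response would then be held open for the full duration of the save:

// Hypothetical: a save that takes ten seconds would keep the webhook's HTTP
// response, and the calling service, waiting the whole time.
const slowCreate = attrs =>
  new Promise(resolve => setTimeout(resolve, 10000)).then(() =>
    new Message(attrs).save(),
  );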

Connecting to RabbitMQ

We can decouple the processing of the data from receiving it. When we receive it, we just insert it into a queue. A separate worker process will pick up data added to the queue and do whatever we like with it. We’ll use RabbitMQ to handle the queueing, and the amqplib client library to communicate with it. Here’s an illustration of that flow of communication:

To start, add one more runtime dependency: amqplib is a client for RabbitMQ.

$ yarn add amqplib

As with our database, instead of accessing RabbitMQ directly from the rest of the app, we’ll wrap it in a module that hides the implementation. Create a lib/queue.js file and add the following:

const amqp = require('amqplib');

const queueUrl = 'amqp://localhost';

const channel = () => {
  return amqp.connect(queueUrl).then(connection => connection.createChannel());
};

First, we provide a private helper function channel that will connect to the queue and create a channel for communication.

Next, let’s define a send function to allow us to send a message on a given queue:

const send = (queue, message) =>
  channel().then(channel => {
    const encodedMessage = JSON.stringify(message);
    // durable: false means the queue won't survive a broker restart,
    // which is fine for this demo.
    channel.assertQueue(queue, { durable: false });
    // RabbitMQ message bodies are binary, so wrap the JSON in a Buffer.
    channel.sendToQueue(queue, Buffer.from(encodedMessage));
    console.log('Sent to "%s" message %s', queue, encodedMessage);
  });

Note that we serialize the message to JSON, so we can handle any object structure that’s serializable.

Next, let’s create a receive function allowing us to listen for messages on a given queue, calling a passed-in handler when a message arrives:

const receive = (queue, handler) =>
  channel().then(channel => {
    channel.assertQueue(queue, { durable: false });
    console.log('Listening for messages on queue "%s"', queue);
    // noAck: true means messages count as delivered as soon as they reach us,
    // so a message that fails in the handler won't be redelivered.
    channel.consume(queue, msg => handler(JSON.parse(msg.content.toString())), {
      noAck: true,
    });
  });

Finally, we export these two functions:

module.exports = { send, receive };

Using Queueing in Our App

Let’s see these in use. We’ll change our webhook route to enqueue the data instead of saving it to the database. Then we’ll define a separate worker process to save that message to the database.

Open web/webhook.js and make the following changes:

 const bodyParser = require('body-parser');
-const repo = require('../lib/repo');
+const queue = require('../lib/queue');

 const webhookRoute = (req, res) => {
   const message = {
     text: req.body,
   };
-  repo
-    .create(message)
-    .then(record => {
-      res.end('Saved ' + JSON.stringify(record));
-    })
+  queue
+    .send('incoming', message)
+    .then(() => {
+      res.end('Received ' + JSON.stringify(message));
+    })
     .catch(e => {
       console.error(e);
       res.status(500);
       res.end(e.message);
     });
 };

Instead of saving our message to the repo, now we enqueue it.

Note one other significant difference: we can no longer confirm in the response that the record has been saved to the database, because that hasn’t happened yet; we can only say that the message was received. We also no longer have the complete record, including its database ID, to return; we can only echo back the message as we received it.
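
If you want the response itself to signal that the work is queued rather than complete, one option (not used in this post) is to return a 202 Accepted status instead of the default 200. Inside webhookRoute, that might look like:

queue
  .send('incoming', message)
  .then(() => {
    res.status(202); // 202 Accepted: queued for processing, not yet saved
    res.end('Received ' + JSON.stringify(message));
  })
  .catch(e => {
    console.error(e);
    res.status(500);
    res.end(e.message);
  });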

Now our data is being sent to a queue named incoming. How can we listen for messages arriving on that queue so we can actually save them to the database? We can create a worker to do so.

Create a folder workers at the root of your project, then a file index.js inside it. Add the following contents to it:

const queue = require('../lib/queue');
const repo = require('../lib/repo');

const handleIncoming = message =>
  repo
    .create(message)
    .then(record => {
      console.log('Saved ' + JSON.stringify(record));
    });

queue
  .receive('incoming', handleIncoming)
  .catch(console.error);

This is pretty simple: we define a handleIncoming function that saves each incoming message to the database and logs the saved record, then pass it to queue.receive to listen on the incoming queue.

We will need to run these as two separate processes. Quit and restart the existing web process:

$ node web

Then, in another terminal, start the worker:

$ node workers

Send a message:

$ curl http://localhost:3000/webhook -d "this is a message to be enqueued"

Both terminals will appear to update instantly. In the web process you’ll see:

Sent to "incoming" message {"text":"this is a message to be enqueued"}

And in the worker process you’ll see:

Saved {"_id":"5dae2346fc89d91e09576e70","text":"this is a message to be enqueued","__v":0}

So our two processes are now working together. Our webhook process receives the webhook, puts it on a queue, and returns an HTTP response as quickly as possible. Our worker process receives the message from the queue and saves it to the database.

What’s Next?

In this post we’ve built a good foundation for our Node backend, on which we’ll build live-update functionality in future posts. In our next post we’ll create a React Native client app using Expo, so we’ll have a frontend that’s ready for a great live-update experience as well.

React Native Is Native
https://bignerdranch.com/blog/react-native-is-native/

React Native apps are native apps. It’s a heck of a coup they’ve pulled off, and while I have my concerns around adopting the technology, “Is it native?” isn’t one of them.

But what is “native”?

I suspect whether you agree with me hinges on what we each understand by “native”. Here’s what I have in mind:

  • Uses the platform’s preferred UI toolkit
  • Wires into the platform’s usual mechanisms for event dispatch (touches, keys, motion, location changes, etc.)

Overall: Capable of achieving the same ends as any app developed using the platform’s preferred tooling by fundamentally the same mechanisms.

I claim React Native meets that bar.

Same Mechanisms Differently Marshaled

I’ve spent most of my years as a professional programmer working on Mac & iOS apps. From my Apple-native point of view, React Native is a very elaborate way to marshal UIViews and other UIKit mechanisms towards the usual UIKit ends:

  • View creation and configuration: Many iOS apps rely on XIB files for their view creation and configuration. If you haven’t looked at a XIB file on disk before, have a look: It’s your UI, rendered in XML. React Native uses hand-written JSX to rig up its UIViews, but that’s a difference more in markup flavor than in kind. (It codegens something nearer manual view creation and configuration code, but that’s also something in vogue amongst some iOS devs.)
  • Layout: It’s not enough to just stuff some views inside some others. You sometimes want them to have a certain shape. iOS devs can use raw AutoLayout constraints, or Ye Olde Visual Format Language, or the newer anchor-based API that leverages the type system, or Snap, or Masonry, or, or… Whatever it is you use, it ultimately boils down to setting the view’s frame. Well, React Native likes to use a flexbox-alike to describe its layout. It, too, boils down to frame updates by way of Yoga (see the sketch just after this list).
  • Event Dispatch: You got your UIViews, you got your IBActions. You’re coding your event reaction using JavaScript rather than Swift or Objective-C, but, eh – what’s one more C-family language?
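
Here’s that sketch: a tiny example of the flexbox-alike in action. The component and style values are purely illustrative, not from any particular app:

import React from 'react';
import { View } from 'react-native';

// Two children split the available width 1:2; Yoga turns this flexbox
// description into concrete frames for the underlying UIViews.
export default function SplitView() {
  return (
    <View style={{ flex: 1, flexDirection: 'row' }}>
      <View style={{ flex: 1, backgroundColor: 'lightgray' }} />
      <View style={{ flex: 2, backgroundColor: 'gray' }} />
    </View>
  );
}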

Language & Execution Context

Well, about that one more language. Let’s talk about animation jank and asynchrony.

What is “jank”? It’s jargon for what happens when it’s time for something to show up on screen, but your app can’t render the needed pixels fast enough to show that something. As Shawn Maust put it back in 2015 in “What the Jank?”:

“Jank” is any stuttering or choppiness that a user experiences when there is motion on the screen—like during scrolling, transitions, or animations.

The difference in language leads to something that may seem less than native at first glance. You see, there’s a context switch between UIKit-land and React Native JavaScript-action-handler-land, and at a high enough call rate – like, say, animation handlers that are supposed to run at the frame rate – the time taken in data marshaling and context switching can become noticeable.

Native apps aren’t immune from animation jank. It feels like there’s a WWDC session or three every year on how not to stutter when you scroll. But the overhead inherent in the technical mechanism eats some of your time budget, which means you get to sweep less inefficiency in your app code under Moore’s rug.

Native apps also aren’t immune from blocking rendering entirely. Do a bulk-import into Core Data on the main thread, parse a sufficiently large (or malicious) XML or JSON document on the main thread, or run a whole network request on the main thread, and the system watchdog will kill your app while leaving behind a death note of “8badf00d”. React Native’s context switch automatically enforces the best practice of doing work off the main thread: React Native developers naturally fall into the “pit of success” when it comes to aggressively pushing work off the main thread.

Asynchrony

How do you deal with the time taken by a function call? You do less work, or you do work on the other side of the bridge.

Or you surface that gap, that asynchrony, in your programming model with:

  • Callbacks
  • Delegates
  • Operations
  • Reactors (Run-Loop Observers)
  • Promises Futures Results Observables Streams Channels

Apple’s frameworks are rife with these mechanisms. Your standard IBAction-to-URLSession-to-spinner-to-view-update flow has a slow as a dog HTTP call in the middle. React Native’s IBAction-to-JSCore-to-view-update flow has a tiny little RPC bridge in the middle that often runs fast enough that you can pretend it’s synchronous. By the end of 2018, you may not even have to pretend – React Native will directly support synchronous cross-language calls where that’s advantageous.

React Native apps with their action handlers in JavaScript are no less native than iOS apps with their action handlers on a server on the other side of an HTTP API.

If you’ve worked on the common “all the brains are in our serverside API” flavor of iOS app, this should sound familiar. It should sound doubly familiar if that serverside API happens to be implemented in Node.js.

And, indeed, running the same language both serverside and clientside makes it a lot easier to change up which side of the pipe an operation happens on. (Such are the joys of isomorphic code, and it’s a small reason some are excited about Swift on the Server.)

Native Is As Native Does

React Native uses the same underlying mechanisms and benefits as much from Apple’s work on UIKit as does any other iOS app. React Native apps are native – perhaps even more native than many “iOS app as Web API frontend” apps!
