Live Updates With Queues, WebSockets, and Push Notifications. Part 6: Push Notifications with Expo
This is pretty great when we're running the app. But what about when it's in the background? In that case, we can use device push notifications to alert the user to events. Here's the complete architectural diagram of the system, with push notifications added:
Push notifications are available on the web as well (but, as of the time of this writing, not on Safari). However, users are frequently less willing to enable push notifications in web apps than in native mobile apps. Because of this, we chose to build our client app in React Native using Expo. Expo has great support for push notifications across both iOS and Android—let’s give them a try!
If you like, you can download the completed server project and the completed client project for the series.
Before we can send a push notification to a user, we need to request permission from them. In our Expo app, in src/MainScreen.js
, add a new PushPermissionRequester
component:
import React from 'react'; import { View } from 'react-native'; +import PushPermissionRequester from './PushPermissionRequester'; import MessageList from './MessageList'; export default function MainScreen() { return ( <View style={{ flex: 1 }}> + <PushPermissionRequester /> <MessageList /> </View> ); }
Now let’s implement PushPermissionRequester
. Create a src/PushPermissionRequester.js
file and enter the following:
import React, { useState } from 'react'; import { View } from 'react-native'; import { Notifications } from 'expo'; import * as Permissions from 'expo-permissions'; import { Button, Input } from 'react-native-elements'; const askForPushPermission = setToken => async () => { }; export default function PushPermissionRequester() { const [token, setToken] = useState('(token not requested yet)'); return ( <View> <Input value={token} /> <Button title="Ask Me for Push Permissions" onPress={askForPushPermission(setToken)} /> </View> ); }
This component keeps a push notification token in state: tapping the button requests it, and the Input displays it afterward. Now let's fill in askForPushPermission to request it:
const askForPushPermission = setToken => async () => { const { status: existingStatus } = await Permissions.getAsync( Permissions.NOTIFICATIONS, ); let finalStatus = existingStatus; if (existingStatus !== 'granted') { const { status } = await Permissions.askAsync(Permissions.NOTIFICATIONS); finalStatus = status; } console.log('push notification status ', finalStatus); if (finalStatus !== 'granted') { setToken(`(token ${finalStatus})`); } let token = await Notifications.getExpoPushTokenAsync(); setToken(token); };
This is boilerplate code from the Expo Push Notification docs. What's happening is: we check the current notification permission status, ask the user for permission if it hasn't already been granted, and log the final status; if permission still isn't granted we show that status in place of the token; finally, we ask Expo for the device's push token and store it in state.
Reload the Expo app on your virtual device, and tap on “Ask Me for Push Permissions.” You should see the message “(token undetermined),” and a yellow box error at the bottom of the screen. The error says “Error: Must be on a physical device to get an Expo Push Token.”
Time to take this app to your real phone!
On Android, there are a few different ways to get the app running on your physical device. On iOS things are a bit more locked down. Let’s look at the approach that will work for both iOS and Android.
On your phone, search for “Expo” in the App Store or Google Play Store, respectively. This is a client app from Expo that allows you to run your app before going through the whole app publishing process, which is a nice speed boost. Download the Expo app. If you haven’t already created a free Expo account, create one. Then log in to the Expo app.
Now we need to get our React Native application published to Expo so we can load its JavaScript assets into the Expo app. Open the Metro Bundler browser tab that Expo opens when you run it. In the sidebar, click “Publish or republish project…”:
Choose a unique “URL Slug” for your app, then click “Publish project.”
Expo will take a minute or two to bundle up your JavaScript and upload it. Ultimately you should get a box at the bottom-right of the browser window saying “Successfully published to…”
Reopen the Expo app on your phone, go to the Profile tab, and in the “Published Projects” list you should see your app. Tap on it, and it should open and display the initial data from Heroku.
Now, tap “Ask Me for Push Permissions” again, and give permission. This time, on a physical device, it should work!
You should see a token that looks like ExponentPushToken[…]
, with a string of letters and numbers in between the square brackets. This is a token that uniquely identifies your app running in Expo on your device. You can use this to hit Expo’s API to send a push notification.
Select the whole token, copy it, and transfer it to your development computer somehow. Emailing yourself is always an option if nothing else!
Before we code anything, we can test this push notification out through Expo’s Push Notifications Tool. Make sure Expo is in the background on your phone. Then, on your development machine, go to the Push Notifications Tool.
Paste your full token including the string ExponentPushToken
into the “Expo Push Token” field. For “Message Title,” type something.
Scroll to the bottom of the page and click “Send a notification”. A push notification should appear on your phone from the Expo app, displaying the title you entered.
Feel free to play around with other fields in the push notification tool as well.
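If you prefer the command line, Expo's push service also exposes an HTTP API that the tool is a front end for. A request along these lines (with your real token substituted for the placeholder) should produce the same notification:
$ curl -H "Content-Type: application/json" -X POST https://exp.host/--/api/v2/push/send -d '{"to": "ExponentPushToken[...]", "title": "hello from curl", "body": "sent straight to the push API"}'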
Now that we have a token, we can provide it to our backend. In a production application, you would set up a way for each user to send that token up to the server and store it with their user account. Since user accounts aren’t the focus of our tutorial, we’re just going to set that token via an environment variable instead.
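For reference, that production setup could be as small as one extra Express route that the app calls once it has a token. This is only a sketch under assumptions that aren't part of this tutorial: the route path, the users module, and its savePushToken function are all hypothetical.

// Hypothetical web/push-tokens.js: the client POSTs its Expo push token after signing in.
const express = require('express');
const users = require('../lib/users'); // hypothetical repo that stores tokens per user

const router = express.Router();

router.post('/', express.json(), (req, res) => {
  const { userId, token } = req.body;
  users
    .savePushToken(userId, token) // persist the token alongside the user record
    .then(() => res.end('Token saved'))
    .catch(e => {
      console.error(e);
      res.status(500);
      res.end(e.message);
    });
});

module.exports = router;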
In our node app, in .env.sample
, add the following line:
CLOUDAMQP_URL=fake_cloudamqp_url +EXPO_PUSH_TOKEN=fake_expo_push_token MONGODB_URI=fake_mongodb_uri
In .env
add your token, filling in the real value. This is the value we wanted to keep out of our git repo; you don’t want me to find your push token and send spam to you!
CLOUDAMQP_URL=amqp://localhost +EXPO_PUSH_TOKEN=ExponentPushToken[...] MONGODB_URI=mongodb://localhost:27017/notifier
Next, add Expo’s SDK as a dependency to your Node app:
$ yarn add expo-server-sdk
As we did with MongoDB and RabbitMQ, let’s wrap Expo’s SDK in a module of our own, to hide it from the rest of our app. Create a lib/expo.js
file and add the following:
const Expo = require('expo-server-sdk').default; const token = process.env.EXPO_PUSH_TOKEN; const expo = new Expo(); async function push({ text }) { if (!Expo.isExpoPushToken(token)) { console.error(`Push token ${token} is not a valid Expo push token`); return; } const messages = [ { to: token, title: text, }, ]; console.log('sending to expo push', messages); const chunks = expo.chunkPushNotifications(messages); for (let chunk of chunks) { try { let ticketChunk = await expo.sendPushNotificationsAsync(chunk); console.log(ticketChunk); } catch (error) { console.error(error); } } } module.exports = { push };
We export a push function that our app can use to send a push notification; we only use the text field of the message. First we confirm that the Expo push token from the environment variable is valid, then we construct a message object with the structure Expo's Push Notification SDK expects. The SDK is built around sending push notifications in batches, which is overkill for our single token, but it's easy enough to work with: we chunk our one message and send each chunk, logging the resulting ticket or error so we can see what happened.
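If you later stored one token per user, the same SDK calls scale up naturally: build one message per token, let chunkPushNotifications split them into batches, and send each chunk. Here's a sketch along those lines; the tokens array is hypothetical (say, every valid token loaded from the database), not something this tutorial builds.

const Expo = require('expo-server-sdk').default;
const expo = new Expo();

// tokens: a hypothetical array of Expo push tokens, one per user/device.
async function pushToAll(tokens, { text }) {
  const messages = tokens
    .filter(token => Expo.isExpoPushToken(token)) // skip malformed tokens
    .map(token => ({ to: token, title: text }));

  // chunkPushNotifications splits the list to respect Expo's batch size limit.
  for (const chunk of expo.chunkPushNotifications(messages)) {
    try {
      const tickets = await expo.sendPushNotificationsAsync(chunk);
      console.log(tickets);
    } catch (error) {
      console.error(error);
    }
  }
}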
Now let’s send out a push notification from our worker. In an earlier part we mentioned that you could conceivably separate different webhook endpoints into different microservices or lambda functions for scalability. You could do the same thing with workers. But since we’re hosting on Heroku, which will give us one web dyno and one worker dyno for free, we’ll keep our worker code in a single worker process that is watching multiple queues.
How should we organize this one worker service with multiple concerns? Currently our worker is very small, so adding code to monitor a second queue to the same file wouldn’t clutter it up much. But for the sake of illustrating how to separate concerns, let’s refactor our worker into separate modules.
Create a workers/incoming.js
file, and copy and paste the require
and handleIncoming
code from workers/index.js
into it. Then export the handler:
const queue = require('../lib/queue'); const repo = require('../lib/repo'); const handleIncoming = message => repo .create(message) .then(record => { console.log('Saved ' + JSON.stringify(record)); return queue.send('socket', record); }); module.exports = handleIncoming;
Update workers/index.js
to import that function instead of duplicating it:
if (process.env.NODE_ENV !== 'production') { require('dotenv').config(); } const queue = require('../lib/queue'); -const repo = require('../lib/repo'); - -const handleIncoming = message => - repo - .create(message) - .then(record => { - console.log('Saved ' + JSON.stringify(record)); - return queue.send('socket', record); - }); +const handleIncoming = require('./incoming'); queue .receive('incoming', handleIncoming)
Now, where should we call our push
function? In this case, we could probably do it directly in handleIncoming
. But when you’re using a queue-based architecture it can be valuable to separate units of work into small pieces; that way if one part fails it can be retried without retrying the entire process. For example, if we can’t reach Expo’s push notification service, we don’t want a retry to inadvertently insert a duplicate record into our database.
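To make that concrete, here is a small retry wrapper you could layer over any handler so that only its own step is repeated on failure. It isn't part of this series' code; the helper and the retry count are just an illustration.

// Hypothetical helper: retry a queue handler a few times before giving up.
const withRetries = (handler, attempts = 3) => async message => {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await handler(message);
    } catch (error) {
      console.error(`Attempt ${attempt} failed:`, error.message);
      if (attempt === attempts) throw error; // surface the final failure to the caller
    }
  }
};

// Usage sketch: only the push step is retried; the database insert is not re-run.
// queue.receive('push', withRetries(handlePush)).catch(console.error);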
So instead, let’s create a new push
queue that will receive messages each time we have a push notification to send. In workers/incoming.js
, just like we send a message to the socket
queue, we’ll send one to the push
queue as well:
const handleIncoming = message => repo .create(message) .then(record => { console.log('Saved ' + JSON.stringify(record)); - return queue.send('socket', record); + return Promise.all([ + queue.send('socket', record), + queue.send('push', record), + ]); });
Note that we wrap our two queue.send
calls in a Promise.all()
and return the result; that way if either of the sends fails, the rejection will be propagated up and eventually logged with console.error
.
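If you ever wanted to know which of the two sends failed, rather than only seeing the first rejection, Promise.allSettled (available in Node 12.9 and later) is one alternative. A sketch of how the last step could look inside an async version of the handler:

// Sketch: report each failed send individually, then still surface an error.
const results = await Promise.allSettled([
  queue.send('socket', record),
  queue.send('push', record),
]);

const failures = results.filter(result => result.status === 'rejected');
failures.forEach(failure => console.error('send failed:', failure.reason));
if (failures.length > 0) {
  throw failures[0].reason; // rethrow so the failure still propagates to the caller
}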
Next, add a new workers/push.js
file with the following contents:
const expo = require('../lib/expo'); const handlePush = message => { console.log('handling push', message); return expo.push(message); }; module.exports = handlePush;
An extremely simple worker, this just forwards the received message along to our Expo module. Connect it in workers/index.js
:
const queue = require('../lib/queue'); const handleIncoming = require('./incoming'); +const handlePush = require('./push'); queue .receive('incoming', handleIncoming) .catch(console.error); +queue + .receive('push', handlePush) + .catch(console.error);
With this, we should be set up to send push notifications. Run your two node processes locally:
$ node web
$ node workers
Send a test notification:
$ curl http://localhost:3000/webhooks/test -d "this should be pushed"
You should see the push notification show up on your phone. Note that although your Expo app is pointing at your production server, it still receives the push notification triggered by your local server. That's because the notification is addressed to your device's Expo push token and delivered through Expo's push service; any backend that holds the token can send to it, regardless of which server the app talks to.
Our final step is to get push notifications working in production. Whereas our previous Heroku environment variables were provided for us by add-ons, we need to set our EXPO_PUSH_TOKEN
variable manually. There are two ways we can do this:

- From the CLI, run heroku config:set "EXPO_PUSH_TOKEN=ExponentPushToken[...]" (entering your full token as usual).
- Or, in the Heroku dashboard's config vars (Settings > Reveal Config Vars), enter EXPO_PUSH_TOKEN for the KEY and your token for the VALUE, then click Add.

Commit your latest changes, then push them to Heroku:
$ git add . $ git commit -m "updated for push tokens" $ git push heroku master
When your app finishes deploying, try sending it a webhook, filling in your app’s URL instead of mine:
$ curl https://murmuring-garden-42327.herokuapp.com/webhooks/test -d "push from production"
You should receive a push notification on your phone.
You can also try toggling your GitHub PR to confirm that your other webhooks deliver push notifications now too.
With that, our app is complete! Let’s review what we’ve built one last time:
We've been able to hook up to live updates coming from services like GitHub, Heroku, and Netlify. We set up a queue-based architecture so that, on real systems with far more load than this one, each piece of the process can run performantly. We push data to running apps over WebSockets, and to apps in the background using push notifications.
Adding live updates to your mobile or web applications using approaches such as these can be a big boost to your app’s usefulness to your users. If you’re a developer, give these technologies a try. And if Big Nerd Ranch could help train you in these technologies or help build the foundation of a new live-updating application for you, let us know!
Live Updates With Queues, WebSockets, and Push Notifications. Part 5: Deploying to Heroku
If we weren't using WebSockets, then functions-as-a-service would be a great option for deployment. But since WebSockets are stateful, you need to run a service like Amazon API Gateway in front of the functions to provide WebSocket statefulness, and setting that up can be tricky. Instead, we'll deploy our app on Heroku, an easy hosting platform that allows us to continue to use WebSockets in the normal way.
If you like, you can download the completed server project and the completed client project for part 5.
If you don’t already have a Heroku account, create one for free. (You may need to add a credit card, but it won’t be charged.) Then install and log in to the Heroku CLI.
Go into our node app’s directory in the terminal. If you aren’t already tracking your app in git for version control, initialize a git repo now:
$ git init . $ echo node_modules > .gitignore
Heroku’s main functionality can be accessed either through the web interface or through the CLI. In this post we’ll be using both, depending on which is easiest for any given step.
To begin, create a new Heroku app for our backend using the CLI:
$ heroku create
This will create a new app and assign it a random name—in my case, murmuring-garden-42327
. It will also add a git remote named heroku
to your repo alongside any other remotes you may have. You can see this by running the following command:
$ git remote -v heroku https://git.heroku.com/murmuring-garden-42327.git (fetch) heroku https://git.heroku.com/murmuring-garden-42327.git (push)
We aren’t quite ready to deploy our app to Heroku yet, but we can go ahead and set up our database and queue services. We’ll do this step in the Heroku dashboard. Go to the dashboard, then click on your new app, then the “Resources” tab.
Under Add-ons, search for “mongo”, then click “mLab MongoDB”.
A modal will appear allowing you to choose a plan. The “Sandbox – Free” plan will work fine for us. Click “Provision.”
Next, search for “cloud,” then click “CloudAMQP.”
The default plan works here too: “Little Lemur – Free,” so click “Provision” again.
This has set up our database and queue server. How can we access them? The services provide URLs to our app via environment variables. To see them, click “Settings,” then “Reveal Config Vars.”
From the CLI, you can run
heroku config
to show the environment variables.
Here’s what they’re for:
- CLOUDAMQP_APIKEY: we won't need this for our tutorial app
- CLOUDAMQP_URL: our RabbitMQ access URL
- MONGODB_URI: our MongoDB access URL

We need to update our application code to use these environment variables to access the backing services, but this raises a question: how can we set up analogous environment variables in our local environment? The dotenv
library is a popular approach: it allows us to set up variables in a .env
file in our app. Let’s refactor our app to use dotenv
.
First, add dotenv
as a dependency:
$ yarn add dotenv
Create two files, .env
and .env.sample
. It’s a good practice to not commit your .env
file to version control; so far our connection strings don’t have any secure info, but later we’ll add a variable that does. But if you create and commit a .env.sample
file with example data, this helps other users of your app see which environment variables your app uses. If you’re using git for version control, make sure .env
is in your .gitignore
file so it won’t be committed.
Add the following to .env.sample
:
CLOUDAMQP_URL=fake_cloudamqp_url MONGODB_URI=fake_mongodb_uri
This just documents for other users that these are the values needed.
Now let’s add the real values we’re using to .env
:
CLOUDAMQP_URL=amqp://localhost MONGODB_URI=mongodb://localhost:27017/notifier
Note that the name CLOUDAMQP_URL
is a bit misleading because we aren’t using CloudAMQP locally, just a general RabbitMQ server. But since that’s the name of the environment variable CloudAMQP sets up for us on Heroku, it’ll be easiest for us to use the same one locally. And since CloudAMQP is giving us a free queue server, we shouldn’t begrudge them a little marketing!
The values we set in the .env
file are the values from our lib/queue.js
and lib/repo.js
files respectively. Let’s replace the hard-coded values in those files with the environment variables. In lib/queue.js
:
-const queueUrl = 'amqp://localhost'; +const queueUrl = process.env.CLOUDAMQP_URL;
And in lib/repo.js
:
-const dbUrl = 'mongodb://localhost:27017/notifier'; +const dbUrl = process.env.MONGODB_URI;
Now, how can we load these environment variables? Add the following to the very top of both web/index.js
and workers/index.js
, even above any require()
calls:
if (process.env.NODE_ENV !== 'production') { require('dotenv').config(); }
When we are not in the production environment, this will load dotenv and instruct it to load the configuration. When we are in the production environment, the environment variables will be provided by Heroku automatically, and we won’t have a .env file, so we don’t need dotenv to run.
To make sure we haven’t broken our app for running locally, stop any node processes you have running, then start node web
and node workers
, run the Expo app, and post a test webhook:
$ curl http://localhost:3000/webhooks/test -d "this is with envvars"
The message should show up in Expo as usual.
Heroku is smart enough to automatically detect that we have a node app and provision an appropriate environment. But we need to tell Heroku what processes to run. We do this by creating a Procfile
at the root of our app and adding the following:
web: node web worker: node workers
This tells Heroku that it should run two processes, web
and worker
, and tells it the command to run for each.
Now we’re finally ready to deploy. The simplest way to do this is via a git push. Make sure all your changes are committed to git:
$ git add . $ git commit -m "preparing for heroku deploy"
Then push:
$ git push heroku master
This pushes our local master
branch to the master
branch on the heroku
remote. When Heroku sees changes to its master
branch, it triggers a deployment. We’ll be able to see the deployment process as it runs in the output of the git push
command.
Deployment will take a minute or two due to provisioning the server and downloading dependencies. In the end, we’ll get a message like:
remote: Released v7 remote: https://murmuring-garden-42327.herokuapp.com/ deployed to Heroku
We have one more step to do. Heroku will start the process named web by default, but any other processes we need to start ourselves. In our case, we need to start the worker process. We can do this a few different ways:

- From the CLI, run heroku ps:scale worker=1. This scales the process named worker to run on a single "dyno" (kind of like the Heroku equivalent of a server).
- Or, on the app's Resources tab in the Heroku dashboard, find the worker row, click the pencil icon to edit it, then set the slider to on, and click Confirm.

Now let's update our Expo client app to point to our production servers. We could set up environment variables there as well, but for the sake of this tutorial let's just change the URLs by hand. Make the following changes in MessageList.js
, putting in your Heroku app name in place of mine:
import React, { useState, useEffect, useCallback } from 'react'; -import { FlatList, Linking, Platform, View } from 'react-native'; +import { FlatList, Linking, View } from 'react-native'; import { ListItem } from 'react-native-elements'; ... -const httpUrl = Platform.select({ - ios: 'http://localhost:3000', - android: 'http://10.0.2.2:3000', -}); -const wsUrl = Platform.select({ - ios: 'ws://localhost:3000', - android: 'ws://10.0.2.2:3000', -}); +const httpUrl = 'https://murmuring-garden-42327.herokuapp.com'; +const wsUrl = 'wss://murmuring-garden-42327.herokuapp.com';
Note that the WebSocket URL uses the wss
protocol instead of ws
; this is the secure protocol, which Heroku makes available for us.
Reload your Expo app. It should start out blank because our production server doesn’t have any data in it yet. Let’s send a test webhook, again substituting your app’s name for mine:
$ curl https://murmuring-garden-42327.herokuapp.com/webhooks/test -d "this is heroku"
You should see your message show up. We’ve got a real production server!
Next, let’s set up a GitHub webhook pointing to our Heroku server. In the testing GitHub repo you created, go to Settings > Webhooks. Add a new webhook and leave the existing one unchanged; that way you can continue receiving events on your development server as well.
- For the Payload URL, use your Heroku app's URL with /webhooks/github appended.
- For the Content type, choose application/json.
Now head back to your test PR and toggle it open and closed a few times. You should see the messages show up in your Expo app.
Congratulations — now you have a webhooks-and-WebSockets app running in production!
Now that we have a Heroku app running, maybe we can set up webhooks for Heroku deployments as well!
Well, there’s one problem with that: it can be hard for an app that is being deployed to report on its deployment.
Surprisingly, if you set up a webhook for your Node app, you will get a message that the build started. It's able to do this because Heroku leaves the existing app running until the build completes, then swaps out the running version. You won't get a message over the WebSocket that the build completed, however: by that time the app has been restarted and your WebSocket connection is lost. The success message is still stored in the database, though, so if you reload the Expo app it will appear.
With that caveat in place — or if you have another Heroku app that you want to set up notifications for — here are a few pointers for how to do that.
To configure webhooks, open your site in the Heroku dashboard, then click “More > View webhooks.” Click “Create Webhook.” Choose the api:build
event type: that will allow you to receive webhook events when builds both start and complete.
The webhook route itself should be very similar to the GitHub one. The following code can be used to construct a message from the request body:
const { data: { app: { name }, status, }, } = req.body; const message = { text: `Build ${status} for app ${name}`, url: null, };
Note that the Heroku webhook doesn’t appear to send the URL of your app; if you want it to be clickable, you would need to use the Heroku Platform API to retrieve that info via another call.
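Putting that together with the same queue-sending pattern as the GitHub route, a complete webhook handler might look roughly like this. The web/webhooks/heroku.js file name and the route wiring are my assumptions, not something the post specifies:

const queue = require('../../lib/queue');

const webhookRoute = (req, res) => {
  const {
    data: {
      app: { name },
      status,
    },
  } = req.body;

  const message = {
    text: `Build ${status} for app ${name}`,
    url: null, // the Heroku payload doesn't include the app's URL
  };

  queue
    .send('incoming', message)
    .then(() => res.end('Received ' + JSON.stringify(message)))
    .catch(e => {
      console.error(e);
      res.status(500);
      res.end(e.message);
    });
};

module.exports = webhookRoute;

// In web/webhooks/index.js, the route would be mounted like the others:
// router.post('/heroku', express.json(), require('./heroku'));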
Now that we’ve gotten our application deployed to production, there’s one more piece we can add to make a fully-featured mobile app: push notifications to alert us in the background. To get push notifications, we’ll need to deploy our Expo app to a real hardware device.
Live Updates With Queues, WebSockets, and Push Notifications. Part 4: Webhooks
If you like, you can download the completed server project and the completed client project for part 4.
Before we create any of these webhook integrations, let’s refactor how our webhook code is set up to make it easy to add additional integrations. We’ll keep our existing “test” webhook for easy experimentation; we’ll just add webhooks for real services alongside it.
We could set up multiple webhook endpoints in a few different ways. If we were worried about having too much traffic for one application server to handle we could run each webhook as a separate microservice so that they could be scaled independently. Alternatively, we could run each webhook as a separate function on a function-as-a-service platform like AWS Lambda. Each microservice or function would need to have access to send messages to the same queue, but other than that they could be totally independent.
In our case, we’re going to deploy our app on Heroku. That platform only allows us to expose a single service to HTTP traffic, so let’s make each webhook a separate route within the same Node server.
Create a web/webhooks
folder. Move web/webhook.js
to web/webhooks/test.js
. Make the following changes to only export the route, not to set up a router:
-const express = require('express'); -const bodyParser = require('body-parser'); -const queue = require('../lib/queue'); +const queue = require('../../lib/queue'); const webhookRoute = (req, res) => { ... }; -const router = express.Router(); -router.post('/', bodyParser.text({ type: '*/*' }), webhookRoute); - -module.exports = router; +module.exports = webhookRoute;
We’ll define the router in a new web/webhooks/index.js
file instead. Create it and add the following:
const express = require('express'); const bodyParser = require('body-parser'); const testRoute = require('./test'); const router = express.Router(); router.post('/test', bodyParser.text({ type: '*/*' }), testRoute); module.exports = router;
Now we just need to make a tiny change to web/index.js
to account for the fact that we’ve pluralized “webhooks”:
const express = require('express'); -const webhookRouter = require('./webhook'); +const webhookRouter = require('./webhooks'); const listRouter = require('./list'); ... app.use('/list', listRouter); -app.use('/webhook', webhookRouter); +app.use('/webhooks', webhookRouter); const server = http.createServer(app);
This moves our webhook endpoint from /webhook
to /webhooks/test
. Now any future webhooks we add can be at other paths under /webhooks/
.
If your node web
process is running, stop and restart it. Make sure node workers
is running as well. You’ll then need to reload your Expo app to re-establish the WebSocket connection.
Now you can send a message to the new path and confirm our test webhook still works:
$ curl http://localhost:3000/webhooks/test -d "this is the new endpoint"
That message should show up in the Expo app as usual.
We need to do another preparatory step as well. Because we’ve been sending webhooks from our local machine, we’ve been able to connect to localhost
. But external services don’t have access to our localhost
. One way to get around this problem is ngrok
, a great free tool to give you a publicly-accessible URL to your local development machine. Create an ngrok account if you don’t already have one, then sign in.
Install ngrok by following the instructions on the dashboard to download it, or, if you’re on a Mac and use Homebrew, you can run brew cask install ngrok
. Provide ngrok with your auth token as instructed on the ngrok web dashboard.
Now you can open a public tunnel to your local server. With node web
running, in another terminal run:
$ ngrok http 3000
You should see output like the following:
In the output, look for the lines that start with “Forwarding” – these show the .ngrok.io
subdomain that has been temporarily set up to access your service. Note that there is an HTTP and HTTPS one; you may as well use the HTTPS one.
To confirm it works, send a POST to your test webhook using the ngrok URL instead of localhost. Be sure to fill in your domain name instead of the sample one I’m using here:
$ curl https://abcd1234.ngrok.io/webhooks/test -d "this is via ngrok"
The message should appear in the client as usual.
Now that we’ve got a subdomain that can be accessed from third-party services, we’re ready to build out the webhook endpoint for GitHub to hit. Create a web/webhooks/github.js
file and add the following:
const queue = require('../../lib/queue'); const webhookRoute = (req, res) => { console.log(JSON.stringify(req.body)); const { repository: { name: repoName }, pull_request: { title: prTitle, html_url: prUrl }, action, } = req.body; const message = { text: `PR ${action} for repo ${repoName}: ${prTitle}`, url: prUrl, }; console.log(message); queue .send('incoming', message) .then(() => { res.end('Received ' + JSON.stringify(message)); }) .catch(e => { console.error(e); res.status(500); res.end(e.message); }); }; module.exports = webhookRoute;
In our route, we do a few things:

- We log the raw request body so we can see exactly what GitHub sent us.
- We pull the repository name, pull request title and URL, and action out of the payload, and build a message with a text field describing the event and a related url the user can visit.
- As with the test webhook, we send this message to our incoming queue to be processed.

Connect this new route in web/webhooks/index.js:
const testRoute = require('./test'); +const githubRoute = require('./github'); const router = express.Router(); router.post('/test', bodyParser.text({ type: '*/*' }), testRoute); +router.post('/github', express.json(), githubRoute); module.exports = router;
Note that in this case we aren’t using the bodyParser.text()
middleware, but instead Express’s built-in express.json()
middleware. This is because we’ll be receiving JSON data instead of plain text.
Restart node web
to pick up these changes. You don’t need to restart ngrok
.
Now let’s create a new repo to use for testing. Go to github.com and create a new repo; you could call it something like notifier-test-repo
. We don’t care about the contents of this repo; we just need to be able to open PRs. So choose the option to “Initialize this repository with a README”.
When the repo is created, go to Settings > Webhooks, then click “Add webhook”. Choose the following options
/webhooks/github
appended.application/json
Note that your ngrok URL will change every time you restart ngrok. You will need to update any testing webhook configuration in GitHub and other services to continue receiving webhooks.
Now we just need to create a pull request to test out this webhook. The easiest way is to click the edit icon at the top right of our readme on GitHub’s site. Add some text to the readme, then at the bottom choose “Create a new branch for this commit and start a pull request,” and click “Commit changes,” then click “Create pull request.”
In your client app you should see a new message “PR opened for repo notifier-test-repo: Update README.md:”
If you want to see more messages, or if something went wrong and you need to troubleshoot, you can repeatedly click “Close pull request” then “Reopen pull request;” each one will send a new event to your webhook.
Our test webhook didn’t pass along any URLs. Now that we have messages from GitHub with URLs attached, let’s update our client app to allow tapping on an item to visit its URL. Open src/MessageList.js
and make the following change:
import React, { useState, useEffect, useCallback } from 'react'; -import { FlatList, Platform, View } from 'react-native'; +import { FlatList, Linking, Platform, View } from 'react-native'; import { ListItem } from 'react-native-elements'; ... <FlatList data={messages} keyExtractor={item => item._id} renderItem={({ item }) => ( <ListItem title={item.text} bottomDivider + onPress={() => item.url && Linking.openURL(item.url)} /> )} />
Reload the client app, tap on one of the GitHub notifications, and you’ll be taken to the PR in Safari. Pretty nice!
Now we’ve got a working GitHub webhook integration. We’ll wait a bit to set up the webhook integration with Heroku; first we’ll deploy our app to Heroku. That way we’ll be sure we have a Heroku app to receive webhooks for!
Netlify is another deployment service with webhook support; it’s extremely popular for frontend apps. We won’t walk through setting up Netlify webhooks in detail, but here are a few pointers if you use that service and would like to try integrating.
To configure webhooks, open your site in the Netlify dashboard, then click Settings > Build & deploy > Deploy notifications. Click Add notification > Outgoing webhook. Netlify requires you to set up a separate hook for each event you want to monitor. You may be interested in “Deploy started,” “Deploy succeeded,” and “Deploy failed.”
The webhook route code itself should be very similar to the GitHub one. The following lines can be used to construct a message from the request body:
const { state, name, ssl_url: url } = req.body; const message = { text: `Deployment ${state} for site ${name}`, url, };
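Wired into the same structure as the other routes, a full handler could look something like the sketch below; the web/webhooks/netlify.js file name and the mounting line are assumptions on my part:

const queue = require('../../lib/queue');

const webhookRoute = (req, res) => {
  const { state, name, ssl_url: url } = req.body;
  const message = { text: `Deployment ${state} for site ${name}`, url };

  queue
    .send('incoming', message)
    .then(() => res.end('Received ' + JSON.stringify(message)))
    .catch(e => {
      console.error(e);
      res.status(500);
      res.end(e.message);
    });
};

module.exports = webhookRoute;

// web/webhooks/index.js: router.post('/netlify', express.json(), require('./netlify'));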
Now we’ve got our first real service sending notifications to our app. But the fact that we’re dependent on a changeable ngrok URL feels a bit fragile. So we can get this running in a stable way, in our next post we’ll deploy our app to production on a free Heroku account.
Live Updates With Queues, WebSockets, and Push Notifications. Part 3: WebSockets
Now our client is set up to view our messages, but we need to quit and restart the app to get any updates. We could add pull-to-refresh functionality, but it'd be much nicer if we could automatically receive updates from the server when a new message is received. Let's build out WebSockets functionality to accomplish these live updates. Here's an illustration of how the flow of data will work:
If you like, you can download the completed server project and the completed client project for part 3.
There are a few different libraries that can provide WebSocket functionality to Node apps. For the sake of this tutorial, we’ll use websocket
:
$ yarn add websocket
In our worker, after we handle a message on the incoming
queue and save the message to the database, we’ll send a message out on another queue indicating that we should deliver that message over the WebSocket. We’ll call that new queue socket
. Make the following change in workers/index.js
:
const handleIncoming = message => repo .create(message) .then(record => { console.log('Saved ' + JSON.stringify(record)); + return queue.send('socket', record); }); queue .receive('incoming', handleIncoming)
Note the following sequence:
incoming
queue;socket
queue.Note that we haven’t yet implemented the WebSocket code to send the response to the client yet; we’ll do that next. So far, we’ve just sent a message to a new queue that the WebSocket code will watch.
Now let’s implement the WebSocket code. In the web
folder, create a file socket.js
and add the following:
const WebSocketServer = require('websocket').server; const configureWebSockets = httpServer => { const wsServer = new WebSocketServer({ httpServer }); }; module.exports = configureWebSockets;
We create a function configureWebSockets
that allows us to pass in a Node httpServer
and creates a WebSocketServer
from it.
Next, let’s add some boilerplate code to allow a client to establish a WebSocket connection:
const configureWebSockets = httpServer => { const wsServer = new WebSocketServer({ httpServer }); + + let connection; + + wsServer.on('request', function(request) { + connection = request.accept(null, request.origin); + console.log('accepted connection'); + + connection.on('close', function() { + console.log('closing connection'); + connection = null; + }); + }); };
All we do is save the connection
in a variable and add a little logging to indicate when we’ve connected and disconnected. Note that our server is only allowing one connection; if a new one comes in, it’ll be overwritten. In a production application you would want to structure your code to handle multiple connections. Some WebSocket libraries will handle multiple connections for you.
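One way to do that, shown here only as a sketch and not as part of this tutorial's code, is to keep every open connection in a Set and broadcast to all of them:

// Sketch: track every open connection instead of a single one.
const connections = new Set();

wsServer.on('request', request => {
  const connection = request.accept(null, request.origin);
  connections.add(connection);

  connection.on('close', () => {
    connections.delete(connection);
  });
});

// Broadcasting a queue message then means writing to each connection in turn.
const broadcast = message => {
  for (const connection of connections) {
    connection.sendUTF(JSON.stringify(message));
  }
};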
Next, we want to listen on the socket
queue we set up before, and send an outgoing message on our WebSocket connection when we get one:
const WebSocketServer = require('websocket').server; +const queue = require('../lib/queue'); const configureWebSockets = httpServer => { ... wsServer.on('request', function(request) { ... }); + + queue + .receive('socket', message => { + if (!connection) { + console.log('no WebSocket connection'); + return; + } + connection.sendUTF(JSON.stringify(message)); + }) + .catch(console.error); }
When a message arrives on the socket queue and there is no WebSocket client connection, we do nothing. If there is a WebSocket client connection, we send the message out over it.
Now, we just need to call our configureWebSockets
function, passing our HTTP server to it. Open web/index.js
and add the following:
const listRouter = require('./list'); +const configureWebSockets = require('./socket'); const app = express(); ... const server = http.createServer(app); +configureWebSockets(server);
By calling our function, which in turn calls new WebSocketServer()
, we enable our server to accept requests for WebSocket connections.
Now we need to update our Expo client to make that WebSocket connection to the backend and accept messages it sends, updating the screen in the process. On the frontend we don’t need to add a dependency to handle WebSockets; the WebSocket
API is built-in to React Native’s JavaScript runtime.
Open src/MessageList.js
and add the following:
const httpUrl = Platform.select({ ios: 'http://localhost:3000', android: 'http://10.0.2.2:3000', }); +const wsUrl = Platform.select({ + ios: 'ws://localhost:3000', + android: 'ws://10.0.2.2:3000', +}); + +let socket; + +const setUpWebSocket = addMessage => { + if (!socket) { + socket = new WebSocket(wsUrl); + console.log('Attempting Connection...'); + + socket.onopen = () => { + console.log('Successfully Connected'); + }; + + socket.onclose = event => { + console.log('Socket Closed Connection: ', event); + socket = null; + }; + + socket.onerror = error => { + console.log('Socket Error: ', error); + }; + } + + socket.onmessage = event => { + addMessage(JSON.parse(event.data)); + }; +}; const loadInitialData = async setMessages => {
This creates a function setUpWebSocket
that ensures our WebSocket is ready to go. If the WebSocket is not already opened, it opens it and hooks up some logging. Whether or not it was already open, we configure the WebSocket to pass any message it receives along to the passed-in addMessage
function.
Now, let’s call setUpWebSocket
from our component function:
useEffect(() => { loadInitialData(setMessages); }, []); + useEffect(() => { + setUpWebSocket(newMessage => { + setMessages([newMessage, ...messages]); + }); + }, [messages]); + return ( <View style={{ flex: 1 }}>
We call setUpWebSocket in a useEffect hook. We pass it a function allowing it to append a new message to the state. This effect depends on the messages state. As a result of that dependency, whenever messages changes we create a new addMessage callback that appends incoming messages to the updated messages array, and we call setUpWebSocket again with that updated callback. This is why we wrote setUpWebSocket to work whether or not the WebSocket is already established; it will be called multiple times.
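As an aside, a common alternative is to pass a functional update to setMessages; then the callback doesn't need to read the messages variable at all, and the effect can run just once with an empty dependency array. A sketch of that variation:

// Sketch: the updater receives the current array, so the effect has no dependencies.
useEffect(() => {
  setUpWebSocket(newMessage => {
    setMessages(existingMessages => [newMessage, ...existingMessages]);
  });
}, []);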
With this, we’re ready to give our WebSockets a try! Make sure you have both Node services running in different terminals:
$ node web
$ node workers
Then reload our Expo app:
In yet another terminal, send in a new message:
$ curl http://localhost:3000/webhook -d "this is for WebSocketyness"
You should see the message appear in the Expo app right away, without any action needed by the user. We’ve got live updates!
Now that we’ve proven out that we can get live updates to our app, we should move beyond our simple webhook and get data from real third-party services. In the next part, we’ll set up a webhook to get notifications from GitHub about pull request events.
Live Updates With Queues, WebSockets, and Push Notifications. Part 2: React Native Apps with Expo
Before we do any more work on the backend, let's create a React Native client app using Expo so we'll have a frontend that's ready for a great live-update experience as well. One of Expo's features is great cross-platform push notification support, so we'll be able to benefit from that in part 6 of the series.
If you like, you can download the completed client project for part 2.
If you haven't built an app with Expo before, this tutorial will walk you through running the app on a virtual device on either Android or iOS. You will need to have one of the following installed on your development machine: Android Studio (for an Android virtual device) or Xcode (for the iOS Simulator).
Next, install the Expo CLI globally:
$ npm install -g expo-cli
Then create a new project:
$ expo init notifier-client
You’ll be prompted to answer a few questions; choose the following answers:
After the project setup completes, go into the project directory and add a few more dependencies:
$ cd notifier-client $ yarn add axios react-native-elements
Here's what they're for:

- axios is a popular HTTP client.
- react-native-elements is a UI library that will make our super-simple app look a bit nicer.

Next, let's start the Expo development server:
$ yarn start
This should open Expo’s dev server in your browser. It looks something like this:
If you want to run on Android, make sure you’ve followed Expo’s instructions to start an Android virtual device. If you want to run on iOS, Expo will start the virtual device for you.
Now, in the browser window click either “Run on Android device/emulator” or “Run on iOS Simulator.” In the appropriate virtual device you should see a build progress bar and, when it completes, the message “Open up App.js to start working on your app!”.
Let’s do that thing they just said!
Replace the contents of App.js
with the following:
import React, { Fragment } from 'react'; import { SafeAreaView, StatusBar } from 'react-native'; import { ThemeProvider } from 'react-native-elements'; import MainScreen from './src/MainScreen'; export default function App() { return ( <ThemeProvider> <Fragment> <StatusBar barStyle="dark-content" /> <SafeAreaView style={{ flex: 1 }}> <MainScreen /> </SafeAreaView> </Fragment> </ThemeProvider> ); }
Note that at this point the React Native app won’t build for a few steps.
The changes to App.js
will do the following:
ThemeProvider
so we can use Elements.MainScreen
component we haven’t created yet.Now let’s create that MainScreen
component. Create a src
folder, then a MainScreen.js
inside it, and add the following contents:
import React from 'react'; import { View } from 'react-native'; import MessageList from './MessageList'; export default function MainScreen() { return ( <View style={{ flex: 1 }}> <MessageList /> </View> ); }
This file doesn’t do much yet; we’ll add more to it in a future post. Right now it just displays a MessageList
we haven’t created yet. On to that component!
Create src/MessageList.js
and add the following:
import React, { useState, useEffect } from 'react'; import { FlatList, Platform, View } from 'react-native'; import { ListItem } from 'react-native-elements'; import axios from 'axios'; const httpUrl = Platform.select({ ios: 'http://localhost:3000', android: 'http://10.0.2.2:3000', }); const loadInitialData = async setMessages => { const messages = await axios.get(`${httpUrl}/list`); setMessages(messages.data); }; export default function MessageList() { const [messages, setMessages] = useState([]); useEffect(() => { loadInitialData(setMessages); }, []); return ( <View style={{ flex: 1 }}> <FlatList data={messages} keyExtractor={item => item._id} renderItem={({ item }) => ( <ListItem title={item.text} bottomDivider /> )} /> </View> ); }
Here's what's going on here:

- We set up a messages state item to hold the messages we retrieve.
- We call the loadInitialData function the first time the component mounts, passing it the setMessages function so it can update the state.
- loadInitialData makes a web service request and stores the returned data in state. The way to make HTTP requests to your local development machine differs between the iOS Simulator (http://localhost) and the Android Emulator (http://10.0.2.2), so we use React Native's Platform.select() function to return the appropriate value for the device we're on.
- We render a FlatList, React Native's performant scrollable list, containing React Native Elements ListItems. For now we just display the text of each message.
Node app from part 1 is up:
$ node web
Reload our Expo app on the virtual device:
When the app reloads, you should see a list of the test messages you entered on your server:
With this, the basics of our client app are in place, and we’re set to begin adding live updates across our stack. In the next part we’ll introduce WebSockets that allow us to push updates to the client.
Live Updates With Queues, WebSockets, and Push Notifications. Part 1: RabbitMQ Queues and Workers
To explore the topic of live-updating applications, let's create an app called Notifier that allows us to receive notifications about things that happen in various services, like GitHub pull requests or deployments on Netlify or Heroku. We'll receive these notifications via webhooks, then send them back to a mobile application using WebSockets and push notifications. This architecture could work for any web or mobile application, but in our case we'll use React Native so we can get push notifications in a really straightforward way.
The final project will work like this:
For our first post, let’s create the Node backend that will store and route our messages. Rather than a typical monolithic backend, we’ll use RabbitMQ to communicate between separate components. For this part, it will only provide the messages when requested from the server. In future parts we’ll add the live-update functionality.
If you like, you can download the completed server project for part 1.
Before we get fancy, we’ll create a simple Express app that connects directly to a database. This is much simpler than where our project will end up and it works like so:
There are good reasons to be cautious about using MongoDB in production. One of the benefits of MongoDB, and many other NoSQL databases, is that they offer better support for horizontal scaling than traditional SQL databases. This could be useful for a live-updating system like ours if the traffic ever got extremely high. But be careful to weigh the pros and cons of SQL and NoSQL databases before making a choice for your production database.
Install the following on your development machine: Node.js, Yarn, MongoDB, and RabbitMQ.
Be sure to start Mongo and RabbitMQ using the appropriate mechanism for the way you installed it.
Create a new folder and initialize it as a Node project:
$ mkdir notifier-server $ cd notifier-server $ yarn init -y
Add a few runtime dependencies:
$ yarn add express body-parser mongoose
Here's what they're for:

- express is a lightweight web server.
- body-parser provides a middleware to handle plain-text request bodies to complement Express's built-in JSON-handling middleware.
- mongoose is a database client for MongoDB.

Rather than interspersing database code throughout our app, we'll use what's called the "repository pattern": we'll create a module that hides the database implementation and just allows the rest of the app to read and write data. Create a lib folder, and a lib/repo.js file inside it.
In that file, first, connect to the Mongo database:
const mongoose = require('mongoose'); const dbUrl = 'mongodb://localhost:27017/notifier'; mongoose.connect(dbUrl, { useNewUrlParser: true, useUnifiedTopology: true, }); mongoose.connection.on('error', console.error);
Next, we need to define the core data type of our application. Let’s call it a “message”: something we receive from an external service letting us know that something happened.
Let’s define a Mongoose schema and model for it:
const messageSchema = new mongoose.Schema({ text: String, url: String, }); const Message = mongoose.model('Message', messageSchema);
Finally, we’ll create the functions we want to expose to the rest of the app:
const create = attrs => new Message(attrs).save(); const list = () => Message.find().then(messages => messages.slice().reverse()); module.exports = { create, list };
- create saves a new message record with the attributes we pass it.
- list returns all the message records in the database in the reverse order they were created.

With our repository defined, we can create our web service to provide access to it.
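One small aside on list before moving on: reversing in JavaScript copies the whole array, so if the message collection ever grew large you could ask MongoDB to sort instead. A sketch of that variation (not what this tutorial uses):

// Sketch: let MongoDB return newest-first instead of reversing in memory.
const list = () => Message.find().sort({ _id: -1 });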
Create a web
folder and an index.js
file inside it. In that file create an Express app and listen on the configured port:
const http = require('http'); const express = require('express'); const app = express(); const server = http.createServer(app); const { PORT = 3000 } = process.env; server.listen(PORT); console.log(`listening on port ${PORT}`);
Note that instead of using Express’s app.listen()
shorthand, we are using Node’s built-in http
module to create a server based on the Express app and then calling server.listen()
. Setting up the HTTP server explicitly like this will allow us to add WebSockets in a future blog post.
Next, import two routers that we will define shortly and add them to the app:
const http = require('http'); const express = require('express'); +const webhookRouter = require('./webhook'); +const listRouter = require('./list'); const app = express(); +app.use('/webhook', webhookRouter); +app.use('/list', listRouter); const server = http.createServer(app);
Now let’s define those routers.
First, the webhook. Webhooks are a mechanism for one web application to inform another of an event via an HTTP request. Many services offer webhook integrations. Over the course of this series we’ll integrate with GitHub, Netlify, and Heroku. To start out, we’ll create the simplest possible webhook that allows us to POST any text content we like. This will allow us to easily test out the architecture that we’ll build future webhooks on.
In the web
folder create a webhook.js
file and add the following:
const express = require('express'); const bodyParser = require('body-parser'); const repo = require('../lib/repo'); const webhookRoute = (req, res) => { const message = { text: req.body, }; repo .create(message) .then(record => { res.end('Saved ' + JSON.stringify(record)); }) .catch(e => { console.error(e); res.status(500); res.end(e.message); }); }; const router = express.Router(); router.post('/', bodyParser.text({ type: '*/*' }), webhookRoute); module.exports = router;
This will receive any text posted to it, save it to our repo as the text
attribute of a message, and return an affirmative response.
Next, let’s provide a way to read the messages in the database. In web
create list.js
and add the following:
const express = require('express'); const repo = require('../lib/repo'); const listRoute = (req, res) => { repo .list() .then(messages => { res.setHeader('content-type', 'application/json'); res.end(JSON.stringify(messages)); }) .catch(e => { console.error(e); res.status(500); res.setHeader('content-type', 'application/json'); res.end(JSON.stringify({ error: e.message })); }); }; const router = express.Router(); router.get('/', listRoute); module.exports = router;
With this, we are ready to try our app. Start the app:
$ node web
In another terminal, POST a few messages to the app using curl
:
$ curl http://localhost:3000/webhook -d "this is a message" $ curl http://localhost:3000/webhook -d "this is another message"
The -d
flag allows us to provide an HTTP body to the request. Setting the -d
flag will make our request a POST
request by default, which is what we want here.
Now, request the list of data:
$ curl http://localhost:3000/list
You should receive back the messages you posted in JSON format (formatting added here for clarity):
[ { "_id":"5dae1d549a0ade04888a1ac6", "text":"this is another message", "__v":0 }, { "_id":"5dae1d499a0ade04888a1ac5", "text":"this is a message", "__v":0 } ]
Your _id
s that are automatically assigned by MongoDB will be different than these.
Our app runs fine so far, but what if the process to save the data to the database was kind of slow? This could cause the requests from the third-party service to time out. We don’t need to pre-emptively address this, but say we decided in our app that it was important to do so. How could we avoid this timeout?
We can decouple the processing of the data from receiving it. When we receive it, we just insert it into a queue. A separate worker process will pick up data added to the queue and do whatever we like with it. We’ll use RabbitMQ to handle the queueing, and the amqplib
client library to communicate with it. Here’s an illustration of that flow of communication:
To start, add one more runtime dependency: amqplib
is a client for RabbitMQ.
$ yarn add amqplib
As with our database, instead of accessing it directly from the rest of the app we’ll wrap it in a module to hide the implementation. Create a lib/queue.js
file and add the following:
const amqp = require('amqplib'); const queueUrl = 'amqp://localhost'; const channel = () => { return amqp.connect(queueUrl).then(connection => connection.createChannel()); };
First, we provide a private helper function channel
that will connect to the queue and create a channel for communication.
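Note that, as written, every send or receive opens a fresh connection and channel, which is fine for a tutorial. If you wanted to reuse a single connection, memoizing the connection promise would be enough; a sketch, not part of the original module:

// Sketch: cache the connection promise so later calls reuse the same connection.
let connectionPromise;

const channel = () => {
  if (!connectionPromise) {
    connectionPromise = amqp.connect(queueUrl);
  }
  return connectionPromise.then(connection => connection.createChannel());
};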
Next, let’s define a send
function to allow us to send a message on a given queue:
const send = (queue, message) => channel().then(channel => { const encodedMessage = JSON.stringify(message); channel.assertQueue(queue, { durable: false }); channel.sendToQueue(queue, Buffer.from(encodedMessage)); console.log('Sent to "%s" message %s', queue, encodedMessage); });
Note that we serialize the message
to JSON, so we can handle any object structure that’s serializable.
Next, let’s create a receive
function allowing us to listen for messages on a given queue, calling a passed-in handler when a message arrives:
const receive = (queue, handler) => channel().then(channel => { channel.assertQueue(queue, { durable: false }); console.log('Listening for messages on queue "%s"', queue); channel.consume(queue, msg => handler(JSON.parse(msg.content.toString())), { noAck: true, }); });
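With noAck: true, RabbitMQ considers a message handled the moment it is delivered, so a crash inside the handler loses it. If you wanted at-least-once delivery, you could acknowledge manually instead; the following is only a sketch of that variation, not the code this series uses:

// Sketch: acknowledge only after the handler's promise resolves.
const receive = (queue, handler) =>
  channel().then(channel => {
    channel.assertQueue(queue, { durable: false });
    channel.consume(queue, msg => {
      Promise.resolve(handler(JSON.parse(msg.content.toString())))
        .then(() => channel.ack(msg)) // success: remove the message from the queue
        .catch(() => channel.nack(msg)); // failure: ask RabbitMQ to redeliver it
    });
  });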
Finally, we export these two functions:
module.exports = { send, receive };
Let’s see these in use. We’ll change our webhook
route to enqueue the data instead of saving it to the database. Then we’ll define a separate worker process to save that message to the database.
Open web/webhook.js
and make the following changes:
const bodyParser = require('body-parser'); -const repo = require('../lib/repo'); +const queue = require('../lib/queue'); const webhookRoute = (req, res) => { const message = { text: req.body, }; - repo - .create(message) - .then(record => { - res.end('Saved ' + JSON.stringify(record)); - }) + queue + .send('incoming', message) + .then(() => { + res.end('Received ' + JSON.stringify(message)); + }) .catch(e => { console.error(e); res.status(500); res.end(e.message); }); };
Instead of saving our message to the repo, now we enqueue it.
Note one other significant difference: we can no longer confirm in the response that the record has been "saved" to the database, because that hasn't happened yet; we can only say that the message was received. Also, we won't have the complete record, including its database ID, to return; we can only echo back the message as we received it.
Now our data is being sent to a queue named incoming
. How can we listen for it to come in to actually save it to the database? We can create a worker to do so.
Create a folder workers
at the root of your project, then a file index.js
inside it. Add the following contents to it:
const queue = require('../lib/queue'); const repo = require('../lib/repo'); const handleIncoming = message => repo .create(message) .then(record => { console.log('Saved ' + JSON.stringify(record)); }); queue .receive('incoming', handleIncoming) .catch(console.error);
This is pretty simple; we define a function to handle an incoming message. It saves it to the database, then logs it out.
We will need to run these as two separate processes. Quit and restart the existing web process:
$ node web
Then, in another terminal, start the worker:
$ node workers
Send a message:
$ curl http://localhost:3000/webhook -d "this is a message to be enqueued"
Both terminals will appear to update instantly. In the web process you’ll see:
Sent to "incoming" message {"text":"this is a message to be enqueued"}
And in the worker process you’ll see:
Saved {"_id":"5dae2346fc89d91e09576e70","text":"this is a message to be enqueued","__v":0}
So our two processes are now working together. Our webhook process receives the webhook, puts it on a queue, and returns an HTTP response as quickly as possible. Our worker process receives the message from the queue and saves it to the database.
In this post we've built a good foundation for our Node backend, on which we'll add live-update functionality in future posts. In our next post we'll create a React Native client app using Expo so we'll have a frontend that's ready for a great live-update experience as well.
React Native Is Native
React Native apps are native apps. It's a heck of a coup they've pulled off, and while I have my concerns around adopting the technology, "Is it native?" isn't one of them.
I suspect whether you agree with me hinges on what we each understand by “native”. Here’s what I have in mind:
Overall: Capable of achieving the same ends as any app developed using the platform’s preferred tooling by fundamentally the same mechanisms.
I claim React Native meets that bar.
I’ve spent most of my years as a professional programmer working on Mac & iOS apps. From my Apple-native point of view, React Native is a very elaborate way to marshal UIViews and other UIKit mechanisms towards the usual UIKit ends:
Well, about that one more language. Let’s talk about animation jank and asynchrony.
What is “jank”? It’s jargon for what happens when it’s time for something to show up on screen, but your app can’t render the needed pixels fast enough to show that something. As Shawn Maust put it back in 2015 in “What the Jank?”:
“Jank” is any stuttering or choppiness that a user experiences when there is
motion on the screen—like during scrolling, transitions, or animations.
The difference in language drives to something that may seem less than native at first glance. You see, there’s a context switch between UIKit-land and React Native JavaScript-action-handler-land, and at a high enough call rate – like, say, animation handlers that are supposed to run at the frame rate – the time taken in data marshaling and context switching can become noticeable.
Native apps aren’t immune from animation jank. It feels like there’s a WWDC session or three every year on how not to stutter when you scroll. But the overhead inherent in the technical mechanism eats some of your time budget, which means you get to sweep less inefficiency in your app code under Moore’s rug.
Native apps also aren’t immune from blocking rendering entirely. Do a bulk-import into Core Data on the main thread, parse a sufficiently large (or malicious) XML or JSON document on the main thread, or run a whole network request on the main thread, and the system watchdog will kill your app while leaving behind a death note of “8badf00d”. React Native’s context switch automatically enforces the best practice of doing work off the main thread: React Native developers naturally fall into the “pit of success” when it comes to aggressively pushing work off the main thread.
How do you deal with the time taken by a function call? You do less work, or you do work on the other side of the bridge.
Or you surface that gap, that asynchrony, in your programming model with:
Apple’s frameworks are rife with these mechanisms. Your standard IBAction-to-URLSession-to-spinner-to-view-update flow has a slow as a dog HTTP call in the middle. React Native’s IBAction-to-JSCore-to-view-update flow has a tiny little RPC bridge in the middle that often runs fast enough that you can pretend it’s synchronous. By the end of 2018, you may not even have to pretend – React Native will directly support synchronous cross-language calls where that’s advantageous.
React Native apps with their action handlers in JavaScript are no less native than an iOS app with their action handlers on a server on the other side of an HTTP API.
If you’ve worked on the common “all the brains are in our serverside API” flavor of iOS app, this should sound familiar. It should sound doubly familiar if that serverside API happens to be implemented in Node.js.
And, indeed, running the same language both serverside and clientside makes it a lot easier to change up which side of the pipe an operation happens on. (Such are the joys of isomorphic code, and it’s a small reason some are excited about Swift on the Server.)
React Native uses the same underlying mechanisms and benefits as much from Apple’s work on UIKit as does any other iOS app. React Native apps are native – perhaps even more native than many “iOS app as Web API frontend” apps!