The Five Steps of API-First Design

And why API-First Design should be a part of your next web service project 

Some of the greatest inventions have occurred by accident, but when it comes to creating amazing web services, intentionality is the name of the game. And there’s not much more intentional than API-First Design. 

See, web services built without intention often suffer in quality: they tend to be delivered slowly and to arrive rife with errors. And once put into place, low-quality web services are expensive to update.

But what is API-First Design?

From a high level, API-First Design is a purposeful approach to building high-quality web services in a way that avoids the pitfalls of an ad-hoc build. This approach can be broken down into five steps:

  • treat the API of your web service as a first-class citizen
  • appoint product ownership for the API to keep the focus on quality
  • design the API contract before building the web service in code
  • automatically generate documentation, client/server stubs, and mocked backends from the API contract
  • build the frontend and backend in parallel based on the API contract.

The Five Steps of the API Process

1. Treat the API of your web service as a first-class citizen.

API-First Design is like a lot of great things—it takes time, effort, and money to do correctly. That means learning a new process, having dedicated folks on the project, and realizing that it’s an ongoing process. 

But, if you’ve looked at all options and API-First Design is the way to go, then it’s time to put all you have behind it. In practice, that will require committing to the process by getting buy-in from your entire organization. You’ll also need to clear some space in both your organization and budget to ensure you get the most out of your work. 

2. Appoint product ownership for the API to keep the focus on high quality.

Quality products always begin with having a product owner, and it’s essential that your API receives the same star treatment. See, building out an API without leadership means that changes will occur in a more haphazard way, leading to poor quality. But, with a product owner, you’ll keep the focus on what’s best for the build and best for the web service overall. 

And since this person will need to engage stakeholders from both technical and non-technical perspectives, the product owner doesn’t have to be a developer or architect. In fact, their most important role is to advocate for your particular process and ensure that its goals are prioritized and met.

3. Design the API contract before building the web service in code.

The next important step is to design the API before writing code. You’re going to want to create something that both your team and stakeholders can understand. It doesn’t have to be super fancy—think sticky notes or a good old-fashioned whiteboard session. Remember, this isn’t coded, and it’s vital that everyone on your team, regardless of their role, is able to understand what is being created and is able to add suggestions.

As your API-First design solidifies, translate the API contract into a machine-readable format, such as OpenAPI or GraphQL Schema. This format will have major payoffs in the next step.
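
To make this concrete, here is a minimal sketch of what such a machine-readable contract might look like. The endpoint, fields, and file name are hypothetical, invented purely for illustration; in practice you would usually author the contract as an openapi.yaml or openapi.json document, but it is expressed here as a small Node module so the mock-server sketch in the next step can load it directly.

// contract.js -- a minimal, illustrative API contract in OpenAPI 3 shape.
// Everything in here is example data, not part of any real project.
module.exports = {
  openapi: '3.0.0',
  info: { title: 'Orders API', version: '1.0.0' },
  paths: {
    '/orders/{id}': {
      get: {
        summary: 'Fetch a single order by its ID',
        parameters: [
          { name: 'id', in: 'path', required: true, schema: { type: 'integer' } }
        ],
        responses: {
          '200': {
            description: 'The requested order',
            content: {
              'application/json': {
                example: { id: 42, status: 'shipped', totalCents: 1999 }
              }
            }
          }
        }
      }
    }
  }
};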

4. Automatically generate documentation, client/server stubs, and mocked backends from the API contract.

A lot of time spent designing and creating documentation doesn’t feel very productive, because the documents are stored somewhere and never referenced again. That’s why it’s important to store your API contract in a machine-readable format like OpenAPI or GraphQL Schema: from these formats you can automatically generate a number of extremely useful artifacts, such as documentation, client/server stubs, and mocked backends, that will help your development process move faster.
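
As a rough illustration of the mocked-backend artifact, the sketch below serves the example responses declared in the hypothetical contract from the previous step. Real projects typically generate mocks, stubs, and documentation with off-the-shelf tooling that consumes OpenAPI documents directly rather than hand-rolling a server like this; the sketch only shows the idea that the contract, not handwritten server code, is the source of truth.

// mock-server.js -- a hand-rolled mock backend driven by the contract sketch.
// For illustration only; dedicated OpenAPI tooling does this (and much more) for you.
const express = require('express');
const contract = require('./contract');

const app = express();

// Register one Express route per path and HTTP method in the contract, and
// respond with the example payload declared for that operation.
Object.entries(contract.paths).forEach(([apiPath, operations]) => {
  Object.entries(operations).forEach(([method, operation]) => {
    const expressPath = apiPath.replace(/\{(\w+)\}/g, ':$1'); // /orders/{id} -> /orders/:id
    app[method](expressPath, (req, res) => {
      const example = operation.responses['200'].content['application/json'].example;
      res.json(example);
    });
  });
});

app.listen(3000, () => console.log('Mock API listening on http://localhost:3000'));

Frontend developers can point an in-progress client at a mock like this while the real backend is still being built, which is exactly what the next step relies on.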

5. Build the frontend and backend in parallel based on the API contract

Usually, having frontend and backend development occur at the same time is a slippery slope that leads to a poor-quality product. But this is where the API contract comes into play: because the contract is well defined from the start, it prevents frontend and backend developers from interpreting a loose specification in different ways.

The API contract is basically your North Star. Changes will still occur, but the API contract will keep you on the right track.

If done correctly, your web service will be consistent, reusable, well-documented, and easy for developers to use. 

Don’t Settle for Anything but the Best

Your web service is a valuable resource for your organization, both from a logistical and financial standpoint. If you’re curious to learn more about API-First Design, check out our ebook, API-First Design: How some of the most important work is done before the first line of code is even written, or reach out and chat with a Nerd. We’ll be happy to help!

WWDC 2018: Opening Day. Augmented Reality, Machine Learning, and Siri Shortcuts

WWDC 2018 began this week in San José, CA, with the usual excitement of thousands of developers from all over the world
that could not wait to learn what new shiny objects Apple would unveil for them. The expectant developer community was not
disappointed. Even though the announcements were largely evolutionary and no new hardware of any kind was unveiled,
the enhancements to all four platforms (iOS, macOS, watchOS, tvOS) and the development tools were many, and hint at big
things to come.

Here are some of the more significant announcements.

iOS 12

Performance

I have always appreciated that with most new releases of system software, Apple tries to retain as much compatibility with
older devices as possible, extending their usable life. The newly announced iOS 12 promises to support the same range of
devices as iOS 11, and the system is undergoing a serious performance overhaul that will significantly benefit the older ones.
Now, when the system senses increased CPU demand, it ramps up clock speed faster, leading to more responsive devices, and an earlier
ramp-down that preserves battery life. Figures of 50 to 70% performance improvements were mentioned.

Apps

Several apps will undergo redesigns and enhancements in iOS 12, such as News, Stocks, Voice Memos, and iBooks. One of the more
interesting updates is FaceTime, which will now include Group FaceTime with up to 32 participants.
Group sessions use a vertical stack of video tiles that automatically move and resize to bring the current speaker into focus.
Conveniently, a Messages group text chat can easily become a Group FaceTime call.

The Photos app will boast vastly improved searching based on image content, effects suggestions also based on content, and
easier sharing. If you share with a friend a photo from an event you both attended, the recipient’s device will search for
related pictures from the same event and offer to share them back with you, so that you both can have all available pictures.

Memoji

Your wish has finally come true. iOS 12 will allow you to create an Animoji version of yourself, called a “Memoji”. The camera
looks at you and builds a highly customizable Animoji likeness of you. You can edit your hairstyle and eye color, and
add glasses and hats. And the best part: if you stick your tongue out, the updated Vision framework will recognize it and add
it to your Memoji. The Memoji can be shared with your friends in Messages, or you can use it in place of your real self in
FaceTime.

Do Not Disturb, Notifications, Screen Time

Apple has recognized the concern many of us have that we are spending too much time with our iOS devices. iOS 12 improves the Do Not Disturb functionality by hiding notifications on the home screen to remove all temptation to look at them.

Notifications can now be grouped rather than listed individually, with settings per app to control the grouping behavior. Grouped notifications
will allow you to quickly triage classes of notifications as you search for the most important ones.

Finally, Screen Time will produce reports you can study to determine how you are spending time with your phone. Are you playing
that game too much or spending too much time with that social networking app? You will be able to set time limits for those attention
hogs and get a dismissible block screen when you reach that time limit. Through iCloud Family Sharing, you will also be able to set time
allowances for your children.

macOS Mojave

The WWDC Keynote announced the next version of macOS. macOS 10.14 will move away from the California mountains, and into the desert,
as “macOS Mojave”.

Mojave will include many smaller improvements to the Desktop and Finder:

  • Desktop Stacks. The stacks we have been using on the dock are breaking loose and will help you organize those loose files on your desktop.
  • Gallery View. A new view mode for Finder windows that displays a large preview for a file.
  • Enhanced Quick Look with Quick Actions that let you view and edit a file without launching an app.
  • Easier and more flexible screenshot functionality.
  • Dynamic Desktop will allow you to cycle through different desktop appearances during the day.

More notable improvements include the following.

Continuity Camera

You are probably familiar with the current Continuity feature that allows you to seamlessly transfer tasks from your phone to your computer,
such as editing an email or reading a web site. Continuity Camera will let you take a picture with your phone as you work from
your computer. Say that you are preparing a presentation in Keynote and you need to insert a picture. You can activate your phone
camera from Keynote, take a picture, and it will be easily inserted into your document, without any kind of tedious file exchange process.

Dark Mode

Perhaps the most hyped new feature of macOS Mojave is Dark Mode. It will let you change the appearance of your computer to a, well,
darker mode. It does look gorgeous and if, as a developer, you choose to participate, you can allow your app to switch modes as well.
Dark Mode is opt-in for your application. If you choose to activate the capability, your app will probably not look great right away and
you will have to do some work. If you mostly use standard controls, you will not have much to do. Asset catalogs will help you as
they now include dark mode placeholders for image assets. If you supply these extra images, they will be correctly displayed as the user
switches modes. Custom controls will require your attention to change their tint depending on the current mode.

The announced version 10 of Xcode makes good use of Dark Mode with a very attractive dark interface.

Cross-Platform

Some of the biggest announcements presented at the 2018 WWDC span more than one Apple platform.

Siri Shortcuts

This is perhaps one of the more exciting new features for developers. The machine learning gears within your iOS and macOS devices
will allow Siri to discover some patterns in your user behavior. Let’s say that you have a small morning ritual of opening some
applications and looking at some specific information. Siri will discover this pattern and offer to make a shortcut you can launch
in the future with a phrase of your choosing. After creating the shortcut, you will be able to start your morning by telling
Siri something like: “Start morning routine”. As a developer, you can do the minimum and simply allow Siri to include your app in a
shortcut, or you can actually expose specific parts of your application to Siri and handle more fine-grained shortcut requests.

This is the beginning of making your apps voice-controlled.

Privacy and Security

Recent events in the news have definitely increased our concerns about data security. Apple has responded in several ways:

  • API protections for the use of application data and resources such as the camera and microphone have been extended.
  • Many web sites track a user by analyzing device configuration (fonts installed, plug-ins available, etc.). This “fingerprinting” technique is becoming more difficult in the new versions of Safari for iOS and macOS.
  • Automatic passwords are more secure. The system can now detect repeated use of passwords and offer to replace them. Codes for two-factor authentication are now detected when sent to Messages and automatically added to the appropriate field.

A more important change for developers is “Notarized Apps”. This is an extension of the developer ID and certifies that apps are
trustworthy no matter how they are distributed to the user (e.g. a web-distributed Mac application). In the future Apple will require
that all macOS apps be notarized.

iOS Apps in macOS

Several new apps were announced for macOS Mojave, such as News, Stocks, Voice Memos and Home. This sounds pretty trivial
until you realize that these are all iOS apps. As it turns out, Apple has added parts of the UIKit framework into macOS so that you
can easily port iOS applications to macOS. This is very exciting but the details are still scarce, as the feature will be released
in 2019.

AR and USDZ file format

Apple is clearly excited about Augmented Reality. ARKit 2.0 brings improvements in face tracking, environment texturing, 2D image
detection and tracking, 3D object detection, and the saving and sharing of world maps for persistent and shared AR experiences.

An exciting new feature is the USDZ file format (Universal Scene Description). This file format allows Augmented Reality to be portable.
A USDZ file can, for example, be embedded in a web page so that an object can be examined through AR instead of as a simple picture.

ML and CreateML

The current emphasis in Machine Learning is to make it compelling and easy to use for all developers. CoreML 2.0 is faster, better at
recognizing faces and facial features, and can better detect people so that they can be removed from a picture and transferred to another.
CoreML also supports language recognition.

The new CreateML framework makes it easier to extend and train a model for image or language recognition. You can use Xcode and
Playgrounds to easily extend a model and train it by dragging and dropping data into the playground. The resulting model will only
contain the extension data and will be smaller and more convenient to bundle with your app.

Xcode 10

Finally, Xcode is receiving some well-deserved attention in version 10. Notable enhancements include:

  • For macOS developers, NSGridView is now available in the Interface Builder library.
  • An attractive Dark Mode.
  • Better code completion.
  • New refactoring tools for Swift.
  • Better stability for code editing.
  • Integration with source control to mark changes in code since the last checkout. The changes are marked with a color-coded bar on the gutter.
  • Improved testing with inclusion and exclusion of tests, test order randomization, and parallel testing with different suites.
  • Multi-Cursor Editing. You can now easily select and edit columns of code, or select multiple similar functions to refactor them in an equivalent way simultaneously.

There you have an overview of the most important announcements made during the first day of WWDC 2018. I hope many of them
pique your interest enough to dig into the details in the upcoming session videos. Also keep an eye out for more technically detailed
posts from Big Nerd Ranch during the week.

Developing Alexa Skills Locally with Node.js: Account Linking Using OAuth

Editor’s note: This is the sixth post in our series on developing Alexa skills.

One of the greatest features of Alexa is that it functions as a personal assistant you can interact with without having to physically touch the device. This allows you to get information or accomplish tasks while you are, for example, baking a cake. One of the tasks you could accomplish in such a sticky situation could be to post a tweet about your baking adventures.

From an Alexa developer’s point of view, the task of posting a tweet is a pretty sophisticated operation because the skill needs to authenticate with the user’s Twitter account on the web, then get authorization to access the API in order to make a posting.

From a convenience and security point of view, it would be a terrible idea for the skill to ask for the user’s credentials verbally every time access to the Twitter API is needed. Furthermore, an Alexa-enabled device does not have a way to store these credentials locally, so another approach must be used.

Fortunately, the Alexa Skills Kit features account linking, which lets you access user accounts on other services, Twitter among them, using the OAuth protocol. In this post, we will use account linking and OAuth to grant delegated authority to our Airport Info skill so that it can post an airport’s flight status to a user’s Twitter account. Delegated authority means that the Airport Info skill will be granted permission to post to the user’s Twitter account without ever having access to the actual account credentials.

Note that Alexa uses the OAuth 2.0 protocol, while some services, Twitter among them, still use version 1.0. The differences in implementation are not great: essentially, OAuth 1.0 requires an additional request-token step, which in this exercise will be handled by a separate web application.
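
Purely for illustration, a sketch of that extra request-token step using the node oauth package might look something like the following. This is not the code of the intermediary web application used later in this post (that code lives on GitHub); the package choice, placeholder constants, and callback handling here are assumptions made for the example.

// Illustrative only: the extra OAuth 1.0 step of obtaining a temporary request
// token before the user can authorize the app. OAuth 2.0 flows skip this round trip.
var OAuth = require('oauth').OAuth;

// Placeholder values -- substitute your own Twitter App credentials and callback.
var TWITTER_CONSUMER_KEY = 'YOUR_CONSUMER_KEY';
var TWITTER_CONSUMER_SECRET = 'YOUR_CONSUMER_SECRET';
var CALLBACK_URL = 'https://example.com/oauth/callback';

var oauthClient = new OAuth(
  'https://api.twitter.com/oauth/request_token',
  'https://api.twitter.com/oauth/access_token',
  TWITTER_CONSUMER_KEY,
  TWITTER_CONSUMER_SECRET,
  '1.0A',
  CALLBACK_URL,
  'HMAC-SHA1'
);

oauthClient.getOAuthRequestToken(function(err, requestToken, requestTokenSecret) {
  if (err) {
    return console.error('Could not obtain a request token:', err);
  }
  // Send the user's browser to Twitter's authorization page with the request token.
  var authorizeUrl = 'https://api.twitter.com/oauth/authenticate?oauth_token=' + requestToken;
  console.log('Redirect the user to:', authorizeUrl);
});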

Flight status

Registering Airport Info as a Twitter App

If you haven’t already built an Alexa Skill, check out our previous posts on building Airport Info to get started.

The first step in enabling Twitter delegated authority to the Airport Info skill is to let Twitter know that the skill exists. We must register Airport Info as a Twitter App, so that Twitter knows the skill will later ask for authorization to post on a user’s behalf. To accomplish this, first log in to your Twitter account and visit the Twitter Apps page.

Twitter Apps

Now click on the “Create New App” button to see the application details page. In the “Name” field, enter a descriptive name for the skill. Note that this name needs to be unique across the entire Twitter Apps namespace, for all developers. Therefore, you may need to choose a name other than what is seen in the figure below. Any name collisions will result in the following error message: “The client application failed validation: Name has already been taken.”

In the “Description” field, enter a short string of text describing your Twitter app. It will be displayed when a user grants the skill the authority to post on Twitter.

In the “Website” field, enter a URL that users can visit to get more information about your skill and how it uses Twitter. In this case, we will enter “https://alexa-twitter-airport-info.herokuapp.com/app”.

This link corresponds to a Heroku-hosted web app we created for this post. It serves as a simple OAuth client or intermediary you can use for this experiment without having to host your own application, which could be an AWS Lambda function just like your skill. Note that this application has a very minimal web interface. You can also get the code for this app from GitHub.

Here’s what the “Create an application” page should look like at this point:

Create an application form

The final field is a “Callback URL”. This is the URL that will be loaded after a new user grants delegated authority to Airport Info by authenticating with Twitter. Once again, you will use the alexa-twitter-airport-info OAuth web app described above for this purpose. (The details of what happens when this URL is called back will be explained a little later on in this post.)

Finish the registration by accepting the developer agreement and clicking on the “Create your Twitter application” button. You will be redirected to the details page for your new application. Click on the “Keys and Access Tokens” tab to view the Consumer Key and Consumer Secret for your application.

Copy the Consumer Key (API Key) and Consumer Secret (API Secret) as you will be needing them soon.

Twitter keys

The Consumer Key and the Consumer Secret will be used to authenticate the Airport Info skill when it needs to request authorization to tweet on the user’s behalf.

Finally, click on the permissions tab and set the “Access” type to “Read, Write and Access direct messages”. Click on the “Update Settings” button to save your change.

Twitter app permissions

Configuring the Skill to Use Account Linking

Now that Twitter has been informed that Airport Info may ask for API access, you need to configure the skill to use that privilege. Log in to the Alexa Skills Developer Portal and display your skills list. Assuming that you have already built and staged the Airport Info skill, click on its name on the list to display its settings, and then advance to the “Configuration” stage. Enable account linking by clicking on the “Yes” radio button. After you make that selection, additional fields will appear.

Configure account linking

“Authorization URL” is one of the new fields activated by enabling account linking. This is a crucial bit of information because it is the URL a user will be directed to in order to grant delegated authority to the skill. In this particular case, redirecting to this URL should result in the Twitter log-in being displayed to the user. As the user authenticates with Twitter, the OAuth workflow is triggered, and an access token is created for the user and stored with Amazon.

Once again, you will use the alexa-twitter-airport-info OAuth web app as an intermediary to manage the OAuth workflow. This OAuth web application is designed so that you and any other Alexa developer trying this exercise can use it instead of deploying your own OAuth app. The compromise in achieving this multi-user flexibility is that you have to pass developer-specific information, such as the Twitter App Consumer Key and Consumer Secret, when you call the OAuth web app URL. Even though the URL carrying the Twitter App information will be called over HTTPS, you would not want to do this in a production implementation. In that case, you would keep the Consumer Key, Consumer Secret, and any other OAuth information securely stored on the hosting server.

The “Authorization URL” field should contain the URL to be called on the alexa-twitter-airport-info OAuth web app, with the Twitter App key, secret, and vendor ID passed as query parameters (a snippet for assembling this URL follows the list below).

  • Enter: “https://alexa-twitter-airport-info.herokuapp.com/oauth/request_token?vendor_id=XXXXXX&consumer_key=YYYYYY&consumer_secret=ZZZZZZ”.
  • Replace “XXXXXX” with the vendorId value found on the “Redirect URL” field. The field contains a URL equivalent to: “https://pitangui.amazon.com/spa/skill/account-linking-status.html?vendorId=XXXXXXX”.
  • The “YYYYYY” placeholder corresponds to the Twitter App Consumer Key.
  • The “ZZZZZZ” placeholder corresponds to the Twitter App Consumer Secret.
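
As a convenience, the snippet below shows one way to assemble and URL-encode this value; the placeholder credentials are hypothetical. Encoding matters, because an improperly encoded Authorization URL is a common cause of account-linking failures.

// Hypothetical values -- substitute your Twitter App credentials and your vendor ID.
var vendorId = 'XXXXXX';
var consumerKey = 'YYYYYY';
var consumerSecret = 'ZZZZZZ';

var authorizationUrl = 'https://alexa-twitter-airport-info.herokuapp.com/oauth/request_token' +
  '?vendor_id=' + encodeURIComponent(vendorId) +
  '&consumer_key=' + encodeURIComponent(consumerKey) +
  '&consumer_secret=' + encodeURIComponent(consumerSecret);

// Paste the resulting string into the "Authorization URL" field.
console.log(authorizationUrl);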

In the “Privacy Policy URL” field, you should indicate a web page that would describe your skill’s privacy policy. Enter “https://alexa-twitter-airport-info.herokuapp.com/policy” for this test scenario. Click on the “Save” button to finish configuring the skill.

The Account Linking Process

As you have finished configuring the Airport Info skill for account linking, you are now ready to walk through every step of the process.

As a new Airport Info user enables the skill for an Alexa device, the Alexa App will redirect them to the Authorization URL you specified in the previous section. This URL corresponds to the /oauth/request_token endpoint of the alexa-twitter-airport-info application. Calling this endpoint with the appropriate information (Consumer Key, Consumer Secret, Vendor ID) generates a request for Twitter to return a request token. This request results in a redirection to a Twitter authentication page that asks the user to enter a username and password.

As the authentication is completed, Twitter generates an access token and redirects to the callback URL we specified when configuring the Twitter App. This URL corresponds to the /oauth/callback endpoint of the alexa-twitter-airport-info OAuth web app. This endpoint redirects to the “Redirect URL” specified in the skill’s account linking page with the Twitter access token just generated. This token does not expire and is saved for the particular skill user with Amazon. The token is now available within the Airport Info code as you will see shortly.

The entire account linking process is described in the diagram below.

Alexa skill account linking process

Putting Account Linking to Work: A tweetAirportStatusIntent Handler


As mentioned earlier, the Twitter access token is available within the skill’s code, specifically from the sessionDetails object available from the request passed to a handler function. This will allow you to implement a new intent handler to post information to Twitter.

Open index.js in your Airport Info skill local development project and add the following new handler (here’s more info on implementing the skill):

app.intent('tweetAirportStatusIntent', {
'slots': {
    'AIRPORTCODE': 'FAACODES'
},
// Add  ‘tweet’ to utterances:
'utterances': ['tweet {|delay|status} {|info} {|for} {-|AIRPORTCODE}']
},
function(request, response) {
    var accessToken = request.sessionDetails.accessToken;
    if (accessToken === null) {
        //no token! display card and let user know they need to sign in
    } else {
        //has a token, post the tweet!
    }
});

Notice how the availability of request.sessionDetails.accessToken is tested. If it is not available, the user must be informed that they must link accounts. Update the code as shown below:

function(request, response) {
    var accessToken = request.sessionDetails.accessToken;
    if (accessToken === null) {
        response.linkAccount().shouldEndSession(true).say('Your Twitter account is not linked. ' +
            'Please use the Alexa App to link the account.');
        return true;
    } else {
        //has a token, post the tweet!
    }
});

The response.linkAccount() method displays an appropriate card on the Alexa App to guide the user on how to link accounts. This is the same card displayed when you enable the skill for the first time.

A Twitter Helper Object


The logic for posting airport status information will be handled by a new helper object called TwitterHelper. In turn, this helper object will rely on the Twit Node.js package. To install Twit, open a terminal console and navigate to the airportinfo folder within your alexa-app-server directory used for local development. Once in the airportinfo directory, issue the following command to install Twit:

$ npm install twit --save

Now create a new file in the airportinfo directory called twitter_helper.js and type the code below.

'use strict';
module.change_code = 1;
var _ = require('lodash');
var Twitter = require('twit');
var CONSUMER_KEY = 'XXXXX';
var CONSUMER_SECRET = 'XXXXX';

function TwitterHelper(accessToken) {
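    // Account linking hands back the token as a single "accessToken,accessTokenSecret" string; split it into its two parts.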
    this.accessToken = accessToken.split(',');
    this.client = new Twitter({
        consumer_key: CONSUMER_KEY,
        consumer_secret: CONSUMER_SECRET,
        access_token: this.accessToken[0],
        access_token_secret: this.accessToken[1]
    });
}

TwitterHelper.prototype.postTweet = function(message) {
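    // Post the message to the linked user's timeline via Twitter's statuses/update endpoint.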
    return this.client.post('statuses/update', {
        status: message
    }).catch(function(err) {
          console.log('caught error', err.stack);
        });
};

module.exports = TwitterHelper;

Be sure to replace the “XXXXX” placeholders with your Twitter App Consumer Key and Consumer Secret.

The TwitterHelper() function accepts the access token and creates an instance of the TwitterHelper object with authorization to post. The postTweet() method can then be used to post text to the user’s Twitter account.

Posting Tweets

Open index.js and add the following line of code right below any other require() call.


var TwitterHelper = require('./twitter_helper');

Now you can complete the implementation of the tweetAirportStatusIntent with the code below. Please note the whole intent is listed and the new code is only within the else clause as indicated.

app.intent('tweetAirportStatusIntent', {
'slots': {
    'AIRPORTCODE': 'FAACODES'
},
'utterances': ['tweet {|delay|status} {|info} {|for} {-|AIRPORTCODE}']
},
function(request, response) {
    var accessToken = request.sessionDetails.accessToken;
    if (accessToken === null) {
        response.linkAccount().shouldEndSession(true).say('Your Twitter account is not linked. ' +
            'Please use the Alexa app to link the account.');
        return true;
    } else {

        // New code begins here:
        // I've got a token! make the tweet.
        var twitterHelper = new TwitterHelper(request.sessionDetails.accessToken);
        var faaHelper = new FAADataHelper();
        var airportCode = request.slot('AIRPORTCODE');
        if (_.isEmpty(airportCode)) {
            var prompt = 'I didn\'t have data for an airport code of ' + airportCode;
            response.say(prompt).send();
        } else {
            faaHelper.getAirportStatus(airportCode).then(function(airportStatus) {
                return faaHelper.formatAirportStatus(airportStatus);
            }).then(function(status) {
                return twitterHelper.postTweet(status);
            }).then(
                function(result) {
                    response.say('I\'ve posted the status to your timeline').send();
                }
            );
            return false;
        }
        // New code ends here.
    }
});

Updating the Skill Service Code

Now you need to replace the code in your AWS Lambda function with the code for the Tweeter-enabled version of the Airport Info skill.

Create a new zip archive with all the contents of the airportinfo folder, and then go to the AWS Lambda console. Click on the Airport Info function and then click on the “Upload” button. Select your archive and update the skill.


Updating the Skill Interaction Model

You have just updated the skill code, but you still need to update the skill’s interaction model because you added a new intent and new utterances. Go back to your skills list in the Amazon Developer Console and click on the Airport Info skill. Then advance to the “Skill Interaction” page.

Copy the schema below and paste it to the “Intent Schema” field on the “Interaction Model” page.

{
  "intents": [
    {
      "intent": "tweetAirportStatusIntent",
      "slots": [
        {
          "name": "AIRPORTCODE",
          "type": "FAACODES"
        }
      ]
    },
    {
      "intent": "airportInfoIntent",
      "slots": [
        {
          "name": "AIRPORTCODE",
          "type": "FAACODES"
        }
      ]
    }
  ]
}

Updated Intent Schema

Next, copy the updated utterances list including the tweetAirportStatusIntent and paste it to the “Utterances” field on the “Interaction Model” page.

tweetAirportStatusIntent    tweet {AIRPORTCODE}
tweetAirportStatusIntent    tweet delay {AIRPORTCODE}
tweetAirportStatusIntent    tweet status {AIRPORTCODE}
tweetAirportStatusIntent    tweet info {AIRPORTCODE}
tweetAirportStatusIntent    tweet delay info {AIRPORTCODE}
tweetAirportStatusIntent    tweet status info {AIRPORTCODE}
tweetAirportStatusIntent    tweet for {AIRPORTCODE}
tweetAirportStatusIntent    tweet delay for {AIRPORTCODE}
tweetAirportStatusIntent    tweet status for {AIRPORTCODE}
tweetAirportStatusIntent    tweet info for {AIRPORTCODE}
tweetAirportStatusIntent    tweet delay info for {AIRPORTCODE}
tweetAirportStatusIntent    tweet status info for {AIRPORTCODE}
airportInfoIntent    {AIRPORTCODE}
airportInfoIntent   flight {AIRPORTCODE}
airportInfoIntent   airport {AIRPORTCODE}
airportInfoIntent    delay {AIRPORTCODE}
airportInfoIntent   flight delay {AIRPORTCODE}
airportInfoIntent   airport delay {AIRPORTCODE}
airportInfoIntent    status {AIRPORTCODE}
airportInfoIntent   flight status {AIRPORTCODE}
airportInfoIntent   airport status {AIRPORTCODE}
airportInfoIntent    info {AIRPORTCODE}
airportInfoIntent   flight info {AIRPORTCODE}
airportInfoIntent   airport info {AIRPORTCODE}
airportInfoIntent    delay info {AIRPORTCODE}
airportInfoIntent   flight delay info {AIRPORTCODE}
airportInfoIntent   airport delay info {AIRPORTCODE}
airportInfoIntent    status info {AIRPORTCODE}
airportInfoIntent   flight status info {AIRPORTCODE}
airportInfoIntent   airport status info {AIRPORTCODE}
airportInfoIntent    for {AIRPORTCODE}
airportInfoIntent   flight for {AIRPORTCODE}
airportInfoIntent   airport for {AIRPORTCODE}
airportInfoIntent    delay for {AIRPORTCODE}
airportInfoIntent   flight delay for {AIRPORTCODE}
airportInfoIntent   airport delay for {AIRPORTCODE}
airportInfoIntent    status for {AIRPORTCODE}
airportInfoIntent   flight status for {AIRPORTCODE}
airportInfoIntent   airport status for {AIRPORTCODE}
airportInfoIntent    info for {AIRPORTCODE}
airportInfoIntent   flight info for {AIRPORTCODE}
airportInfoIntent   airport info for {AIRPORTCODE}
airportInfoIntent    delay info for {AIRPORTCODE}
airportInfoIntent   flight delay info for {AIRPORTCODE}
airportInfoIntent   airport delay info for {AIRPORTCODE}
airportInfoIntent    status info for {AIRPORTCODE}
airportInfoIntent   flight status info for {AIRPORTCODE}
airportInfoIntent   airport status info for {AIRPORTCODE}

Updated skill utterances

Click on the “Save” button to finish updating the skill configuration.

Testing Account Linking

To test the account linking experience for a new Airport Info user, go to the Alexa app and search for Airport Info (or whatever your skill is named). The skill should be currently enabled, so you must temporarily disable it by clicking “Disable” and then the “Disable Skill” button. If you are unable to find your skill, make sure that you have enabled testing for it in the “Test” panel of the skill’s Configuration page.

Disabling an Alexa skill

If you now click on “Enable”, you should be redirected to the Twitter login page. After authenticating, you will be asked if you want to authorize Airport Info to use your Twitter account. Click on the “Authorize app” button.

Twitter authorization of an Alexa skill

After authorizing the app, you will be redirected to the skill page and you should see a success message confirming the account linking.

In the event that you did not receive a successful link card, there are a number of things you can validate to determine the root cause of the issue. First, make sure all information in your skill’s Account Linking configuration page is properly set, and that the Authorization URL is properly encoded. Next, make sure your CONSUMER_KEY and CONSUMER_SECRET values are accurately pasted in the Twitter Helper Object you created earlier. Lastly, you can check your CloudWatch Logs for any recent errors reported from your AWS Lambda service. Check out the Amazon docs on accessing CloudWatch logs if you need more info.

Account linking success

Testing the Twitter-Enabled Skill

At this point, you should have your updated skill available on your Alexa-enabled device. Ask Alexa to tweet the status for ATL by saying, “Alexa, ask Airport Info to tweet flight status for ATL”.

If everything is correctly linked, you should see a tweet on your Twitter account.

Successful Alexa skill tweet

What’s Next?

In this series on building an Alexa skill, we’ve covered a lot of ground, from setting up a local development environment to submitting a skill for certification and expanding its power by linking it to other accounts. We have locally tested a skill to determine whether it behaves as expected, and also tested it in the service simulator on the Developer Console. We have even deployed it to an Alexa-enabled device. We also have gone over how to implement persistence in a skill so that users will be able to access saved information. In this final blog in the series, we’ve discussed how to link a skill to accounts for other services and make it even more useful. Let us know what you’ve built in the comments!

Developing Alexa Skills Locally with Node.js: Submitting an Alexa Skill for Certification

Editor’s note: This is the fifth post in our series on developing Alexa skills.

If you are reading this post, it is likely that you have finished writing a shiny new Alexa skill and you are ready to submit it to Amazon for review and publication. In this post, we’ll guide you through the submission process and help you get your skill published as quickly as possible.

Haven’t written your skill yet? Read on to learn about Amazon’s guidelines so that you can have a rapid and successful skill review.

What to Keep in Mind When Designing and Submitting an Alexa Skill for Review

If you want to have your own skill available to Alexa users, you will need to submit your skill to the Alexa Team for certification.

That means that you, as a skill developer, need to follow Amazon’s content and security policies if you wish to have your skill certified for distribution. Amazon offers an official checklist for skill submission, along with policy guidelines and security requirements.

As you might expect, skills with obscene, offensive or illegal content or purposes are terminally frowned upon. What you might not expect is that the content policies do not allow skills targeted to children, as they may compromise a child’s online safety. This is a less evident restriction you should consider when a new skill idea hits you.

Security for the server-side part of your skill is also an important consideration, and it may be tricky if you decide to host the skill yourself outside of AWS Lambda. In that case, your server will need to comply with Amazon’s security requirements. As an example, any certificates for your skill service need to be issued by an Amazon-approved certificate authority.

The good news is that if you host your skill services as Amazon Web Services Lambda functions as we have done in the Developing Alexa Skills blog series, all major security requirements are automatically satisfied.

Preparing for Skill Submission

The steps described from this point on assume that you have fully tested and debugged your skill, and that you will use an AWS Lambda function to deploy it.

If you need help in getting to this point, be sure to check out Josh Skeen’s posts on developing an Alexa Skill. In the series, we built a skill called Airport Info, which keeps users informed about airport delays and conditions. We’ll use Airport Info to go through the steps involved in submitting a skill for Amazon certification.

Getting an Application ID for your Skill

The first steps in getting your skill ready for the public are making sure your skill name is available, and getting a unique identifier for it from Amazon.

Log into your Amazon Developer account and open your skills list. Now click on the “Add a New Skill” button on the upper right corner of the page. On the Create New Skill page, set the “Skill Type” radio button to “Custom Interaction Model”. Set the “Name” to whatever name you wish users to see (in this case it will be “Airport Info”), and the “Invocation Name” to the name users will say to Alexa in order to invoke your skill. For us, it will be “airport info.”

Get the Skill ID

When you are done entering information, click on the “Save” button on the lower left of the page. If all necessary fields have been completed, you will get a green checkmark beside the “Skill Information” cell on the table to the left of the page, and you will get an application ID as well. Copy this ID and save it somewhere for future use. The ID begins with the prefix “amzn1.echo-sdk-ams.app”.

Create a Lambda Function for your Skill

As mentioned before, the easiest way to meet the Amazon security requirements for the server-side component of your skill is to deploy it using AWS Lambda.

To get started, go to the Amazon Web Services console and click on the “Lambda” button. On the next page, click on the blue “Get Started Now” button, which will take you to a page with a series of blueprints for creating different services. Just click on the gray “Skip” button at the bottom of the page to continue.

On the resulting page, enter a name for your Lambda function. In this case it will be “airportinfo”. The “Description” field can have any information that can help you remember what this function does. The runtime selects the language and version used for your skill service. Amazon recommends that you use Node.js 4.3.

Lambda Function Configuration

The next step involves submitting your code. A convenient option is to create an archive with all of the files you created during your local development and upload this archive to the Lambda console. Begin by compressing your skill’s files into a .zip archive, and don’t forget to include the node_modules folder.

Creating a .zip archive

After creating your archive, select the “Upload a .zip File” radio button on the Lambda console, click on the “Upload” button and select your archive.

Uploading a .zip archive

Next, ensure your Handler is set to the default value of “index.handler”.

Lastly, let’s set up your IAM (Identity and Access Management) Role, which allows you to control access to various services and features within AWS. In this scenario, we need to give your Lambda function permission to read and write data to the DynamoDB you’ve set up. To enable this, select “Basic with DynamoDB” from the “Role” dropdown below the default “Handler” field. This will open a new tab, where you should select “Create a new IAM Role” from the “IAM Role” dropdown. Enter a desired name for this new role, such as “lambda_dynamo”, and click “Allow” in the bottom right corner to continue.

If everything went through, you will now see the newly created role in the “Role” field of your Lambda function’s configuration.

Lamba function handler and role

Leave all other settings with their default values and click on the “Next” blue button at the bottom of the page. Note that if you have chosen to use Java, you should enter a value of at least 512 in the “Memory (MB)” field so that the Java runtime stays loaded.

You will see a review page where you can click the blue “Create function” button.

Lamba function review page

Now you need to instruct AWS that your Lambda function will be used as an Alexa skill. This is part of the security configuration, specifying that your function should not be accessed for purposes other than Alexa events. Click on the “Event Sources” tab for your Lambda function and click on the “Add event source” link. Select “Alexa Skills Kit” from the dialog that appears on the screen, and then click on the “Submit” button.

Please note that the “Alexa Skills Kit” option will be available only if you have the correct server location selected. It must be “US East (N. Virginia)”. This setting can be changed on the upper right-hand corner of the Lambda console, beside your login name.

Selecting the ASK event source

On the upper right corner of the page, you should find a unique identifier for your function, beginning with “arn:aws:lambda:”. Make a note of this identifier, as you will need it soon.

Completing the Interaction Model for the Skill

You have now completed the first step required to configure your skill in the Amazon Developer Console. Next we’ll complete the Interaction Model for your skill.

Go back to the skills list on your Amazon Developer Console and select the skill you are deploying. In the table on the left of the page, select “Interaction Model” and enter the Intent Schema. The best place to obtain the intent schema is from the local development test page, as described in our post on implementing an intent in an Alexa skill.

Entering the Intent Schema

If your skill uses any custom slots, you need to list all the possible values that can be accepted. Do this by clicking on the “Add Slot Type” button, then entering the Type and the values.

In the case of Airport Info, the custom slots correspond to all the three-letter FAA airport codes. Within the skill code they are identified as “FAACODES”.

Custom Slot Types

Next, you must enter sample utterances for your skill. These are the phrases that your skill can recognize from the user to provide a response. After entering the utterances, click the “Next” button. The best place to obtain these utterances is from the web interface available when you test your skill locally with alexa-app and alexa-app-server.

Sample Utterances

Linking your Skill to Your Lambda Function

The next step is to configure your skill so that it is linked to the Lambda function you created earlier.

The “Configuration” page on the Amazon Developer Console lets you do that. To complete this step, you will need the ARN (Amazon Resource Name) you copied from the AWS console when creating the AWS Lambda function.

Select “Lambda ARN” for the “Endpoint” field and enter the ARN code. Select whether your users may link the skill to another type of account (e.g., Twitter), then click the “Next” button. Amazon has more info on linking skills to other accounts if you need it.

Account linking configuration

Publishing Information for the Skill

The next step involves providing information that will be used to produce a skill description for users.

The first field is a short skill description (under 160 characters) that will be used for the main list of skills in the Alexa app.

The second field is a longer description, where you should try to convince people to use your skill. Users can read this description when looking at the skill details card on the Alexa App.

Next, there are three fields for example phrases that show users how to interact with your skill. These must be frequent and useful phrases, and they must be 100% correct. Amazon indicates that incorrect phrases are a sure way to have your skill submission rejected. If your skill uses slots, make sure to provide valid values. Each phrase must begin with an explicit “Alexa, …” and each must directly match one of the sample utterances specified earlier.
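
For instance, a hypothetical example phrase and the sample utterance it would need to match could be paired like this:

Example phrase:      “Alexa, ask Airport Info airport status for ATL”
Matching utterance:  airportInfoIntent   airport status for {AIRPORTCODE}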

Description and example phrases fields

In the next field, you will see a popup where you can select the skill’s category. For Airport Info, the best category is “Travel”, but you should find the best match for your skill.

The “Keywords” field lets you type words, separated by commas or whitespace, that will increase the chances of finding your skill when a user launches a search on the Alexa App. Be generous here—you want users to discover your skill.

Your skill also needs some attractive icons. You will need two versions: 108 x 108 pixels and 512 x 512 pixels. These icons can have transparency and they can be PNG or JPG files. Click on the image wells to upload the files.

Finally, the “Testing Instructions” field allows you to pass any useful testing information to the Amazon skill testers. This information is meant to help the skill certification team properly exercise your skill and test every function. You may want to provide some examples, possible slot values and any other information that may be useful.

Publishing information fields

Click on the “Next” button at the bottom right of the page to move to the final step.

Privacy and Compliance for your Alexa Skill

The final step for the skill submission asks whether your skill allows users to make purchases or spend money with your skill. Unless your skill is an extension of a service or product that already has a payment system established, you should choose “No”.

You are also asked if your skill collects a user’s personal information or passwords. (This may be the case if your skill is linked to other accounts.) Depending upon your answers, you may need to provide URLs for Privacy Policy and Terms of Use information.

The final field asks you to certify that your Alexa skill may be imported to or exported to other countries. You must click the checkbox to submit your skill.

Privacy and compliance fields

Finally, you may click on the “Submit for Certification” button. You will see the following page to confirm that the skill certification has been started.

Skill submission confirmation

At this point, you cannot make any changes to your skill, but you may withdraw it if you change your mind or discover serious problems.

You will also get an email from the Alexa Skills Team confirming your submission. The current turnaround time is seven days. If you have questions in the meantime, you can reach out to Amazon.

Submission Results

After we submitted the Airport Info skill for review, it was rejected. That turned out to be good news for this tutorial, because we learned a lot about the process. The Amazon Alexa team sent an email indicating the areas that needed attention.

Particular issues that were pointed out:

  1. The example phrases did not exactly match any sample utterance listed on the skill interaction model.
  2. The skill was not closing the stream after fulfilling a request.
  3. The skill was not responding to “stop” and “cancel” requests.
  4. The skill did not implement user help.

The solution to the first issue included extending the code-generated list of utterances. In the index.js file of the Airport Info skill, the utterances were generated with:

app.intent('airportInfo', {
    'slots': {
      'AIRPORTCODE': 'FAACODES'
    },
    'utterances': ['{|flight|airport} {|delay|status} {|info} {|for} {-|AIRPORTCODE}']
  },

// ...

The code had to be updated to:

app.intent('airportInfo', {
    'slots': {
      'AIRPORTCODE': 'FAACODES'
    },
    'utterances': ['{|flight|airport} {|delay|status} {|info|information} {|for|at} {-|AIRPORTCODE}']
  },

// ...

The second issue required a slight change to the airportInfo intent handler, which originally looked like this:

function(request, response) {
    var airportCode = request.slot('AIRPORTCODE');
    var reprompt = 'Tell me an airport code to get delay information.';
    if (_.isEmpty(airportCode)) {
      var prompt = 'I didn\'t hear an airport code. Tell me an airport code.';
      response.say(prompt).reprompt(reprompt).shouldEndSession(false);
      return true;
    } else {
      var faaHelper = new FAADataHelper();
      faaHelper.requestAirportStatus(airportCode).then(function(airportStatus) {
        response.say(faaHelper.formatAirportStatus(airportStatus)).send();
      }).catch(function(err) {
        var prompt = 'I didn\'t have data for an airport code of ' + airportCode;
        response.say(prompt).reprompt(reprompt).shouldEndSession(false).send();
      });
      return false;
    }
  }

Note how the session is ended now with shouldEndSession(true):

function(request, response) {
    var airportCode = request.slot('AIRPORTCODE');
    var reprompt = 'Tell me an airport code to get delay information.';
    if (_.isEmpty(airportCode)) {
      var prompt = 'I didn\'t hear an airport code. Tell me an airport code.';
      response.say(prompt).reprompt(reprompt).shouldEndSession(false);
      return false;
    } else {
      var faaHelper = new FAADataHelper();
      faaHelper.requestAirportStatus(airportCode).then(function(airportStatus) {
        response.say(faaHelper.formatAirportStatus(airportStatus)).shouldEndSession(true).send();
      }).catch(function(err) {
        var prompt = 'I didn\'t have data for an airport code of ' + airportCode;
        response.say(prompt).reprompt(reprompt).shouldEndSession(true).send();
      });
      return false;
    }
  }

The third issue required the implementation of stop and cancel intents (Amazon has more information on built-in intents):

var exitFunction = function(request, response) {
  var speechOutput = 'Goodbye!';
  response.say(speechOutput);
};

app.intent('AMAZON.StopIntent', exitFunction);
app.intent('AMAZON.CancelIntent', exitFunction);

Finally, a help intent was added to resolve the fourth issue (more information about the help intent can be found in the Amazon docs):

app.intent('AMAZON.HelpIntent', function(request, response) {
  var speechOutput = 'To request information on an airport, request it by its airport code. ' +
    'For example, to get information about Atlanta Hartsfield airport, say airport status for ATL';
  response.say(speechOutput);
});

The Amazon Skills Team’s thorough review allowed us to quickly fix these issues. After we updated the code and uploaded it again to the AWS Lambda function, the skill was once more ready to submit for certification.

Now you’re ready to submit your own skill for certification. Using these tips, it should be a speedy and successful review. Best of luck with your submission!

Error Handling in Swift 2.0

When Apple announced Swift 2.0 at this year’s WWDC, Swift’s main architect, Chris Lattner,
indicated that the 2.0 update to the language focused on three main areas: fundamentals, safety and beautiful code.
Out of the list of new features, improvements, polishes and beautifications, one that may impact your Swift 1.x
code the most is error handling.

The post Error Handling in Swift 2.0 appeared first on Big Nerd Ranch.

]]>

When Apple announced Swift 2.0 at this year’s WWDC, Swift’s main architect, Chris Lattner,
indicated that the 2.0 update to the language focused on three main areas: fundamentals, safety and beautiful code.
Out of the list of new features, improvements, polishes and beautifications, one that may impact your Swift 1.x
code the most is error handling.

That’s because you cannot opt out of it. You must embrace error handling if you want to write Swift 2.0 code, and it will change the way you interact with methods that use NSError in the Cocoa and Cocoa Touch frameworks.

A Bit of History: Humble Beginnings

As we all know, Swift was created as a modern replacement for Objective-C, the lingua franca for writing OS X and iOS applications. In its earliest releases, Objective-C did not have native exception handling. Exception handling was added later through the NSException class and the NS_DURING, NS_HANDLER and NS_ENDHANDLER macros. This scheme is now known as “classic exception handling,” and the macros are based on the setjmp() and longjmp() C functions.

Exception-catching constructs looked as shown below, where any exception thrown within the NS_DURING and NS_HANDLER macros would result in executing the code between the NS_HANDLER and NS_ENDHANDLER macros.

NS_DURING
    // Call a dangerous method or function that raises an exception:
    [obj someRiskyMethod];
NS_HANDLER
    NSLog(@"Oh no!");
    [anotherObj makeItRight];
NS_ENDHANDLER

A quick way to raise an exception is (and this is still available):

- (void)someRiskyMethod
{
    [NSException raise:@"Kablam"
                format:@"This method is not implemented yet. Do not call!"];
}

As you can imagine, this artisanal way of handling exceptions caused a lot of teasing for early Cocoa programmers.
However, those programmers kept their chins high, because they rarely used it. In both Cocoa and Cocoa Touch,
exceptions have been traditionally relegated to mark catastrophic, unrecoverable errors, such as programmer errors. A good
example is the -someRiskyMethod above, which raises an exception because the implementation is not ready. In the Cocoa
and Cocoa Touch frameworks, recoverable errors are handled with the NSError class discussed later.

Native Exception Handling

I guess the teasing arising from the classic exception handling in Objective-C got bothersome enough that Apple released
native exception handling with OS X 10.3, before any iOS version. This was done by essentially grafting C++ exceptions
onto Objective-C. Exception handling constructs now look something like this:

@try {
    [obj someRiskyMethod];
}
@catch (SomeClass *exception) {
    // Handle the error.
    // Can use the exception object to gather information.
}
@catch (SomeOtherClass *exception) {
    // ...
}
@catch (id allTheRest) {
    // ...
}
@finally {
    // Code that is executed whether an exception is thrown or not.
    // Use for cleanup.
}

Native exception handling gives you the opportunity to specify different @catch blocks for each exception type, and
a @finally block for code that needs to execute regardless of the outcome of the @try block.

Even though raising an NSException works as expected with native exception handling, the more explicit way to throw
an exception is with the @throw <expression>; statement. Normally you throw NSException instances, but any object may
be thrown.

NSError

Despite the many advantages of native versus classic exception handling in Objective-C, Cocoa and Cocoa Touch developers still rarely use exceptions, restricting them to unrecoverable programmer errors. Recoverable errors use the NSError class that predates exception handling. The NSError pattern was also inherited by Swift 1.x.

In Swift 1.x, Cocoa and Cocoa Touch methods and functions that may fail return either a boolean false or nil in place of an object to indicate failure. Additionally, an NSErrorPointer is taken as an argument to return specific information about the failure. A classic example:

// A local variable to store an error object if one comes back:
var error: NSError?
// success is a Bool:
let success = someString.writeToURL(someURL,
                                    atomically: true,
                                    encoding: NSUTF8StringEncoding,
                                    error: &error)
if !success {
    // Log information about the error:
    println("Error writing to URL: \(error!)")
}

Programmer errors can be flagged with the Swift Standard Library function fatalError("Error message") to log an error message to the console and terminate execution unconditionally. Also available are the assert(), assertionFailure(), precondition() and preconditionFailure() functions.
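
For instance, these functions might be used as in the following sketch (the function and its checks are purely hypothetical, shown only to illustrate the assertion family):

// A hypothetical function, not part of any framework:
func updateProgress(percent: Double) {
    // Checked only in debug (-Onone) builds:
    assert(percent >= 0, "percent must not be negative")

    // Checked in both debug and release builds:
    precondition(percent <= 100, "percent must not exceed 100")

    if percent.isNaN {
        // Unconditionally logs the message and terminates execution:
        fatalError("percent is not a number")
    }
}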

When Swift was first released, some developers outside of Apple platforms readied the torches and pitchforks. They claimed Swift could not be a “real language” because it lacked exception handling. However, the Cocoa and Cocoa Touch communities stayed calm, as we knew that NSError and NSException were still there. Personally, I believe that Apple was still pondering the right way to implement error/exception handling. I also think that Apple deferred opening the Swift source until the issue was resolved (remember the pitchforks?). All this has been cleared up with the release of Swift 2.0.

Error Handling in Swift 2.0

In Swift 2.0, if you want to throw an error, the value you throw must conform to the ErrorType protocol. As you may have expected, NSError conforms to this protocol. Enumerations are a natural fit for classifying errors.

enum AwfulError: ErrorType {
    case Bad
    case Worse
    case Terrible
}

Then, a function or method is marked with the throws keyword if it may throw one or several errors:

func doDangerousStuff() throws -> SomeObject {
    // If something bad happens throw the error:
    throw AwfulError.Bad

    // If something worse happens, throw another error:
    throw AwfulError.Worse

    // If something terrible happens, you know what to do:
    throw AwfulError.Terrible

    // If you made it here, you can return:
    return SomeObject()
}

In order to catch errors, a new do-catch statement is available:

do {
    let theResult = try obj.doDangerousStuff()
}
catch AwfulError.Bad {
    // Deal with badness.
}
catch AwfulError.Worse {
    // Deal with worseness.
}
catch AwfulError.Terrible {
    // Deal with terribleness.
}
catch {
    // Unexpected error!
}

The do-catch statement has some similarities with switch in the sense that the list of caught errors must be exhaustive and you can use patterns to capture the thrown error. Also notice the use of the keyword try. It is meant to explicitly label a throwing line of code, so that when you read the code you can immediately tell where the danger is.

A variant of the try keyword is try!. That keyword may be appropriate for those programmer errors again. If you mark a throwing call with try!, you are promising the compiler that the error will never actually happen, so you do not need to catch it. If the statement does produce an error, the application will stop execution and you should start debugging.

let theResult = try! obj.doDangerousStuff()

Interacting with the Cocoa and Cocoa Touch Frameworks

The issue now is, how do you deal with grandpa’s NSError API in Swift 2.0? Apple has done a great job of unifying behavior in Swift 2.0, and they have prepared the way for future frameworks written in Swift.

Cocoa and Cocoa Touch methods and functions that could produce an NSError instance have their signature automatically converted to Swift’s new error handling.

For example, this NSString initializer has the following signature in Swift 1.x:

convenience init?(contentsOfFile path: String,
                  encoding enc: UInt,
                  error error: NSErrorPointer)

In Swift 2.0 the signature is converted to:

convenience init(contentsOfFile path: String,
                 encoding enc: UInt) throws

Notice that in Swift 2.0, the initializer is no longer marked as failable, it does not take an NSErrorPointer argument, and it is marked with throws to explicitly indicate potential failures. An example using this new signature:

do {
    let str = try NSString(contentsOfFile: "Foo.bar",
                           encoding: NSUTF8StringEncoding)
}
catch let error as NSError {
    print(error.localizedDescription)
}

Notice how the error is caught and cast as an NSError instance, so that you can access information with its familiar API. As a matter of fact, any ErrorType can be converted to an NSError.
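
For example, here is a brief sketch of that bridging in action (alwaysFails() is a throwaway helper defined just for this illustration, and the exact domain and code values are synthesized by the runtime):

func alwaysFails() throws {
    throw AwfulError.Worse
}

do {
    try alwaysFails()
}
catch let error {
    // Bridge the Swift error to NSError to use its familiar API:
    let nsError = error as NSError
    print(nsError.domain)   // A synthesized domain, something like "MyModule.AwfulError"
    print(nsError.code)     // A synthesized code identifying the particular case
}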

Finally, What about @finally?

Attentive readers may have noticed that Swift 2.0 introduced a new do-catch statement, not a do-catch-finally. How do you specify code that must be run regardless of errors? For that, you now have a defer statement that will delay execution of a block of code until the current scope is exited.

// Some scope:
{
    // Get some resource.

    defer {
        // Release resource.
    }

    // Do things with the resource.
    // Possibly return early if an error occurs.

} // Deferred code is executed at the end of the scope.

Swift 2.0 does a great job of coalescing the history of error handling in Cocoa and Cocoa Touch into a modern idiom that will feel familiar to many programmers. Unifying behavior leaves the Swift language and the frameworks it inherits in a good position to evolve.

The post Error Handling in Swift 2.0 appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/error-handling-in-swift-2-0/feed/ 0
Django and Django-Template Syntax Highlighting for Coda https://bignerdranch.com/blog/django-and-django-template-syntax-highlighting-for-coda/ https://bignerdranch.com/blog/django-and-django-template-syntax-highlighting-for-coda/#respond Fri, 26 Sep 2008 23:18:35 +0000 https://nerdranchighq.wpengine.com/blog/django-and-django-template-syntax-highlighting-for-coda/

I recently started playing with Panic’s web development application, Coda, and I immediately liked its all-in-one approach. As a Django developer, I typically work with multiple text editor, terminal, and browser windows. This can sometimes get out of hand, especially on a laptop. Coda can improve this situation by keeping all these tools and more within a single application.

The post Django and Django-Template Syntax Highlighting for Coda appeared first on Big Nerd Ranch.

]]>

I recently started playing with Panic’s web development application, Coda, and I immediately liked its all-in-one approach. As a Django developer, I typically work with multiple text editor, terminal, and browser windows. This can sometimes get out of hand, especially on a laptop. Coda can improve this situation by keeping all these tools and more within a single application.

All of this looked very promising until I realized that Coda does not offer out-of-the-box Django and Django-Template syntax highlighting or autocompletion. Thus, I decided to write a couple of Mode bundles to improve the situation. You may download these bundles and place them in your /Users/username/Library/Application Support/Coda/modes directory for instant gratification.

I am still developing these bundles to make them smarter, more lexically complete, and more integrated with Coda. Please let me know about your experience with them and I will try to make them better.

Django.mode.zip

Django-Template.mode.zip

The post Django and Django-Template Syntax Highlighting for Coda appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/django-and-django-template-syntax-highlighting-for-coda/feed/ 0
Using the Django newforms Library https://bignerdranch.com/blog/using-the-django-newforms-library/ https://bignerdranch.com/blog/using-the-django-newforms-library/#respond Mon, 19 Feb 2007 20:44:40 +0000 https://nerdranchighq.wpengine.com/blog/using-the-django-newforms-library/

The original Django tools for creating HTML forms and validating user supplied data (forms, manipulators, and validators) are currently being replaced by the newforms library, which is expected to be completed for version 1.0. The newforms library will be a nice change to Django, as it is much more elegant and easier to use than the oldforms library. Unfortunately, the inclusion of the newforms library will be backwards incompatible, so the development team is going to include both libraries in Django 1.0 to ease the transition, and then completely drop oldforms from the framework in later versions.

The post Using the Django newforms Library appeared first on Big Nerd Ranch.

]]>

The original Django tools for creating HTML forms and validating user supplied data (forms, manipulators, and validators) are currently being replaced by the newforms library, which is expected to be completed for version 1.0. The newforms library will be a nice change to Django, as it is much more elegant and easier to use than the oldforms library. Unfortunately, the inclusion of the newforms library will be backwards incompatible, so the development team is going to include both libraries in Django 1.0 to ease the transition, and then completely drop oldforms from the framework in later versions.

Thus, current Django developers are encouraged to embrace the newforms library as soon as possible, and new developers are discouraged from spending time learning the oldforms API altogether. This all sounds great, except that the newforms documentation is far from complete at this time. This article’s goal is to give you enough information so that you can get started using the library now.

If you want to learn all about Django, I’ll be teaching the Django Bootcamp at Big Nerd Ranch, April 2 – 6.

The transition path

At present, the original forms library is what you get with:

from django import forms

If you will be using the newforms library, you are encouraged to import it in the following way:

from django import newforms as forms

so that when the newforms library is renamed to “forms” in the future, you will not have to change your code.

The model

For the examples we will be discussing, we will use the following model class:

from django.db import models

class Item(models.Model):
        STATUS_CHOICES = (
                ('stk', 'In stock'),
                ('bac', 'Back ordered'),
                ('dis', 'Discontinued'),
                ('nav', 'Not available'),
                )
        serial_number = models.CharField(maxlength=15)
        name = models.CharField(maxlength=100)
        description = models.TextField(blank=True)
        date_added = models.DateField(auto_now_add=True)
        date_removed = models.DateField(blank =True, null=True)
        date_backordered = models.DateField(blank=True, null=True)
        comments = models.TextField(blank=True)
        status = models.CharField(maxlength=3, choices=STATUS_CHOICES, default='stk')

The date_added field will automatically set the date when an Item is created, and we have listed some choices for the status field.

Using newforms

One of the neat things about newforms is that you can create them from specific model classes or their instances:

from django import newforms as forms
from yourproject.yourapplication.models import Item

ItemFormClass = forms.models.form_for_model(Item)   # This creates a form *class* for Item
form = ItemFormClass()                              # Then you instantiate the form class

Note how the form_for_model() function created a class for us, and we have to make an instance to use the form. You can look at the HTML generated if you simply print the form in a shell session:

>>> print form
<tr><th><label for="id_serial_number">Serial number:</label></th><td>
     <input id="id_serial_number" type="text" name="serial_number" maxlength="15" /></td></tr>
<tr><th><label for="id_name">Name:</label></th><td>
     <input id="id_name" type="text" name="name" maxlength="100" /></td></tr>
<tr><th><label for="id_description">Description:</label></th><td>
     <textarea name="description" id="id_description"></textarea></td></tr>
<tr><th><label for="id_date_added">Date added:</label></th><td>
     <input type="text" name="date_added" id="id_date_added" /></td></tr>
<tr><th><label for="id_date_removed">Date removed:</label></th><td>
     <input type="text" name="date_removed" id="id_date_removed" /></td></tr>
<tr><th><label for="id_date_backordered">Date backordered:</label></th><td>
     <input type="text" name="date_backordered" id="id_date_backordered" /></td></tr>
<tr><th><label for="id_comments">Comments:</label></th><td>
     <textarea name="comments" id="id_comments"></textarea></td></tr>
<tr><th><label for="id_status">Status:</label></th><td>
     <input id="id_status" type="text" name="status" maxlength="3" /></td></tr>

By default, the form is laid out as a table, and labels and ids are created for each field in the form. This behavior can easily be changed to other layout conventions and tagging schemes; the current Django documentation explains these features quite nicely, so I will skip further details here, apart from a quick example of the alternate renderers after the template listing below. Also note that the form HTML is not embedded within <table></table> and <form></form> tags, and the <input type="submit"> tag is missing. You have to supply those in your own form skeleton. As an example, see the listing of the Add_Item.html template below:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
        <title>Add Item</title>
</head>

<body>
        <form action="." method="post">
                <table>
                        {{ form }}
                </table>
                <input type="submit" value="Add Item" />
        </form>
</body>
</html>
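
As mentioned above, the table layout is only the default rendering. The form class also provides alternate renderers (method names as found in the newforms source at the time of writing):

print form.as_table()   # the default layout used by "print form"
print form.as_p()       # each field wrapped in <p> tags
print form.as_ul()      # each field wrapped in <li> tags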

Now we are in a position to create a new Item entry with the following view:

from django.shortcuts import render_to_response
from django.http import HttpResponse, HttpResponseRedirect, Http404
from yourproject.yourapplication.models import Item
from django import newforms as forms

def add_item(request):
        AddItemFormClass = forms.form_for_model(Item)   # Create the form class

        if request.POST:
                form = AddItemFormClass(request.POST)       # Instantiate and load POST data
                if form.is_valid():                         # Validate data
                        form.save()                             # Add the item
                        return HttpResponseRedirect('/index')
        else:
                form = AddItemFormClass()                   # Instantiate empty form

        return render_to_response('Add_Item.html', {'form': form})

Note how we can initialize an AddItemFormClass instance with POST data or empty. Also, if there are validation errors, the form will automatically be redisplayed with errors listed above the appropriate fields. Finally, saving the form automatically creates a new instance of Item and saves it to the database. Cool!

However, not all is well. The Status field in our form is a text input and we must manually type a choice, such as ‘bac’. We need to change this to a popup menu to select a valid choice with a meaningful tag. Thankfully, we can easily correct this using a widget. Widgets are ways of displaying fields in HTML, and we can change the default widget for any field of our AddItemFormClass:

from django.shortcuts import render_to_response
from django.http import HttpResponse, HttpResponseRedirect, Http404
from yourproject.yourapplication.models import Item
from django import newforms as forms
from django.newforms import widgets

def add_item(request):
        AddItemFormClass = forms.form_for_model(Item)   # Create the form class
        AddItemFormClass.base_fields['status'].widget = widgets.Select(choices=Item.STATUS_CHOICES)

        if request.POST:
                form = AddItemFormClass(request.POST)    # Instantiate and load POST data
                if form.is_valid():                      # Validate data
                        form.save()                          # Add the item
                        return HttpResponseRedirect('/index')
        else:
                form = AddItemFormClass()                # Instantiate empty

        return render_to_response('Add_Item.html', {'form': form})

You can look at django/newforms/widgets.py to find other available widgets.
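
For example, a couple of additional substitutions might look like this (purely illustrative; the attrs values are arbitrary):

# Render the status choices as radio buttons instead of a popup menu:
AddItemFormClass.base_fields['status'].widget = widgets.RadioSelect(choices=Item.STATUS_CHOICES)
# Render the serial number as a shorter text input:
AddItemFormClass.base_fields['serial_number'].widget = widgets.TextInput(attrs={'size': '15'})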

For updates of Item entries, we want to pre-populate the form with data. How do we do that? The answer is to create the form class from an instance, not a model class.

def update_item(request, item_id):
        current_item = Item.objects.get(id=item_id)                # Get the Item instance
        AddItemFormClass = forms.form_for_instance(current_item)   # Create the form class
        AddItemFormClass.base_fields['status'].widget = widgets.Select(choices=Item.STATUS_CHOICES)

        if request.POST:
                form = AddItemFormClass(request.POST)       # Instantiate and load POST data
                if form.is_valid():                         # Validate data
                        form.save()                             # Save the item
                        return HttpResponseRedirect('/index')
        else:
                form = AddItemFormClass()                   # Instantiate empty

        return render_to_response('Add_Item.html', {'form': form})

The two view functions add_item() and update_item() are quite similar, and you would probably prefer to combine them into a single view. For the sake of brevity, that refactoring is left as an exercise, but one possible approach is sketched below.
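
Here is one possible sketch of such a combined view (this exact function is not part of the article's project; the optional item_id argument and the branching are assumptions):

from django.shortcuts import render_to_response
from django.http import HttpResponseRedirect
from yourproject.yourapplication.models import Item
from django import newforms as forms
from django.newforms import widgets

def edit_item(request, item_id=None):
        # A hypothetical combined view: create an Item when item_id is None,
        # otherwise update the existing Item.
        if item_id is None:
                FormClass = forms.form_for_model(Item)
        else:
                current_item = Item.objects.get(id=item_id)
                FormClass = forms.form_for_instance(current_item)
        FormClass.base_fields['status'].widget = widgets.Select(choices=Item.STATUS_CHOICES)

        if request.POST:
                form = FormClass(request.POST)       # Instantiate and load POST data
                if form.is_valid():                  # Validate data
                        form.save()                      # Add or update the item
                        return HttpResponseRedirect('/index')
        else:
                form = FormClass()                   # Instantiate empty

        return render_to_response('Add_Item.html', {'form': form})

The URLconf would then point both the add and update URLs at this single view, supplying item_id only for updates.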

So far, we have easily written two views to add new Item entries to our database and to modify current ones. But this has been an all or nothing proposition up to this point: the form displays all the fields in the model. Frequently you will want to modify only some of the fields and leave others hidden. For example, our date_added field automatically sets the date, and we may not want the user to fiddle with that.

def update_description(request, item_id):
        current_item = Item.objects.get(id=item_id)                # Get the Item instance
        AddItemFormClass = forms.form_for_instance(current_item)   # Create the form class
        AddItemFormClass.base_fields['serial_number'].widget = widgets.HiddenInput()
        AddItemFormClass.base_fields['name'].widget = widgets.HiddenInput()
        AddItemFormClass.base_fields['date_added'].widget = widgets.HiddenInput()
        AddItemFormClass.base_fields['date_removed'].widget = widgets.HiddenInput()
        AddItemFormClass.base_fields['date_backordered'].widget = widgets.HiddenInput()
        AddItemFormClass.base_fields['comments'].widget = widgets.HiddenInput()
        AddItemFormClass.base_fields['status'].widget = widgets.HiddenInput()

        if request.POST:
                form = AddItemFormClass(request.POST)     # Instantiate and load POST data
                if form.is_valid():                       # Validate data
                        form.save()                           # Save the item
                        return HttpResponseRedirect('/index')
        else:
                form = AddItemFormClass()                 # Instantiate empty

        return render_to_response('Add_Item.html', {'form': form})

By using the HiddenInput widget, we have hidden every field in the form except description, which remains editable. Note that the hidden fields are still rendered and submitted with their current values; they are simply not shown to the user.

You can even trick the form into not asking for a required field, and then add the data yourself, programmatically:

def add_item_without_name(request):
        AddItemFormClass = forms.form_for_model(Item)                  # Create the form class
        AddItemFormClass.base_fields['name'].widget = widgets.HiddenInput()   # Hide name field
        AddItemFormClass.base_fields['name'].required = False    # Make name field not required
        AddItemFormClass.base_fields['status'].widget = widgets.Select(choices=Item.STATUS_CHOICES)

        if request.POST:
                form = AddItemFormClass(request.POST)   # Instantiate and load POST data
                if form.is_valid():                     # Validate data
                        newItem = form.save(commit=False)   # Create the item
                        newItem.name = 'To be determined'   # Add the data required by the model
                        newItem.save()
                        return HttpResponseRedirect('/index')
        else:
                form = AddItemFormClass()               # Instantiate empty

        return render_to_response('Add_Item.html', {'form': form})

Even though the name field is required by the model class, we tricked the form into not validating it by setting that field to not required. But if we were to save the new Item at this point, we would have a database error, so we create an Item instance with newItem = form.save(commit=False), where commit=False prevents writing to the DB. After newItem is created, we set a name for it and save the valid Item to the database.

Finally, what if you want to have a completely custom form, not attached to any specific database model? In that case you manually define a form class with any field specifications required:

class CustomForm(forms.Form):
        serial_number = forms.CharField(max_length=15)
        status = forms.CharField(max_length=3, widget=widgets.Select(choices=Item.STATUS_CHOICES))

This custom form will allow us to add an Item to the database with minimal information:

def add_minimal(request):

        if request.POST:
                form = CustomForm(request.POST)     # Instantiate and load POST data
                if form.is_valid():                 # Validate data
                        newItem = Item(serial_number=form.clean_data['serial_number'],
                                                        name='To be determined',
                                                        description='No description',
                                                        comments='No comments',
                                                        status=form.clean_data['status'])
                        newItem.save()

                        return HttpResponseRedirect('/index')
        else:
                form = CustomForm()                 # Instantiate empty

        return render_to_response('Add_Item.html', {'form': form})

In this case, we use the custom form only to validate the user data, and then use that valid data to create a new Item instance and save it in the old-fashioned way. If you want to show some data on the form, you can use the “initial” field argument when defining the CustomForm class, or set it later through the base_fields dictionary (a sketch of the latter approach follows the class definition below).

class CustomForm(forms.Form):
        serial_number = forms.CharField(max_length=15, initial='Acme_wazmo_123')
        status = forms.CharField(max_length=3,
                             widget=widgets.Select(choices=Item.STATUS_CHOICES),
                             initial='stk')
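
Or, for the second approach mentioned above, a brief sketch of setting initial data through base_fields after the class has been defined (the values are arbitrary placeholders):

CustomForm.base_fields['serial_number'].initial = 'Acme_wazmo_123'
CustomForm.base_fields['status'].initial = 'stk'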

I hope this article will provide enough information to get you going with newforms in a useful way. Happy coding.

The post Using the Django newforms Library appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/using-the-django-newforms-library/feed/ 0