Why We’re Excited about KotlinConf
Thu, 26 Oct 2017

Now more than ever, it’s an exciting time to be an Android developer. A huge reason for the excitement: this year, Google announced official support for the Kotlin programming language! This is a natural evolution from Java that finally gives Android developers the modern features they’ve asked for.

This year is also the first year of KotlinConf, JetBrains’s annual conference for sharing with the world all things Kotlin, straight from the architects of the language itself.

We’re especially looking forward to learning about new Kotlin features in the pipeline and hearing how the community is adopting Kotlin in their applications.

We’re thrilled to be in attendance this year – look for David Greenhalgh; he’s the one handing out nerd glasses! If you have an interest in leveling up your Kotlin skills, make sure not to miss it!

KotlinConf banner

Curious to get to know Kotlin better? Our two-day Kotlin Essentials course delivers in spades, while our Android Essentials with Kotlin course will set you on the right path for Android development.

Migrating an Android App from Java to Kotlin
Sun, 18 Jun 2017

Now that Google has announced official support for Kotlin on Android, Kotlin is widely viewed as the first viable alternative to Java on Android.

If you haven’t yet heard of Kotlin, it’s a modern JVM language in use at companies like Pinterest, Trello, Square, Kickstarter and Google, to list just a few.

On Android, Kotlin enables a modern programming experience without requiring third-party workarounds that would exclude large percentages of users (Java 8 support on Android requires a minSdk of 24, which excludes roughly 95% of devices) or that would introduce the risk of using Java 8 language features the Android toolchain doesn’t support.

My colleague David wrote a great introduction to Kotlin programming, and in this series, we’ll learn what Kotlin offers by migrating a 100% Java Android app to a 100% Kotlin Android app. In this first post, we’ll get our project configured to use Kotlin and run through converting our first file.

Starting the Migration

The project we’ll migrate is called StockWatcher. Given a ticker symbol, StockWatcher looks up the current stock price using a REST API. StockWatcher also includes many of the popular patterns and libraries you’re likely to find in a modern (2017) Android and Java app:

  • RxJava 2 for event propagation, background work scheduling, and data manipulation
  • Retrofit for networking
  • Data binding for generating view binding classes
  • Retrolambda to backport a subset of Java 8 features to Java 6, enabling support for older API levels
  • Dagger 2 for dependency injection
  • Gson for JSON parsing and serialization
  • Timber for logging

By the way, I previously wrote about StockWatcher when I showed an RxJava 2 pattern for handling lifecycle configuration changes.

Kotlin IDE Plugin

To get started, we first need to add the Kotlin IDE plugin in Android Studio. To add it, select Android Studio > Preferences > Plugins > Install JetBrains Plugin. Type in “kotlin” and click “Install”. Restart Android Studio before continuing; otherwise, it won’t have loaded your new Kotlin plugin!

Updating Gradle

Next up, we’ll update the project’s build.gradle to support Kotlin. Here’s what I changed to get Kotlin wired up in the legacy project:

build.gradle

buildscript {
+    ext.kotlin_version = '1.1.1'
+    ext.gradle_plugin_version = '2.3.0'
      repositories {
          jcenter()
      }
      dependencies {
-        classpath 'com.android.tools.build:gradle:2.3.0-beta3'
-        classpath 'me.tatarka:gradle-retrolambda:3.4.0'
-        // NOTE: Do not place your application dependencies here; they belong
-        // in the individual module build.gradle files
+        classpath "com.android.tools.build:gradle:$gradle_plugin_version"
+        classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
      }
  }

app/build.gradle

apply plugin: 'com.android.application'
-apply plugin: 'me.tatarka.retrolambda'
+apply plugin: 'kotlin-android'

 android {
     compileSdkVersion 25
     buildToolsVersion "25.0.2"
     defaultConfig {
         applicationId "com.bignerdranch.stockwatcher"
         minSdkVersion 19
     }
-    compileOptions {
-        sourceCompatibility JavaVersion.VERSION_1_8
-        targetCompatibility JavaVersion.VERSION_1_8
-    }
    dataBinding {
        enabled = true
    }
 }

 dependencies {
     compile fileTree(dir: 'libs', include: ['*.jar'])

     //support libraries
     compile 'com.android.support:appcompat-v7:25.1.0'
     compile 'com.android.support:design:25.1.0'
     //dependency injection
     compile 'com.google.dagger:dagger:2.8'
-    annotationProcessor 'com.google.dagger:dagger-compiler:2.8'
+    kapt 'com.google.dagger:dagger-compiler:2.8'
+    kapt "com.android.databinding:compiler:$gradle_plugin_version"
     provided 'org.glassfish:javax.annotation:10.0-b28'
     compile 'com.google.auto.factory:auto-factory:1.0-beta3'
-    //code generation
-    provided 'org.projectlombok:lombok:1.16.12'
-    annotationProcessor 'org.projectlombok:lombok:1.16.12'
     //networking
     compile 'com.jakewharton.retrofit:retrofit2-rxjava2-adapter:1.0.0'
     compile 'io.reactivex.rxjava2:rxandroid:2.0.1'
     compile 'com.squareup.okhttp3:logging-interceptor:3.4.2'
     compile 'com.squareup.retrofit2:converter-gson:2.1.0'
     //logging
     compile 'com.jakewharton.timber:timber:4.3.1'
     //kotlin stdlib
+    compile "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
+}
+
+kapt.generateStubs = true

I noticed several improvements while making the Gradle configuration changes to add Kotlin.

The first is that we get to drop the gradle-retrolambda dependency and compileOptions block, since the Kotlin compiler will generate bytecode that is 100% compatible with the Java bytecode the Android toolchain supports.

Notice that I also removed the Lombok dependency? Lombok makes writing Java easier by generating boilerplate code for you, but at the cost of manipulating your compiled code during the build. I anticipate being able to put Kotlin’s data classes to use in place of what Lombok enabled. Note that removing it now will break the build; I’m biasing toward removing the dependency up front because we’ll remove the Lombok annotations over the next couple of articles in this migration. If you’re following along step by step, you may opt to leave Lombok in so the build keeps working.
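
To sketch what that replacement might look like (this is an illustration of mine, not code from StockWatcher), a Java value class that leaned on Lombok’s @Getter, @Setter, @EqualsAndHashCode and @ToString collapses into a single line of Kotlin:

import java.math.BigDecimal

// Hypothetical stand-in for a Lombok-annotated Java class: the Kotlin compiler
// generates equals(), hashCode(), toString(), and copy() for a data class.
data class StockQuote(val symbol: String, var lastPrice: BigDecimal)

fun main() {
    val quote = StockQuote("BNR", BigDecimal("42.00"))
    println(quote)                  // StockQuote(symbol=BNR, lastPrice=42.00)
    println(quote == quote.copy()) // true: structural equality for free
}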

Also, notice the addition of the kapt statements? The annotationProcessor support built into recent versions of Android Studio (which made android-apt obsolete) didn’t support Kotlin. kapt is therefore needed for the Dagger and data binding annotation processing to work correctly. The generateStubs = true line was also required so that kapt generates the stubs that let generated Java and Kotlin code work together correctly.

Automated Migration Tools

The Android Studio Kotlin plugin ships with a nice feature: an automated conversion tool that will rewrite our Java classes for us, which will serve as a good start. Another advantage Kotlin provides: Java and Kotlin can exist side by side in the same project. We are free to improve the codebase one file at a time, instead of paying upfront to migrate everything at once.

For the first file to auto-migrate, I chose RxFragment.java, the base class for all Fragments in StockWatcher that make use of RxJava 2. To run the migration, select Code > Convert Java File to Kotlin File.

Let’s see what changed:

- public abstract class RxFragment extends Fragment {
+ abstract class RxFragment : Fragment() {
+
-        private static final java.lang.String EXTRA_RX_REQUEST_IN_PROGRESS = "EXTRA_RX_REQUEST_IN_PROGRESS";
-       @Getter
-       @Setter
-       private boolean requestInProgress;
+       private var requestInProgress: Boolean = false
-       private CompositeDisposable compositeDisposable;
+       private var compositeDisposable: CompositeDisposable? = null

-        @Override
-        public void onCreate(@Nullable Bundle savedInstanceState) {
+        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
-           compositeDisposable = new CompositeDisposable();
+           compositeDisposable = CompositeDisposable()
            if (savedInstanceState != null) {
                requestInProgress = savedInstanceState.getBoolean(EXTRA_RX_REQUEST_IN_PROGRESS, false)
            }
        }

        //... the onResume & onPause methods were converted in the same way; omitted here

-        @Override
-        public void onSaveInstanceState(Bundle outState) {
+        override fun onSaveInstanceState(outState: Bundle) {
             super.onSaveInstanceState(outState)
-            outState.putBoolean(EXTRA_RX_REQUEST_IN_PROGRESS, requestInProgress);
+            outState.putBoolean(EXTRA_RX_REQUEST_IN_PROGRESS, requestInProgress)
        }

+       companion object {
+           private val EXTRA_RX_REQUEST_IN_PROGRESS = "EXTRA_RX_REQUEST_IN_PROGRESS"
+       }
}

After the Automatic Migration: What Changed?

I noticed several things about the automatically applied changes to RxFragment.java (running from top to bottom of the changes, roughly):

- public abstract class RxFragment extends Fragment {
+ abstract class RxFragment : Fragment() {
-       private boolean requestInProgress;
+       private var requestInProgress: Boolean = false
  • Usage of the var keyword. As noted in the Properties and Fields page of the language guide, there are two ways to define properties: var for mutable or val for read-only.
  • Removal of a boolean primitive in favor of a Kotlin Boolean object. Kotlin represents all primitive types with objects.
    Note that the Boolean had to be initialized for the class to compile: unlike a Java boolean primitive, which defaults to false, Kotlin provides no default value.
-       private CompositeDisposable compositeDisposable;
+       private var compositeDisposable: CompositeDisposable? = null
  • Note that the ? symbol was attached to CompositeDisposable. To allow a field to be null, you must explicitly declare so with ?.
-      @Override
-      public void onCreate(@Nullable Bundle savedInstanceState) {
+      override fun onCreate(savedInstanceState: Bundle?) {
          super.onCreate(savedInstanceState)
-         compositeDisposable = new CompositeDisposable();
+         compositeDisposable = CompositeDisposable()
  • No more new keyword (or semicolons).
  • The fun keyword for defining functions. This adds specificity, and perhaps clarity: now that the new keyword is gone, a constructor call like CompositeDisposable() could look a lot like an ordinary function call like doStuff(), so marking definitions with fun seems like a nice measure to avoid any confusion.
-    compositeDisposable = new CompositeDisposable();
+    compositeDisposable = CompositeDisposable()
  • Semicolons have been nixed. Kotlin’s linter considers them “redundant” in most cases, but you can still use them if you absolutely must jam multiple statements onto the same line.
-  private static final java.lang.String EXTRA_RX_REQUEST_IN_PROGRESS = "EXTRA_RX_REQUEST_IN_PROGRESS";
+  companion object {
+      private val EXTRA_RX_REQUEST_IN_PROGRESS = "EXTRA_RX_REQUEST_IN_PROGRESS"
+  }
  • The companion object has been added as a replacement for private static final String EXTRA_RX_REQUEST_IN_PROGRESS = "EXTRA_RX_REQUEST_IN_PROGRESS";. Note that there is no static keyword in Kotlin. In the Kotlin world, you have the object keyword instead (but, there is no Object class as you might be familiar with from Java), which is what to use whenever you require a single instance of something—acting very similarly to a Java static. Read up on it here.
    This design choice appears to be inherited from Scala.
  • companion appears to make any values, even those marked private on the companion object, available as if they were local to the class you are using them from. We’ll explore cleaning this up further in the next article, because I think we can do better than what the conversion tool outputs here.
- public abstract class RxFragment extends Fragment {
+ abstract class RxFragment : Fragment() {
  • Support for extending a plain old Java class from a Kotlin class. As you can see, we extended Fragment, a legacy Java class, without any extra effort. This is great for interoperability with legacy Android apps written in Java because it offers a gradual migration path.

Up Next

So, after converting our first file, we’ve seen features of Kotlin that we can put to work for us as we complete the migration. Keep in mind, this is just what we could complete automatically with the Kotlin conversion tool.

Now that the automatic converter has done its part, we’ll improve on its work by hand in the next article, making the result more Kotlin-esque than the converter could manage on its own.
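
As a rough preview (this sketch is mine, not necessarily what the next article will land on; the import paths assume the support-library Fragment), the hand-polished class might initialize the CompositeDisposable at declaration to drop the nullable type, and use const val for the compile-time constant:

import android.os.Bundle
import android.support.v4.app.Fragment
import io.reactivex.disposables.CompositeDisposable

abstract class RxFragment : Fragment() {

    private var requestInProgress = false

    // A val initialized at declaration: no more nullable type or ?. calls
    private val compositeDisposable = CompositeDisposable()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        requestInProgress = savedInstanceState?.getBoolean(EXTRA_RX_REQUEST_IN_PROGRESS, false) ?: false
    }

    override fun onSaveInstanceState(outState: Bundle) {
        super.onSaveInstanceState(outState)
        outState.putBoolean(EXTRA_RX_REQUEST_IN_PROGRESS, requestInProgress)
    }

    companion object {
        // const yields a real compile-time constant instead of a field
        // accessed through a generated getter
        private const val EXTRA_RX_REQUEST_IN_PROGRESS = "EXTRA_RX_REQUEST_IN_PROGRESS"
    }
}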

Spoiler: For those who can’t wait, here’s a sneak peek at the completed StockWatcher Kotlin migration.

Learn more about what Kotlin means for your Android apps. Download our ebook for a deeper look into how this new first-party language will affect your business.

The RxJava Repository Pattern
Thu, 09 Mar 2017

What happens if a network request is made using RxJava & Retrofit, a user rotates the phone, and the request hasn’t completed yet? By default, the Observable will be recreated, the request re-requested, and the previous data lost!

In this article we’ll take a look at a solution to this common problem when using RxJava with Android. I call it the Repository pattern: a pragmatic way of fitting RxJava into the Android lifecycle so that the UI and data layers of your app stay in sync with one another. We’ll solve the problem of how to cache data previously processed by an Observable and replay it when the activity that hosts it is recreated, and we’ll learn it by example via a small sandbox application called Stockwatcher.

Setup

To follow along, you can clone the repository here. One small gotcha—once you open Stockwatcher in Android Studio, make sure you’ve installed the Lombok plugin. Stockwatcher uses Lombok to remove a lot of the boilerplate our plain old Java would otherwise require. The readme for Stockwatcher will guide you through how to install the Lombok plugin if you have never done it before.

Introducing Stockwatcher

Stockwatcher allows wolves of Wall Street to request current stock information for valid symbols, so that they may then make informed stock market decisions. Here, we’re using the Markit On Demand Market Data APIs REST service as a back-end.

Handling Rotation

Now, the central focus of the article: how will we handle a rotation while a request for stock data is in progress? Stockwatcher handles this common RxJava-on-Android problem with ease.

How did Stockwatcher accomplish this? To understand, we first need to take a step back and examine a couple of key classes the project includes. We’ll start by taking a look at how the dependencies have been defined for Stockwatcher. Up first, the AppModule class.

Understanding the Repository Pattern: Dagger 2 Setup

Stockwatcher wires up its dependencies via the Dagger 2 DI framework to keep things easy to manage and test. Dagger 2 introduces two key concepts, Modules and Components: Modules define exactly how the objects in our program should be constructed, and Components define which classes make use of the injected objects.

Check out the AppModule.java class:

@Module
public class AppModule {

    private static final String STOCK_SERVICE_ENDPOINT = "http://dev.markitondemand.com/MODApis/Api/v2/";

    private final Application application;
    private final ServiceConfig serviceConfig;

    AppModule(Application application) {
        this.application = application;
        serviceConfig = new ServiceConfig(STOCK_SERVICE_ENDPOINT);
    }

    @Provides
    @Singleton
    StockDataRepository provideStockDataRepository() {
        StockService stockService = new StockService(serviceConfig);
        return new StockDataRepository(stockService);
    }
}

StockDataRepository

Notice the provideStockDataRepository method? The StockDataRepository class wraps our StockService class (a standard Retrofit service definition), which actually makes the API requests. Note that we mark the repository as a @Singleton so that it can hold onto the results of each request: because of the @Singleton annotation, whenever we request the repository anywhere in the app, we get back the same instance if one has already been created.
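
To round out the Dagger picture, here is a minimal sketch of the matching component, the piece that declares which classes receive the injected objects. The sketch is mine, written in Kotlin, and is not StockWatcher’s actual definition; only AppModule and StockInfoFragment are names taken from this post.

import javax.inject.Singleton
import dagger.Component

// Pairs with AppModule: Dagger generates an implementation that builds the
// @Singleton StockDataRepository and injects it into the targets listed here.
@Singleton
@Component(modules = arrayOf(AppModule::class))
interface AppComponent {
    fun inject(fragment: StockInfoFragment)
}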

Understanding the Repository Pattern

Now that we understand how we’re using Dagger 2 to create a Repository object as a singleton, let’s look at what the Repository object does as it relates to Stockwatcher:

StockDataRepository serves as a bridge between the service and UI layers. We will use it to manage caching of Observables, and of the events individual Observable instances have emitted.

Let’s dig into StockDataRepository to understand how the UI state and data are kept in sync with one another and how the caching works.

A Closer look at StockDataRepository

StockDataRepository’s job is to manage caching results from service requests made by StockService and to hand them back to the Fragment it will be used within. Fragments will request data from the repository (by subscribing to the Observables it manages), and the repository will save the Observable instances so that they can be subscribed to and played back as Android UI changes take place in the Fragment/Activity layer.

Let’s see what the Repository object contains:

public class StockDataRepository extends BaseRepository {

    private static final String CACHE_PREFIX_GET_STOCK_INFO = "stockInfo";
    private static final String CACHE_PREFIX_GET_STOCK_INFO_FOR_SYMBOL = "getStockInfoForSymbol";
    private static final String CACHE_PREFIX_GET_STOCK_SYMBOLS = "lookupStockSymbols";

    private final StockService service;

    public StockDataRepository(StockService service) {
        this.service = service;
    }

    public Observable<StockInfoForSymbol> getStockInfoForSymbol(String symbol) {
        Timber.i("method: %s, symbol: %s", CACHE_PREFIX_GET_STOCK_INFO_FOR_SYMBOL, symbol);
        Observable<StockInfoForSymbol> stockInfoForSymbolObservable = Observable.combineLatest(
                lookupStockSymbol(symbol),
                fetchStockInfoFromSymbol(symbol),
                StockInfoForSymbol::new);
        return cacheObservable(CACHE_PREFIX_GET_STOCK_INFO_FOR_SYMBOL + symbol, stockInfoForSymbolObservable);
    }

    //stock info request, which depends on the first result from lookup stock request
    private Observable<StockInfoResponse> fetchStockInfoFromSymbol(String symbol) {
        return lookupStockSymbol(symbol)
                .map(StockSymbol::getSymbol)
                .flatMap(this::getStockInfo);
    }

    //return a single symbol from the list of symbols, or an error to catch if not.
    private Observable<StockSymbol> lookupStockSymbol(String symbol) {
        return lookupStockSymbols(symbol)
                .doOnNext(stockSymbols -> {
                    if (stockSymbols.isEmpty()) {
                        throw new StockSymbolError(symbol);
                    }
                }).flatMap(Observable::fromIterable).take(1);
    }

    private Observable<List<StockSymbol>> lookupStockSymbols(String symbol) {
        Timber.i("%s, symbol: %s", CACHE_PREFIX_GET_STOCK_SYMBOLS, symbol);
        return cacheObservable(CACHE_PREFIX_GET_STOCK_SYMBOLS + symbol, service.lookupStock(symbol).cache());
    }

    private Observable<StockInfoResponse> getStockInfo(String symbol) {
        Timber.i("method: %s, symbol: %s", CACHE_PREFIX_GET_STOCK_INFO, symbol);
        Observable<StockInfoResponse> observableToCache = service
                .stockInfo(symbol).delay(3, TimeUnit.SECONDS).cache();
        return cacheObservable(CACHE_PREFIX_GET_STOCK_INFO + symbol, observableToCache);
    }

}

There are a few keys to understanding what’s going on in the code above. First, notice there’s only one public method here:

getStockInfoForSymbol(String symbol)

The StockInfoFragment will call this method, which subsequently kicks off two requests: lookupStockSymbols and fetchStockInfoFromSymbol.

With a bit of RxJava magic, we’re able to combine the multiple requests (combineLatest) and verify that the user’s input (the symbol they typed) resolves to an actual stock symbol the Stock API knows about. To understand the Repository’s primary concern, caching, let’s trace one of the two requests the Repository wraps:

private Observable<List<StockSymbol>> lookupStockSymbols(String symbol) {
		Timber.i("%s, symbol: %s", CACHE_PREFIX_GET_STOCK_SYMBOLS, symbol);
		return cacheObservable(CACHE_PREFIX_GET_STOCK_SYMBOLS + symbol, service.lookupStock(symbol).cache());
}

BaseRepository

Note that we return a call to a method named cacheObservable. cacheObservable’s definition lives in the BaseRepository class. Let’s take a look:

abstract class BaseRepository {

    private LruCache<String, Observable<?>> apiObservables = createLruCache();

    @NonNull
    private LruCache<String, Observable<?>> createLruCache() {
        return new LruCache<>(50);
    }

    @SuppressWarnings("unchecked")
    <T> Observable<T> cacheObservable(String symbol, Observable<T> observable) {
        Observable<T> cachedObservable = (Observable<T>) apiObservables.get(symbol);
        if (cachedObservable != null) {
            return cachedObservable;
        }
        cachedObservable = observable;
        updateCache(symbol, cachedObservable);
        return cachedObservable;
    }

    private <T> void updateCache(String stockSymbol, Observable<T> observable) {
        apiObservables.put(stockSymbol, observable);
    }
}

The cacheObservable method is our main interface to the functionality the StockDataRepository is responsible for: keeping an instance of an Observable in a cache and returning it when we ask for it. Instead of beginning anew with a brand-new request, we cache the observable in an LruCache and hand it back, so we can update the UI from the cached observable instead.

return cacheObservable(CACHE_PREFIX_GET_STOCK_SYMBOLS + symbol, service.lookupStock(symbol).cache());

Notice that in the above excerpt from StockDataRepository there are actually two levels of caching going on? One is cacheObservable, which returns a cached observable instance from the LruCache initialized in BaseRepository. The second is the .cache() operator, which instructs that Observable instance to record and then play back the events it has previously emitted. Without the .cache() operator, rotation would work correctly, but we wouldn’t actually replay any of the events that had been emitted during the last subscription.
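
To see concretely what .cache() buys us, here is a small standalone sketch (mine, written in Kotlin; the semantics are identical from Java). Each subscription to the raw Observable re-runs the work, while the cached Observable runs it once and replays the result:

import io.reactivex.Observable
import java.util.concurrent.atomic.AtomicInteger

fun main() {
    val calls = AtomicInteger()
    // Stand-in for a network call: the work runs once per subscription
    val raw = Observable.fromCallable { "response #${calls.incrementAndGet()}" }
    val cached = raw.cache()

    cached.subscribe { println("first subscriber:  $it") }  // performs the work
    cached.subscribe { println("second subscriber: $it") }  // replays the recorded event

    raw.subscribe { println("uncached subscriber: $it") }   // re-runs the work
}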

Wiring it up to the UI

Up next, we’ll take a look at the StockInfoFragment itself, where the request is triggered once the user provides the symbol they would like information for.

public class StockInfoFragment extends RxFragment {

    @Inject
    StockDataRepository stockDataRepository;
    private FragmentStockInfoBinding binding;

    @Override
    public void onCreate(@Nullable Bundle savedInstanceState) {
        StockWatcherApplication.getAppComponent(getActivity()).inject(this);
        super.onCreate(savedInstanceState);
    }

    @Nullable
    @Override
    public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
        super.onCreateView(inflater, container, savedInstanceState);
        binding = DataBindingUtil.inflate(inflater, R.layout.fragment_stock_info, container, false);
        binding.fetchDataButton.setOnClickListener(v -> {
            binding.errorMessage.setVisibility(View.GONE);
            loadRxData();
        });
        return binding.getRoot();
    }

    @Override
    public void loadRxData() {
        Observable.just(binding.tickerSymbol.getText().toString())
                .filter(symbolText -> symbolText.length() > 0)
                .singleOrError().toObservable()
                .flatMap(symbol -> stockDataRepository.getStockInfoForSymbol(symbol))
                .compose(RxUtil.applyUIDefaults(StockInfoFragment.this))
                .subscribe(this::displayStockResults, this::displayErrors);

    }
    private void displayStockResults(StockInfoForSymbol stockInfoForSymbol) {
        binding.stockValue.setText(stockInfoForSymbol.toString());
    }
}

Here, we hand the user input to the Repository object, which then makes the request when the user clicks the button. Notice that all of the requests for data occur within loadRxData()? If we follow this rule, then whenever resubscription is required, we can simply call loadRxData().

RxFragment

Now we’ll look at RxFragment, the superclass for StockInfoFragment. We will use this abstract class as a superclass any time a fragment should work with Observable data from the repository.

public abstract class RxFragment extends Fragment {

    private static final java.lang.String EXTRA_RX_REQUEST_IN_PROGRESS = "EXTRA_RX_REQUEST_IN_PROGRESS";

    @Getter @Setter //Lombok getter/setter generation
    private boolean requestInProgress;

    @Getter @Setter
    private CompositeDisposable compositeDisposable;

    public abstract void loadRxData();

    @Override
    public void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        compositeDisposable = new CompositeDisposable();
        if (savedInstanceState != null) {
            requestInProgress = savedInstanceState.getBoolean(EXTRA_RX_REQUEST_IN_PROGRESS, false);
        }
    }

    @Override
    public void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putBoolean(EXTRA_RX_REQUEST_IN_PROGRESS, requestInProgress);
    }

    @Override
    public void onResume() {
        super.onResume();
        if (isRequestInProgress()) {
            loadRxData();
        }
    }

    @Override
    public void onPause() {
        super.onPause();
        compositeDisposable.clear();
    }
}

Note that we’re persisting the state of the “requestInProgress” boolean via onSaveInstanceState:

    @Override
    public void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putBoolean(EXTRA_RX_REQUEST_IN_PROGRESS, requestInProgress);
    }

This is the key to allowing the Repository object to play back its cached results when a user rotates the device while we’re making a request with RxJava and Retrofit. If isRequestInProgress returns true, loadRxData() is called; loadRxData() then fetches the data from the Repository cache and re-registers to update the UI upon completion.

Understanding the RxUtil class

Now, for the last piece of the puzzle: how did requestInProgress on RxFragment actually get set? Take another look at StockInfoFragment’s loadRxData() method:

@Override
public void loadRxData() {
    Observable.just(binding.tickerSymbol.getText().toString())
            .filter(symbolText -> symbolText.length() > 0)
            .singleOrError().toObservable()
            .flatMap(s -> stockDataRepository.getStockInfoForSymbol(s))
            .compose(RxUtil.applyUIDefaults(StockInfoFragment.this))
            .subscribe(this::displayStockResults, this::displayErrors);

}

Notice the line:

.compose(RxUtil.applyUIDefaults(StockInfoFragment.this))

This is what sets requestInProgress to true when the subscription begins, and back to false upon completion. If you have not discovered Transformers (of the non-Autobot variety) yet, they are a great way to apply a uniform set of changes to Observables in a generic way, so we’ll use them. By the way, if you’re new to Transformers and the compose operator, a good starting point is Dan Lew’s article on what they offer and why you would want to use them: Don’t Break the Chain.

Let’s take a look at the RxUtil class it uses:

public class RxUtil {

    private static final String LOADING_MESSAGE = "Loading";

    public static <T> ObservableTransformer<T, T> applyUIDefaults(RxFragment rxFragment) {
        return upstream -> upstream
                .compose(RxUtil.addToCompositeDisposable(rxFragment))
                .compose(RxUtil.applySchedulers())
                .compose(RxUtil.applyRequestStatus(rxFragment))
                .compose(RxUtil.showLoadingDialog(rxFragment));
    }

    private static <T> ObservableTransformer<T, T> applyRequestStatus(RxFragment rxFragment) {
        return upstream -> upstream.doOnSubscribe(disposable -> rxFragment.setRequestInProgress(true))
                .doOnTerminate(() -> rxFragment.setRequestInProgress(false));
    }

    // Assumed definition: subscribe on the io scheduler and observe on the
    // Android main thread (the conventional pairing)
    private static final ObservableTransformer schedulersTransformer = upstream ->
            upstream.subscribeOn(Schedulers.io()).observeOn(AndroidSchedulers.mainThread());

    @SuppressWarnings("unchecked")
    private static <T> ObservableTransformer<T, T> applySchedulers() {
        return (ObservableTransformer<T, T>) schedulersTransformer;
    }

    private static <T> ObservableTransformer<T, T> addToCompositeDisposable(RxFragment rxFragment) {
        return upstream -> upstream.doOnSubscribe(disposable -> rxFragment.getCompositeDisposable().add(disposable));
    }

    private static <T> ObservableTransformer<T, T> showLoadingDialog(RxFragment rxFragment) {
        return observable -> observable
                .doOnSubscribe(disposable -> DialogUtils.showProgressDialog(rxFragment.getFragmentManager(), LOADING_MESSAGE))
                .doOnTerminate(() -> DialogUtils.hideProgressDialog(rxFragment.getFragmentManager()));
    }
}

Triggering loadRxData()

Notice the applyRequestStatus method? We compose an RxJava transformer onto the Observable to manage the requestInProgress boolean across the request’s lifecycle. Upon subscription, any Observable with applyRequestStatus composed onto it will call setRequestInProgress(true) on the RxFragment it was passed, and upon termination (when the subscription completes or is disposed) it will call setRequestInProgress(false). When the fragment resumes, it uses this value to determine whether loadRxData() should be called again to resubscribe to the Observable.

    @Override
    public void onResume() {
        super.onResume();
        if (isRequestInProgress()) {
            loadRxData();
        }
    }

Since onResume is called as part of the fragment’s normal Android lifecycle, Observable subscriptions will be re-established whenever they’re required. This means rotation is correctly supported: the Observables we created and added to the Repository cache will play back their events to the new subscriber.

The RxJava Repository Pattern, Understood

If you’ve followed the example and understood the Stockwatcher codebase, you’ve now seen an approach that lets RxJava work with device rotation and data caching on Android. You should now be free to worry less about manually handling the edge cases around whether a subscription has completed when the fragment or activity is destroyed and recreated. By caching the Observable in the model layer and fitting Observable subscriptions into loadRxData(), we have a general-purpose solution for fitting Observables into the Android lifecycle.

In the next article, I’ll show a solution to another often-needed yet strangely elusive pattern: how can I test the RxJava- and Retrofit-based service layer of my Android app with mocked API responses? If you’d like to exercise the whole networking stack while serving canned responses in place of the real server API, check back soon!

And, as always, please share your comments, insights, and thoughts about the RxJava Repository pattern. Submit pull requests and get in touch with your questions, code refinements or ideas!

Developing Alexa Skills Locally with Node.js: Implementing Persistence in an Alexa Skill
Thu, 14 Apr 2016

Editor’s note: This is the fourth post in our series on developing Alexa skills.

By now, we’ve made a lot of progress in building our Airport Info skill. We tested the model and verified that the skill service behaves as expected. Then we tested the skill in the simulator and on an Alexa-enabled device. In this post, we’ll implement persistence in a new skill so that users will be able to access information saved from their previous interactions.

We’ll go over how to write Alexa skill data to storage, which is useful in cases where the skill would time out or when the interaction cycle is complete. You can see this at work in skills like the 7-Minute Workout skill, which allows users to keep track of and resume existing workouts, or when users want to resume a previous game in The Wayne Investigation.

Let’s Bake a Cake with CakeBaker

For this experiment, we’ll build upon an existing codebase and improve it. The skill, a cooking assistant called CakeBaker, guides users through baking a cake, step by step. A user interacts with CakeBaker by asking Alexa to start a cake, then advances through the steps of the recipe by saying “next” after each response, like so:

CakeBaker steps

This continues until the user reaches the last step. But what if the skill closes before the user is able to finish a step? By default, Alexa skills close if a user doesn’t respond within 16 seconds. Right now, that means that a user would be forced to start over at the first step, losing the progress made.

Let’s fix that by adding two new intents to our skill, saveCakeIntent and loadCakeIntent, which will allow users to save their current progress to a database and load it back. We’ll also test the database functionality in our local environment using the alexa-app-server and alexa-app libraries we discussed in our post on implementing an intent in an Alexa Skill.

This experiment will use Node.js and alexa-app-server to develop and test the skill locally, so we will need to set up those dependencies first. If you haven’t yet done so, read our posts on setting up a local environment and implementing an intent—they will guide you in setting up a local development environment for this skill, which will involve more advanced requirements.

Let’s get started by downloading the source code for CakeBaker. We’ll be improving this source code so that it supports saving and loading cakes to the database.

To complete the experiment, we’ll need a working installation of alexa-app-server and Node.js. If you haven’t done so, install Node.js and then install alexa-app-server, using the instructions outlined in the linked posts.

Clone CakeBaker by opening a new terminal window and entering the following within the alexa-app-server/examples/apps directory:

git clone https://github.com/bignerdranch/alexa-cakebaker

Change directories into alexa-app-server/examples/apps/alexa-cakebaker and run the following command:

npm install

This will fetch the dependencies the project requires in order to work correctly.

DynamoDB

The database we will use to store the state of the cake is Amazon’s DynamoDB, a NoSQL-style database that will ultimately live in the cloud on Amazon’s servers. To facilitate testing, we’ll install a local instance of DynamoDB. We will use the brew package manager to add DynamoDB to our local development environment.

Install Homebrew if you haven’t already done so:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Once this command completes, install a local version of DynamoDB via Homebrew:

brew install dynamodb-local

On Windows? Follow these steps.

When the brew command completes, open a new tab in your terminal and run the following command:

dynamodb-local -sharedDb -port 4000

You should see something similar to the following:

Initializing DynamoDB Local with the following configuration:
Port:   4000
InMemory:   false
DbPath: null
SharedDb:   true
shouldDelayTransientStatuses:   false
CorsParams: *

Now we can begin developing our database functionality and testing database behavior in our local environment. Leave the tab open while you work.

Adding the new Intents

At this point, the CakeBaker skill is cloned locally and our test database instance is set up, so we’re ready to begin adding the save and load features. In order to implement them, we need two new intents for these actions: saveCakeIntent and loadCakeIntent. Let’s begin by adding the intent definitions to the bottom of the index.js file.

One for saving the cake:

skillService.intent('saveCakeIntent', {
    'utterances': ['{save} {|a|the|my} cake']
  },
  function(request, response) {
  //code goes here!
  }
);

And one for loading the cake:

skillService.intent('loadCakeIntent', {
    'utterances': ['{load|resume} {|a|the} {|last} cake']
  },
  function(request, response) {
  //code goes here!
  }
);

Implementing the Save Command

Here’s a diagram of how the save command will work:

CakeBaker save steps

In the diagram, the user’s utterance is resolved to the saveCakeIntent and then processed by the skill service. The skill service saves the cake data to the database, and once this operation completes, the service responds to the skill interface, indicating that the write to the database succeeded.

The CakeBaker source code we checked out contains a helper called database_helper.js. Open this file, and you should see the following:

'use strict';
module.change_code = 1;
var _ = require('lodash');
var CAKEBAKER_DATA_TABLE_NAME = 'cakeBakerData';
var dynasty = require('dynasty')({});

function CakeBakerHelper() {}
var cakeBakerTable = function() {
  return dynasty.table(CAKEBAKER_DATA_TABLE_NAME);
};

CakeBakerHelper.prototype.createCakeBakerTable = function() {
  return dynasty.describe(CAKEBAKER_DATA_TABLE_NAME)
    .catch(function(error) {
      return dynasty.create(CAKEBAKER_DATA_TABLE_NAME, {
        key_schema: {
          hash: ['userId',
            'string'
          ]
        }
      });
    });
};

CakeBakerHelper.prototype.storeCakeBakerData = function(userId, cakeBakerData) {
  return cakeBakerTable().insert({
    userId: userId,
    data: cakeBakerData
  }).catch(function(error) {
    console.log(error);
  });
};

CakeBakerHelper.prototype.readCakeBakerData = function(userId) {
  return cakeBakerTable().find(userId)
    .then(function(result) {
      return result;
    })
    .catch(function(error) {
      console.log(error);
    });
};

module.exports = CakeBakerHelper;

This file contains the basic logic for creating a new DynamoDB table, which we’ll name cakeBakerData, to write cake data to. It also contains methods for reading and writing the cake data to the DynamoDB instance.

Our first task, saving a cake, will be aided by the storeCakeBakerData method the helper contains. Notice storeCakeBakerData’s signature: it expects a userId and cakeBakerData. The userId is a unique identifier provided by the Alexa service when a user enables a skill. We will pull the userId from the request our service receives from the skill interface; it uniquely identifies the Alexa account the skill is attached to, so the skill can keep track of data for different users. It is also the key we will use to look up a user’s cakeBakerData in the database.
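
To make that schema concrete, a stored item might look roughly like this (an illustration of mine, not a capture from a real table; the userId value is fabricated, and data holds the stringified cake state we pass in):

{
  "userId": "amzn1.ask.account.EXAMPLE_USER_ID",
  "data": "{\"started\":true,\"currentStep\":2,\"steps\":[\"...\"]}"
}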

The helper also makes use of Dynasty, an open-source library for interacting with the DynamoDB instance. Because we are developing locally, the first code change we will make is to the connection settings for the Dynasty object.

For testing locally, we will use our local machine’s DynamoDB instance. In order to do that, we need to edit the database_helper.js file and comment out the line:

//var dynasty = require('dynasty')({});

and add:

//var dynasty = require('dynasty')({});
var localUrl = 'http://localhost:4000';
var localCredentials = {
  region: 'us-east-1',
  accessKeyId: 'fake',
  secretAccessKey: 'fake'
};
var localDynasty = require('dynasty')(localCredentials, localUrl);
var dynasty = localDynasty;

This will enable us to test against the local DynamoDB instance we started in the terminal using port 4000.

Creating the Cake Table

Before we can save or read cake data from DynamoDB, we first need to ask DynamoDB to create a table to store it in. We can use a helpful feature of alexa-app called a “pre” hook, which executes before any of the skill’s intent handlers.

Open the index.js file in the alexa-cakebaker folder and add the following at line 9, right below var databaseHelper = new DatabaseHelper();:

skillService.pre = function(request, response, type) {
  databaseHelper.createCakeBakerTable();
};

This will execute before any intent is handled. If the table doesn’t exist yet, the dynasty.describe call returns an error, and the catch handler in the DatabaseHelper class responds by creating the table; if the table already exists, describe succeeds and no create is attempted.

Saving the Cake

Let’s implement a saveCake function at the bottom of the index.js file:

var saveCake = function(cakeBakerHelper, request) {
  var userId = request.userId;
  databaseHelper.storeCakeBakerData(userId, JSON.stringify(cakeBakerHelper))
    .then(function(result) {
      return result;
    }).catch(function(error) {
      console.log(error);
    });
};

The function pulls the userId from the request and passes it, along with a stringified version of the cake data, to storeCakeBakerData to be written to the database.

Now we’ll put the saveCake method to use. Update the saveCakeIntent intent handler we defined earlier in the index.js file:

skillService.intent('saveCakeIntent', {
    'utterances': ['{save} {|a|the|my} cake']
  },
  function(request, response) {
    var cakeBakerHelper = getCakeBakerHelperFromRequest(request);
    saveCake(cakeBakerHelper, request);
    response.say('Your cake progress has been saved!');
    response.shouldEndSession(true).send();
    return false;
  }
);

Perfect! This should write the cake’s progress to the database when a user explicitly requests it from the skill.

We also need to update the advanceStepIntent to make use of the saveCake method. When a user requests “next,” the cake should be saved implicitly, to avoid any lost progress due to a timeout or the skill’s request cycle ending.

Update the advanceStepIntent to call saveCake, just after the cakeBakerHelper’s currentStep is incremented:

skillService.intent('advanceStepIntent', {
    'utterances': ['{next|advance|continue}']
  },
  function(request, response) {
    var cakeBakerHelper = getCakeBakerHelperFromRequest(request);
    cakeBakerHelper.currentStep++;
    saveCake(cakeBakerHelper, request);
    cakeBakerIntentFunction(cakeBakerHelper, request, response);
  }
);

Loading the Cake

A user should be able to load the cake after the skill has exited. Once the cake is loaded, the skill should pick back up at the step the user left off on, eliminating the pain of starting over from the beginning.

To enable this, we will have the skill read the cake data from the database, looking it up by userId, and set up the CakeBakerHelper object from the persisted state. Then we’ll call cakeBakerIntentFunction to generate the response to send to Alexa. Edit the index.js file and replace the loadCakeIntent intent with the following:

skillService.intent('loadCakeIntent', {
    'utterances': ['{load|resume} {|a|the} {|last} cake']
  },
  function(request, response) {
    var userId = request.userId;
    databaseHelper.readCakeBakerData(userId).then(function(result) {
      return (result === undefined ? {} : JSON.parse(result['data']));
    }).then(function(loadedCakeBakerData) {
      var cakeBakerHelper = new CakeBakerHelper(loadedCakeBakerData);
      return cakeBakerIntentFunction(cakeBakerHelper, request, response);
    });
    return false;
  }
);

Testing that it Works

Now we can test that the new functionality works against the local database. First, let’s start the alexa-app-server. Change to the alexa-app-server/examples directory and run the local development server:

node server

Now, visit the test page at http://localhost:8080/alexa/cakebaker. We want to mimic a cake that has advanced several steps, so we’ll send several requests to the server. Configure the Type to IntentRequest and the Intent to cakeBakeIntent, and hit “Send Request”. This should start a new cake.

CakeBaker intent request

Next, change the Intent to advanceStepIntent and hit “Send Request”—this mimics a user saying “next” in order to move the recipe along to the next step. Hit “Send Request” three more times. In the response area of the test page, you should see:

"response": {
    "shouldEndSession": false,
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>Beat 2 sticks butter and 1 and 1/2 cups sugar in a large bowl with a mixer on medium-high speed until light and fluffy, about 3 minutes. to hear the next step, say next</speak>"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "SSML",
        "ssml": "<speak>I didn’t hear you. to hear the next step, say next</speak>"
      }
    }
  },

Great! Now we can test that saving to the database works. Switch the Intent to saveCakeIntent and click “Send Request”. You should see the following in the response area:

{
  "version": "1.0",
  "sessionAttributes": {
    "cake_baker": {
      "started": false,
      "currentStep": 4,
      "steps": [
        //removed for brevity
              ]
    }
  },
 "response": {
    "shouldEndSession": true,
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>Your cake progress has been saved!</speak>"
    }
  }
}

Our cake has now been saved to the database! To verify that loading works, reload the test page, then set the Intent to loadCakeIntent and click “Send Request”. This mimics a user saying, “Alexa, ask Cake Baker to load the cake.”

The response should pick up where the user left off with the fourth step in the cake recipe.

{
  "version": "1.0",
  "sessionAttributes": {
    "cake_baker": {
      "started": false,
      "currentStep": 4,
      "steps": [
        //removed for brevity
              ]
    }
  },
  "response": {
    "shouldEndSession": false,
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>Beat 2 sticks butter and 1 and 1/2 cups sugar in a large bowl with a mixer on medium-high speed until light and fluffy, about 3 minutes. to hear the next step, say next</speak>"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "SSML",
        "ssml": "<speak>I didn’t hear you. to hear the next step, say next</speak>"
      }
    }
  },
  "dummy": "text"
}

Going Live

Now that we’ve tested the skill locally, let’s deploy it live! Fortunately, because DynamoDB is already wired to work easily with an AWS Lambda skill, we will have to do very little to deploy.

First, let’s switch database_helper.js back to its production configuration. Open database_helper.js and uncomment:

var dynasty = require('dynasty')({});

Then comment out the local development configuration we added. The top of our database_helper.js file should look like this:

'use strict';
module.change_code = 1;
var _ = require('lodash');
var CAKEBAKER_DATA_TABLE_NAME = 'cakeBakerData';
var dynasty = require('dynasty')({});
// var localUrl = 'http://localhost:4000';
// var localCredentials = {
//   region: 'us-east-1',
//   accessKeyId: 'fake',
//   secretAccessKey: 'fake'
// };
...

Now, before we run through the usual Alexa Skill deployment process, we need to configure DynamoDB on AWS. Visit https://console.aws.amazon.com/dynamodb/home and create a new table. For the table name, enter “cakeBakerData”, and for primary key enter “userId”. Finally, click “Create”.

Configuring DynamoDB on AWS

Next, we will follow the usual Alexa Skill deployment process—but with two big differences. First, we will run through setting up the skill service on AWS Lambda. Visit the Lambda dashboard and click “Create Lambda Function”. Click “skip” on the resulting page.

Zip the files within the cakebaker directory and click the “Upload a ZIP file” option in the Lambda configuration, keeping in mind that index.js must be at the top level of the archive. Click “Upload” and select the archive you created.

  • In the “Name” field, enter “cakeBaker”.
  • In the “Runtime” field, be sure to select the Node.js option.

It’s important to note that the Lambda function handler and role selection are different here than they are in an AWS Lambda skill without a database. Rather than “Basic Execution Role”, select “Basic with DynamoDB”. This will redirect to a new screen, where you should click “Allow”. This step allows our AWS Lambda-backed skill service to use a DynamoDB datastore on our AWS Account.

Here is what your configuration should now look like:

CakeBaker configuration

Click “Next” and then “Create Function”.

Note the long “ARN” at the top right of the page. This is the Amazon Resource Name, and it will look something like arn:aws:lambda:us-east-1:333333289684:function:myFunction. You will need it when setting up the skill interface, so be sure to copy it from your AWS Lambda function.

CakeBaker Amazon Resource Name

Finally, click on the “Event sources” tab and click “Add event source”. Select “Alexa Skills Kit” in the Event Source Type dropdown and hit “Submit”.

CakeBaker Event sources tab

Setting up the Skill Interface

Next, we’ll set up the skill interface. Visit the Amazon Developer Console skills panel and click “Add a New Skill”. In the Skill Information tab, enter “Cake Baker” for the “Name” and “Invocation Name” fields. Leave “Custom Interaction Model” selected for the Skill Type.

CakeBaker skill interface

Click “Next”.

Now we need to set up the interaction model. Copy the intent schema and utterances from the alexa-app-server test page into the respective fields.

For the “Intent Schema” field, use:

{
  "intents": [
    {
      "intent": "advanceStepIntent",
      "slots": []
    },
    {
      "intent": "repeatStepIntent",
      "slots": []
    },
    {
      "intent": "cakeBakeIntent",
      "slots": []
    },
    {
      "intent": "loadCakeIntent",
      "slots": []
    },
    {
      "intent": "saveCakeIntent",
      "slots": []
    }
  ]
}

and for the “Sample Utterances” field, use:

advanceStepIntent   next
advanceStepIntent   advance
advanceStepIntent   continue
cakeBakeIntent  new cake
cakeBakeIntent  start cake
cakeBakeIntent  create cake
cakeBakeIntent  begin cake
cakeBakeIntent  build cake
cakeBakeIntent  new a cake
cakeBakeIntent  start a cake
cakeBakeIntent  create a cake
cakeBakeIntent  begin a cake
cakeBakeIntent  build a cake
cakeBakeIntent  new the cake
cakeBakeIntent  start the cake
cakeBakeIntent  create the cake
cakeBakeIntent  begin the cake
cakeBakeIntent  build the cake
loadCakeIntent  load cake
loadCakeIntent  resume cake
loadCakeIntent  load a cake
loadCakeIntent  resume a cake
loadCakeIntent  load the cake
loadCakeIntent  resume the cake
loadCakeIntent  load last cake
loadCakeIntent  resume last cake
loadCakeIntent  load a last cake
loadCakeIntent  resume a last cake
loadCakeIntent  load the last cake
loadCakeIntent  resume the last cake
saveCakeIntent  save cake
saveCakeIntent  save a cake
saveCakeIntent  save the cake
saveCakeIntent  save my cake

CakeBaker intent schema utterances

Click “Next”.

On the Configuration page, select “Lambda ARN (Amazon Resource Name)” and enter the ARN you copied when you set up the Lambda endpoint. Click “Next”. You can now test that the skill behaves as it did in local development. If you have an Alexa-enabled device registered to your developer account, you can now test the save and load functionality with the device. Amazon has more information on registering an Alexa-enabled device for testing, if you’re not familiar with the process.

Try the following commands, either in the test page or against a real device: “Alexa, ask Cake Baker to bake a cake”, “next”, “next”, and “Save Cake”.

Wait for a moment while the skill times out, and then say, “Alexa, ask Cake Baker to load the cake”. The skill should pick up where we left off, on the third step of Cake Baker.

Congratulations; you’ve implemented basic persistence in an Alexa Skill! In the next post, we’ll cover submitting your custom Alexa skills for certification so that they can be used by anybody with an Alexa-enabled device.

Developing Alexa Skills Locally with Node.js: Deploying Your Skill to Staging
Thu, 31 Mar 2016

The post Developing Alexa Skills Locally with Node.js: Deploying Your Skill to Staging appeared first on Big Nerd Ranch.

]]>

Editor’s note: This is the third in a series of posts about developing Alexa skills. Read the rest of the posts in this series to learn how to build and deploy an Alexa skill.

Now that we have tested the model for our Airport Info Alexa Skill and verified that the skill service behaves as expected, it’s time to move from the local development environment to staging, where we’ll be able to test the skill in the simulator and on an Alexa-enabled device.

What’s Next in Order to Deploy an Alexa Skill

To deploy our Alexa skill to the staging environment, we first need to register the skill with the skill interface, then configure the skill interface’s interaction model. We’ll also need to configure an AWS Lambda instance that will run the skill service we developed locally.

The Alexa skill interface is what’s responsible for resolving utterances (words a user spoke) to intents (events our skill service receives) so that Alexa can correctly respond to what a user has asked. For example, when we ask our Airport Info skill to give status information for the airport code of Atlanta, Georgia (ATL), the skill interface determines that the AirportInfo intent matches the words that were spoken aloud, and that ATL is the airport code a user would like information about.

Here’s what the journey from a user’s spoken words to Alexa’s response looks like:

Alexa response journey

In our post on implementing Alexa intents, we simulated the skill interface with alexa-app-server so that we could test our skill locally. We sent a mock event to the skill service from alexa-app-server by selecting IntentRequest with an intent value of airportInfo and an AIRPORTCODE of ATL in the Alexa Tester interface.

By comparison, in a deployed skill, the skill interface lives on Amazon’s servers and works with users’ utterances that are sent from Alexa to the skill service.

Setting up AWS Lambda

The first step in deploying our Alexa skill to staging is getting the skill service uploaded to AWS Lambda, a compute service that runs the code on our behalf using AWS infrastructure. AWS Lambda will accept a zipped archive of the skill service, so let’s create that now. Go to the AirportInfo directory we created in the last post and zip everything within the AirportInfo folder.

How to archive the skill service
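If you prefer the command line, one way to create an archive with everything at the root (assuming the standard zip utility; the archive name here is just an example) is to run the command from inside the folder:

cd AirportInfo && zip -r ../airport-info.zip .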

Log into AWS Lambda. You must be logged in on US-East (N. Virginia) in order to access the Alexa Service from AWS Lambda. To switch your location, simply click the location displayed next to your name in the top right of the AWS Console.

Click “Create a Lambda Function”. On the blueprint selection screen, click “skip”. Now we can configure the Lambda function that will host our skill service by filling out the empty fields here:

  • For Name, enter “airportInfoService”
  • For Runtime, select “Node.js”
  • For Role, select “lambda_basic_execution”

Finally, select “upload a ZIP file” under “Lambda function code” and choose the archive file you created. You should wind up with a completed form that looks like this:

Completed AWS Lambda form

Click “Next”, and then click “Create function”. This takes us to a screen where we’ll configure one last detail: the event source.

Click on the “Event source” tab and click “Add event source”. Select “Alexa Skills Kit” for the Event Source Type and click “Submit”.

Add event source

Make sure to copy the “ARN” at the top right of the page. This is the Amazon Resource Name, and the skill interface we’re going to configure next needs that value to know where to send events. The ARN will look something like arn:aws:lambda:us-east-1:333333289684:function:myFunction.

We can test that the Lambda function works correctly by sending it a test event within the Lambda interface. Click the blue “Test” button at the right of the screen, and under “Sample Event Template” select “Alexa Start Session”. Click “Save and Test”.

Input test event

You should see text under the “Execution result” area resembling the following:

{
  "version": "1.0",
  "sessionAttributes": {},
  "response": {
    "shouldEndSession": false,
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>For delay information, tell me an Airport code.</speak>"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "SSML",
        "ssml": "<speak>For delay information, tell me an Airport code.</speak>"
      }
    }
  }
}

If you aren’t receiving this response, there are a few things you can check. Verify that:

  • the zip archive you’ve uploaded has all necessary files at the root of the archive, especially index.js.
  • there are no syntax errors in your scripts. Not sure? Grab our complete project from Github!
  • your region is set to US-East (N. Virginia).
  • your event source is properly set to Alexa Skills Kit.

At this point, the skill service we wrote is live. Now we can move on to getting the skill interface set up.

Configuring the Skill Interface

The AWS Lambda instance is live and we’ve got its ARN value copied, so now we can start configuring the skill interface to do its work: resolving a user’s spoken utterances to events our service can process. Go to the Alexa Skills Kit Portal and click “Add a New Skill”. Enter “Airport Info” for “Name” and “Invocation Name”.

The value for “Invocation Name” is what a user will say to Alexa to trigger the skill. Paste the ARN you copied in the Endpoint box (make sure the “Lambda ARN” button is selected). The Skill Information page should look like this:

Alexa skill information page

Click “Next” to proceed to the Interaction Model page.

Setting up the Interaction Model

We’ve already got the values needed for the Interaction Model page, thanks to the work we did in our post on using alexa-app and alexa-app-server.

Let’s refer back to the local test page for the skill we wrote earlier, at http://localhost:8080/alexa/airportinfo. (If you stopped the service locally earlier, run node server in the alexa-app-server examples directory.)

Alexa interaction model page

Copy the values from the “Schema” and “Utterances” fields on the local server test page and paste them into the respective fields on the “Interaction Model” page, like so:

Alexa schema and utterances

Defining the Custom Slot Type

Our skill also uses a custom slot type called FAACODES that we need to define on the Interaction Model page. A custom slot type teaches the skill interface how to recognize what a user has said so that it can be sent to the skill service as a variable.

Remember when we tested locally and provided “ATL” as a value for the airportInfoIntent’s AIRPORTCODE slot? In a real skill, the skill interface has to figure out what the user said in order to get this value—and the custom slot type definition in the Interaction Model makes it possible.

To create the custom slot type, click “Add Slot Type” and input FAACODES under “Enter Type”. You can download our list of all of the FAA Airport Codes, then paste those values into the “Enter Values” box and click “OK”.
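Each value goes on its own line. As a purely illustrative sample (the full list you download will be much longer), the first few entries look like this:

ATL
BOS
JFK
LAX
SFO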

Now FAACODES is a real Custom Slot Type that the skill interface can work with!

Testing the Skill in Staging

Now, let’s advance to the “Test” page in the skill interface so that we can verify that the deployed skill works correctly. Under the “Service Simulator” section of the page, enter “about status info for ATL”. Here, typed text stands in for a user’s spoken words, and the test verifies that resolution between the slot type and the sample utterances takes place as expected.

Testing the Alexa skill in staging

Click “Ask Airport Info”. If everything works as planned, we get the following in the “Lambda Response” area:

{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>There is currently no delay at Hartsfield-Jackson Atlanta International. The current weather conditions are Partly Cloudy, 65.0 F (18.3 C) and wind South at 10.4mph.</speak>"
    },
    "shouldEndSession": true
  },
  "sessionAttributes": {}
}

Success! You can click the play icon in the bottom right of the Service Response panel to hear Alexa speak the response, and if you have an Amazon Echo or any Alexa-enabled device handy, you can also test the Airport Info skill there.

Skills that are in staging can be accessed by a device signed on with the same development account. Test the skill on your device by saying: “Alexa, ask airport info about airport status for ATL.” Alexa should respond with the text we saw in the test we did in the simulator.

To learn more about registering, or if you’ve already set up your device using an account other than your Amazon developer account, follow these steps from Amazon.

Ship It!

Congratulations! Your skill interface and skill service are now live in the staging environment, and you’ve tested them there. In Part 4 of this series, we’ll take the final step of going live with our skill: submitting it for Amazon review so that it can be enabled on devices around the world.

The post Developing Alexa Skills Locally with Node.js: Deploying Your Skill to Staging appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/developing-alexa-skills-locally-with-node-js-deploying-your-skill-to-staging/feed/ 0
Developing Alexa Skills Locally with Node.js: Implementing an Intent with Alexa-app and Alexa-app-server https://bignerdranch.com/blog/developing-alexa-skills-locally-with-node-js-implementing-an-intent-with-alexa-app-and-alexa-app-server/ https://bignerdranch.com/blog/developing-alexa-skills-locally-with-node-js-implementing-an-intent-with-alexa-app-and-alexa-app-server/#respond Tue, 22 Mar 2016 10:00:53 +0000 https://nerdranchighq.wpengine.com/blog/developing-alexa-skills-locally-with-node-js-implementing-an-intent-with-alexa-app-and-alexa-app-server/ In our [last post on building Alexa skills](https://nerdranchighq.wpengine.com/blog/developing-alexa-skills-locally-with-nodejs-setting-up-your-local-environment/), we implemented a model that knows how to talk to the FAA. Now we’ll see how to hook it up to a new Alexa skill. We’ll be using [alexa-app](https://github.com/matt-kruse/alexa-app) as a framework to build our skill, and [alexa-app-server](https://github.com/matt-kruse/alexa-app-server) will allow us to test interacting with the skill locally.

The post Developing Alexa Skills Locally with Node.js: Implementing an Intent with Alexa-app and Alexa-app-server appeared first on Big Nerd Ranch.

]]>

Editor’s note: This is the second in a series of posts about developing Alexa skills. Read the rest of the posts in this series to learn how to build and deploy an Alexa skill.

In our last post on building Alexa skills, we implemented a model that knows how to talk to the FAA. Now we’ll see how to hook it up to a new Alexa skill. We’ll be using alexa-app as a framework to build our skill, and alexa-app-server will allow us to test interacting with the skill locally.

We’re using these libraries because they support a local development and testing workflow for Alexa skills, letting us iterate rapidly. (H/T Matt Kruse!)

Setting up Alexa-app-server

The source code for this project is available on GitHub. To begin, pull down the alexa-app-server GitHub repo with the following command:

git clone https://github.com/matt-kruse/alexa-app-server.git

Once you have cloned the repository, change into its directory and install the required dependencies by running:

npm install

Move the faa-info folder you created in the previous post (or via Github) into the alexa-app-server repository’s ./examples/apps directory and create a new index.js file in the faa-info directory. This is where we’ll implement our skill using the alexa-app Node module we installed in the previous post.

Within the repository, change to the examples directory and run:

node server

You should see:

Loading apps from: apps
Listening on HTTP port 8080

You should now have a running local “skill server,” which gives us a way to test interaction with our skill locally as we build it.

Now we can begin building out our skill.

Handling Launch

The first thing our Airport Info skill should do is respond to a launch request from Alexa. This is what will ultimately happen if a user says, “Alexa, launch Airport Info.”

Because we’re developing this skill using the alexa-app module, we have a nice shorthand syntax for defining both the launch request and the speech response. Add the following to the index.js file you’ve created in the faa-info directory.

'use strict';
module.change_code = 1;
var _ = require('lodash');
var Alexa = require('alexa-app');
var app = new Alexa.app('airportinfo');
var FAADataHelper = require('./faa_data_helper');

app.launch(function(req, res) {
  var prompt = 'For delay information, tell me an Airport code.';
  res.say(prompt).reprompt(prompt).shouldEndSession(false);
});
module.exports = app;

Notice the line var app = new Alexa.app('airportinfo');. This will be what our skill is known as within the context of alexa-app-server.

We recommend having two separate Terminal tabs open at this point: one for directory navigation and one for running your alexa-app-server.

Restart alexa-app-server using “node server” as you did earlier, and open a web browser to http://localhost:8080/alexa/airportinfo. You should see a page with a dropdown for selecting a Request Type. Select “LaunchRequest” and hit “Send Request.” You should see a response similar to the following:

{
  "version": "1.0",
  "sessionAttributes": {},
  "response": {
    "shouldEndSession": false,
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>For delay information, tell me an Airport code.</speak>"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "SSML",
        "ssml": "<speak>For delay information, tell me an Airport code.</speak>"
      }
    }
  },
  "dummy": "text"
}

This looks like exactly what we want: instructions that show a user how to use the Airport Info skill.

Let’s take a moment to understand the output that was just generated for us in response to a launch request. If you refer to the index.js file we’ve just created, you’ll see an app.launch method. This method is automatically triggered any time the skill is launched, and it requires a response to be sent to the user. In this case, we’re calling res.say(prompt), which outputs the text stored in the prompt variable above it.

Afterwards, you will see reprompt(prompt). The reprompt is used any time a user is given the opportunity to say something but stays silent, and it triggers approximately eight seconds after the user is prompted the first time. Depending upon the design of your Alexa skill, you may want to shorten the reprompt text to your liking, e.g. “Tell me an Airport code.”

Lastly, shouldEndSession(false) determines whether your skill keeps listening for user interaction or closes the session entirely. When designing your skill, weigh the benefits of keeping the session open: it can feel cumbersome if users are expected to continually interact with your skill. In this case, however, we want the user to be able to follow the welcome message with a desired airport code, so we’re choosing to leave the session open with shouldEndSession(false).
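By way of contrast, here is a minimal sketch of a one-shot intent that closes the session immediately. The goodbyeIntent name and utterance are hypothetical, not part of Airport Info:

app.intent('goodbyeIntent', {
  'utterances': ['goodbye']
},
  function(req, res) {
    //say a final message; Alexa stops listening afterwards
    res.say('Goodbye!').shouldEndSession(true);
  }
);

With shouldEndSession(true), Alexa speaks the message and the interaction ends there.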

Intent Schema and Utterances for Airport Info

We need to map utterances (training data of what the user may say) to intents (requests to perform functions on your skill service). Then in our web service, we take those intents and run any logic we need. If utterances and schemas are new concepts to you, the Amazon documentation covers both topics in detail.

Here, we’ll get started on the Intent Schema. First, let’s specify the airportinfo intent schema and the utterances that can be used to invoke it using the alexa-app and alexa-utterances module. These will define the voice interface for how our user will interact with the skill.

Add the following to index.js before the module.exports:

app.intent('airportinfo', {
 'slots': {
    'AIRPORTCODE': 'FAACODES'
  },
  'utterances': ['{|flight|airport} {|delay|status} {|info} {|for} {-|AIRPORTCODE}']
},
  function(req, res) {
  }
);

Now visit http://localhost:8080/alexa/airportinfo again. Select “IntentRequest” and then “airportinfo” from the list.

Since we haven’t yet implemented anything, these “Response” fields won’t contain anything interesting—but check out the rather lengthy sample utterances list that was generated for the airportinfo intent:

airportinfo  {AIRPORTCODE}
airportinfo flight {AIRPORTCODE}
airportinfo airport {AIRPORTCODE}
airportinfo  delay {AIRPORTCODE}
airportinfo flight delay {AIRPORTCODE}
airportinfo airport delay {AIRPORTCODE}
airportinfo  status {AIRPORTCODE}
airportinfo flight status {AIRPORTCODE}
airportinfo airport status {AIRPORTCODE}
airportinfo  info {AIRPORTCODE}
airportinfo flight info {AIRPORTCODE}
airportinfo airport info {AIRPORTCODE}
airportinfo  delay info {AIRPORTCODE}
airportinfo flight delay info {AIRPORTCODE}
airportinfo airport delay info {AIRPORTCODE}
airportinfo  status info {AIRPORTCODE}
airportinfo flight status info {AIRPORTCODE}
airportinfo airport status info {AIRPORTCODE}
airportinfo  for {AIRPORTCODE}
airportinfo flight for {AIRPORTCODE}
airportinfo airport for {AIRPORTCODE}
airportinfo  delay for {AIRPORTCODE}
airportinfo flight delay for {AIRPORTCODE}
airportinfo airport delay for {AIRPORTCODE}
airportinfo  status for {AIRPORTCODE}
airportinfo flight status for {AIRPORTCODE}
airportinfo airport status for {AIRPORTCODE}
airportinfo  info for {AIRPORTCODE}
airportinfo flight info for {AIRPORTCODE}
airportinfo airport info for {AIRPORTCODE}
airportinfo  delay info for {AIRPORTCODE}
airportinfo flight delay info for {AIRPORTCODE}
airportinfo airport delay info for {AIRPORTCODE}
airportinfo  status info for {AIRPORTCODE}
airportinfo flight status info for {AIRPORTCODE}
airportinfo airport status info for {AIRPORTCODE}

This was generated for us by this line:

'utterances': ['{|flight|airport} {|delay|status} {|info} {|for} {-|AIRPORTCODE}']

Under the hood, alexa-app uses the alexa-utterances module to generate this list. Check that link to get details on the allowed syntax.
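To build intuition for what that expansion does, here is a small, self-contained sketch of the idea: each alternation group multiplies out into every combination. This is an illustration only, not the actual alexa-utterances implementation (for one thing, it ignores the {-|SLOT} slot syntax):

//expand alternation groups like '{|flight|airport}' into every combination
function expand(template) {
  var match = template.match(/\{([^}]*)\}/);
  if (!match) {
    //no groups left; tidy up the whitespace
    return [template.trim().replace(/\s+/g, ' ')];
  }
  return match[1].split('|').reduce(function(results, option) {
    //substitute one option for the group, then expand the rest
    return results.concat(expand(template.replace(match[0], option)));
  }, []);
}

console.log(expand('{|flight|airport} {|delay|status} info'));
//=> [ 'info', 'delay info', 'status info', 'flight info', ... ]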

We have our intent schema visible here as well. Notice the line in our skill code:

'slots': {
    'AIRPORTCODE': 'FAACODES'
  }

This defined AIRPORTCODE, which is a slot with a custom type of FAACODES we will share with Amazon when registering our skill, and gave alexa-app enough info to generate the schema for us:

{
  "intents": [
    {
      "intent": "airportinfo",
      "slots": [
        {
          "name": "AIRPORTCODE",
          "type": "FAACODES"
        }
      ]
    }
  ]
}

Now that we have an intent schema definition including an airportcode slot, and sample utterances defined for the airportinfo intent, we can now put to work the FAADataHelper we built in our first post on developing Alexa Skills locally.

To do that, we will need to update the airportinfo intent definition located inside the index.js file we created in the faa-info directory. Update it to the following:

app.intent('airportinfo', {
  'slots': {
    'AIRPORTCODE': 'FAACODES'
  },
  'utterances': ['{|flight|airport} {|delay|status} {|info} {|for} {-|AIRPORTCODE}']
},
  function(req, res) {
    //get the slot
    var airportCode = req.slot('AIRPORTCODE');
    var reprompt = 'Tell me an airport code to get delay information.';
    if (_.isEmpty(airportCode)) {
      var prompt = 'I didn\'t hear an airport code. Tell me an airport code.';
      res.say(prompt).reprompt(reprompt).shouldEndSession(false);
      return true;
    } else {
      var faaHelper = new FAADataHelper();
      faaHelper.requestAirportStatus(airportCode).then(function(airportStatus) {
        console.log(airportStatus);
        res.say(faaHelper.formatAirportStatus(airportStatus)).send();
      }).catch(function(err) {
        console.log(err.statusCode);
        var prompt = 'I didn\'t have data for an airport code of ' + airportCode;
        res.say(prompt).reprompt(reprompt).shouldEndSession(false).send();
      });
      return false;
    }
  }
);

//hack to support custom slot syntax in the utterance expansion string
var utterancesMethod = app.utterances;
app.utterances = function() {
  return utterancesMethod().replace(/\{-\|/g, '{');
};

Now we can test how this behaves from alexa-app-server.

Testing the airportinfo Intent

We have three cases to test:

Given an airportInfo intent request:

  1. When the airportCode slot is empty, I should get “I didn’t hear an airport code. Tell me an airport code.”
  2. When the FAA Server didn’t recognize our airportCode, I should get “I didn’t have data for an airport code of AIRPORTCODE.”
  3. When the airportCode was a code the FAA Server recognized, then I should get a response matching the FAADataHelper’s response we tested earlier.

Let’s try the first test case. Hit http://localhost:8080/alexa/airportinfo and select “IntentRequest” and “airportinfo” from the list. Don’t type anything into “AIRPORTCODE.” Hit “Send Request,” and you should then see the following in the Response box:

{
  "version": "1.0",
  "sessionAttributes": {},
  "response": {
    "shouldEndSession": false,
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>I didn't hear an airport code. Tell me an airport code.</speak>"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "SSML",
        "ssml": "<speak>Tell me an airport code to get delay information.</speak>"
      }
    }
  },
  "dummy": "text"
}

Nice! That looks right. Now let’s try the second case. Do the same thing but enter “PUNKYBREWSTER” for the AIRPORTCODE field.

{
  "version": "1.0",
  "sessionAttributes": {},
  "response": {
    "shouldEndSession": false,
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>I didn't have data for an airport code of PUNKYBREWSTER</speak>"
    },
    "reprompt": {
      "outputSpeech": {
        "type": "SSML",
        "ssml": "<speak>Tell me an airport code to get delay information.</speak>"
      }
    }
  },
  "dummy": "text"
}

Just what we expected. Now for the last case, let’s try entering “ATL” for the AIRPORTCODE.

{
  "version": "1.0",
  "sessionAttributes": {},
  "response": {
    "shouldEndSession": true,
    "outputSpeech": {
      "type": "SSML",
      "ssml": "<speak>There is currently no delay at Hartsfield-Jackson Atlanta International. The current weather conditions are A Few Clouds, 51.0 F (10.6 C) and wind North at 0.0mph.</speak>"
    }
  },
  "dummy": "text"
}

Success! Thanks to these local results, we can begin the process of deploying to Amazon while feeling fairly confident that our skill works as expected. We’ll still need to test interaction with the actual device, but with this workflow, we should have greatly improved our chances of deploying without bugs.

Going Live

At this point, you’ve been given some useful tools and techniques for more easily developing an Alexa skill locally. With a local development environment in place, you gain access to the debugger and the stack trace, and you can work more efficiently by quickly testing changes without uploading files to a remote server.

In Part 3 of this series, we move from local to live. We’ll go over how you can test your skill on an Amazon Echo or another Alexa-enabled device by creating a new skill on the Amazon Alexa Developer Console, and then deploying your code to AWS Lambda.

Don’t wait until then. Check out the source code for the project we built above and get going!

The post Developing Alexa Skills Locally with Node.js: Implementing an Intent with Alexa-app and Alexa-app-server appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/developing-alexa-skills-locally-with-node-js-implementing-an-intent-with-alexa-app-and-alexa-app-server/feed/ 0
Developing Alexa Skills Locally with Node.js: Setting Up Your Local Environment https://bignerdranch.com/blog/developing-alexa-skills-locally-with-node-js-setting-up-your-local-environment/ https://bignerdranch.com/blog/developing-alexa-skills-locally-with-node-js-setting-up-your-local-environment/#respond Mon, 29 Feb 2016 11:00:53 +0000 https://nerdranchighq.wpengine.com/blog/developing-alexa-skills-locally-with-node-js-setting-up-your-local-environment/ If you want to build Alexa Skills, where should you start? You could begin with building one of the sample skills like the color picker or the trivia game. But when you’ve already tackled “Hello, World,” you’re ready to dive in. Well, not quite. You can work more efficiently with a local development environment, so let's set one up with Node.js.

The post Developing Alexa Skills Locally with Node.js: Setting Up Your Local Environment appeared first on Big Nerd Ranch.

]]>

Editor’s note: This is the first in a series of posts about developing Alexa skills. Read the rest of the posts in this series to learn how to build and deploy an Alexa skill.

If you want to build Alexa Skills, where should you start? You could begin with building one of the sample skills like the color picker or the trivia game. But when you’ve already tackled “Hello, World,” you’re ready to dive in.

Not quite yet. First, let’s set up a local development environment. Why should you use a local development environment over testing on a live server? There are many benefits. Chief among them: you gain access to the debugger and the stack trace, and you can quickly test changes without uploading files to a remote server, cutting down your iteration time.

In addition to time considerations, there are other concerns: what if the network is running slowly, you’re on a plane, or the Wi-Fi isn’t working? With a local dev environment, you can still get work done.

That’s where this post comes in: it will guide you through setting up a local development environment so that you can work more efficiently, enabling you to rapidly test your skills as you develop them. We will first set up a working environment with Node.js, and then we will build a model for our Alexa Skill. This skill—Airport Info—will track airport flight delays and weather conditions, and will give us a chance to try developing a more complex Alexa Skill locally.

Building the Skill Model

We will begin by building a model for the Airport Info skill. For this blog post, we’re going to use Node.js and Amazon’s AWS Lambda compute service. You have the flexibility to use other languages with Lambda or even any HTTPS endpoint, but we chose Node.js here because it’s easy to get started with and is widely used in the Alexa Skill development community.

The source code for this project is available on GitHub. Let’s set up our environment.

First Steps: Using nvm

Before we can begin developing on our Node.js-backed Alexa skill locally, we will of course need to install Node.js. To get started, I’d suggest pulling down Node Version Manager (nvm) to easily keep track of different versions of Node.

Let’s begin by installing nvm.

➜  local ✗ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.30.2/install.sh | bash

Close and reopen the terminal to ensure the nvm command is loaded into your shell environment, and verify that you’ve got nvm installed by typing:

nvm ls

You should see something similar to the following:

➜  local ✗ nvm ls
➜       system

Note that NVM does not exist for a Windows environment. There is an alternative, with a slightly different set of commands (nvm list vs nvm ls, for example).

Our skill will live on Amazon’s AWS Lambda Service, so we want to ensure our local environment matches the AWS Lambda environment. Though this may change, at the time of this writing AWS Lambda supports a single version of Node.js: v0.10.36.

While today this means we’ll be working with ES5, in a future article we’ll explore enabling ES6 support.

Install Node v0.10.36 using the following command:

nvm install v0.10.36

After this completes, set the default version of Node so we won’t have to fiddle with it in the future:

➜  localhost ✗ nvm alias default v0.10.36
default -> v0.10.36

npm config

Included with Node.js is the node package manager (npm), which we’ll use to add the dependencies our skill needs. Create a new directory for the project called faa-info. This is where our skill service code will live.

Open a terminal window and go to this directory. Next, we will initialize a new package.json file to hold a list of dependencies our project will use:

npm init

Run through the dialogs, accepting the defaults, and when you’re prompted to enter a test command, type:

mocha

We will use Chai and Mocha, two JavaScript assertion and test libraries, to build our tests, so we’re adding them while we set up npm.

Now that we’ve configured our test command, we can add all of the dependencies the project will use to our package.json:

npm install --save alexa-app chai chai-as-promised mocha lodash request-promise

npm should download these dependencies and add them to the package.json file. A new “node_modules” directory containing the dependencies should appear.

If everything was successfully added to your project, your final package.json should read as follows. Note that the version numbers may be slightly different:

{
  "name": "faa-info",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
        "test": "mocha"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
        "alexa-app": "^2.3.2",
        "alexa-app-server": "^2.2.4",
        "chai": "^3.5.0",
        "chai-as-promised": "^5.2.0",
        "lodash": "^4.5.0",
        "mocha": "^2.4.5",
        "request-promise": "^2.0.1"
  }
}

Now we’re ready to begin building our skill.

Foundation

Let’s consider the feature our skill offers: in response to an IATA airport code provided by the user, our skill will give information relevant for that particular airport.

The service we’ll use is the FAA’s public airport status service. The endpoint accepts an IATA airport code and a format, like http://services.faa.gov/airport/status/SFO?format=application/json to get information about the San Francisco airport. If you visit that URL, you will see the information we would like our skill to ultimately read back to the user: delay status, weather conditions and temperature info. The FAA payload contains all of this:

{"delay":"false","IATA":"SFO","state":"California","name":"San Francisco International","weather":{"visibility":9.00,"weather":"Mostly Cloudy","meta":{"credit":"NOAA's National Weather Service","updated":"9:56 AM Local","url":"http://weather.gov/"},"temp":"58.0 F (14.4 C)","wind":"Northeast at 4.6mph"},"ICAO":"KSFO","city":"San Francisco","status":{"reason":"No known delays for this airport.","closureBegin":"","endTime":"","minDelay":"","avgDelay":"","maxDelay":"","closureEnd":"","trend":"","type":""}}

To proceed, we’ll first address the problem of requesting the data, then tackle formatting it so that Alexa can use it as a response.

Making the Request

Let’s start with some unit tests to ensure that the model works as we expect. Create a directory called test within faa-info and add a file called test_faa_data_helper.js. As I mentioned earlier, we will use Chai and Mocha to build our tests. They should already be installed, since we added them earlier via npm.

Your project directory should now look like this:

/faa-info
    package.json
    /test
        test_faa_data_helper.js
    /node_modules
        /alexa-app
        /chai
        /chai-as-promised
        /lodash
        /mocha
        /request-promise

Enter the following text in the test_faa_data_helper.js file you’ve just created.

'use strict';
var chai = require('chai');
var expect = chai.expect;
var FAADataHelper = require('../faa_data_helper');

describe('FAADataHelper', function() {
  var subject = new FAADataHelper();
});

Our test will describe the behavior of the FAADataHelper class. The test doesn’t yet make any assertions, but it does require a helper class. It should fail, because we don’t have a helper class created yet.

We can now run our test with npm test from the terminal in the root of our project (the test script we configured invokes mocha).

➜  localhost ✗ npm test
module.js:340
    throw err;
    ^

Error: Cannot find module '../faa_data_helper'

It looks like our test file ran, but couldn’t find the module because we haven’t created it. Let’s create a new file called faa_data_helper.js in the root directory (../faa-info) and start a module called FAADataHelper.

'use strict';

function FAADataHelper() { }

module.exports = FAADataHelper;

Now let’s see what we get:

➜  localhost ✗ npm test

  0 passing (2ms)

We need to make some assertions in our test. Our first test should check a made-up method called requestAirportStatus(airportCode) on FAADataHelper, which will eventually make the actual request to the FAA service, then return the JSON we saw earlier.

Update test_faa_data_helper.js with the following changes:

'use strict';
var chai = require('chai');
var chaiAsPromised = require('chai-as-promised');
chai.use(chaiAsPromised);
var expect = chai.expect;
var FAADataHelper = require('../faa_data_helper');
chai.config.includeStack = true;

describe('FAADataHelper', function() {
  var subject = new FAADataHelper();
  var airport_code;
  describe('#getAirportStatus', function() {
    context('with a valid airport code', function() {
      it('returns matching airport code', function() {
        airport_code = 'SFO';
        var value = subject.requestAirportStatus(airport_code).then(function(obj) {
          return obj.IATA;
        });
        return expect(value).to.eventually.eq(airport_code);
      });
    });
  });
});

This test asserts that a Promise-based network request returns a value we expect. (Our project will use request-promise.)

Notice the eventually portion of the matcher in use above? This is a special part of the chai-as-promised matcher we added, which allows us to make assertions about the data a Promise returns without requiring the use of callbacks, “sleeps” or other less-than-ideal approaches to waiting for the request to complete. For more in-depth coverage, Jani Hartikainen has written a great article on testing Promises.
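If the matcher is new to you, here is a tiny, self-contained illustration. The values are made up, and note that a native Promise requires a newer Node.js than the v0.10 we’re targeting here, so treat this as a sketch:

var chai = require('chai');
chai.use(require('chai-as-promised'));
var expect = chai.expect;

describe('eventually', function() {
  it('asserts on the value a promise resolves with', function() {
    //returning the assertion lets mocha wait for the promise to settle
    return expect(Promise.resolve(42)).to.eventually.equal(42);
  });
});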

If we run our test, we should see something like this:

  FAADataHelper
    #getAirportStatus
      with a valid airport code
        1) returns matching airport code


  0 passing (8ms)
  1 failing

  1) FAADataHelper #getAirportStatus with a valid airport code returns airport code:
     TypeError: subject.requestAirportStatus is not a function

Failure. requestAirportStatus is not a function—so let’s make it one! After adding a boilerplate requestAirportStatus() method, our FAADataHelper class looks like this:

'use strict';
var _ = require('lodash');
var rp = require('request-promise');
var ENDPOINT = 'http://services.faa.gov/airport/status/';

function FAADataHelper() { }

FAADataHelper.prototype.requestAirportStatus = function(airportCode) {
};

module.exports = FAADataHelper;

Running the test again, we get a different failure:

  1) FaaDataHelper #getAirportStatus with a valid airport code returns airport code:
TypeError: Cannot read property 'then' of undefined

Making our Tests Pass

Now let’s go ahead and implement the request. We’ll use the request-promise library to build a request to the FAA service mentioned earlier. Add the following to the faa_data_helper.js file you created in the project’s root directory:

'use strict';
var _ = require('lodash');
var rp = require('request-promise');
var ENDPOINT = 'http://services.faa.gov/airport/status/';

function FAADataHelper() { }

FAADataHelper.prototype.requestAirportStatus = function(airportCode) {
  return this.getAirportStatus(airportCode).then(
    function(response) {
      console.log('success - received airport info for ' + airportCode);
      return response.body;
    }
  );
};

FAADataHelper.prototype.getAirportStatus = function(airportCode) {
  var options = {
    method: 'GET',
    uri: ENDPOINT + airportCode,
    resolveWithFullResponse: true,
    json: true
  };
  return rp(options);
};
module.exports = FAADataHelper;

The test should pass now:

FAADataHelper
  #getAirportStatus
    with a valid airport code
success - received airport info for SFO
      ✓ returns airport code (180ms)

Let’s also add a test for the case where an IATA airport code isn’t one the FAA service knows about, or where the service responds with a non-200 status code. Given a faulty IATA code, the server appears to return a 404 status code, which our request library interprets as an error.

Add the following test within the describe('#getAirportStatus', function() { part of the test_faa_data_helper.js file. We will assert that a bogus IATA airport code such as PUNKYBREWSTER causes the request to be rejected with an error:

...
context('with an invalid airport code', function() {
  it('returns an error', function() {
    airport_code = 'PUNKYBREWSTER';
    return expect(subject.requestAirportStatus(airport_code)).to.be.rejectedWith(Error);
  });
});
...

Run npm test again, and both tests should now pass:

  FAADataHelper
    #getAirportStatus
      with an invalid airport code
        ✓ returns invalid airport code (295ms)
      with a valid airport code
success - received airport info for SFO
        ✓ returns airport code (301ms)
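If you’d like to see that rejection in isolation, outside the test suite, here is a minimal sketch (PUNKYBREWSTER is deliberately bogus):

var rp = require('request-promise');

rp({
  uri: 'http://services.faa.gov/airport/status/PUNKYBREWSTER',
  resolveWithFullResponse: true,
  json: true
}).catch(function(err) {
  //request-promise rejects non-2xx responses; the error carries the status code
  console.log('FAA lookup failed with status ' + err.statusCode);
});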

A Hypothetical Response from Alexa

Now that we’ve proven our request works as expected, let’s consider the response Alexa should give a user with this data. The data we receive indicates whether or not there’s a flight delay, so let’s have our helper hand Alexa one of two responses, depending on the reply from the FAA’s server:

"There is currently no delay at Hartsfield-Jackson Atlanta International. The current weather conditions are Light Snow, 36.0 F (2.2 C) and wind Northeast at 9.2mph."

or

"There is currently a delay for Hartsfield-Jackson Atlanta International. The average delay time is 57 minutes. Delay is because of the following: AIRLINE REQUESTED DUE TO DE-ICING AT AIRPORT / DAL AND DAL SUBS ONLY. The current weather conditions are Light Snow, 36.0 F (2.2 C) and wind Northeast at 9.2mph."

Note: Alexa will read symbols and glyphs aloud, so you’ll need to test the output in the voice simulator to make sure that everything sounds as it should.
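Where pronunciation matters, as with airport codes, SSML gives you finer control. For example (a hedged illustration, not something this skill strictly requires), a say-as tag forces letter-by-letter reading:

<speak>Status for <say-as interpret-as="spell-out">ATL</say-as></speak>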

Let’s write a test for a helper method to build these strings. We’ll have two tests, one for a case where there isn’t a delay, and one where there is. I grabbed the following JSON for our test harness from the service, but feel free to get your own.

Add this to test/test_faa_data_helper.js within the describe('FAADataHelper', function():

  describe('#formatAirportStatus', function() {
    var status = {
      'delay': 'true',
      'name': 'Hartsfield-Jackson Atlanta International',
      'ICAO': 'KATL',
      'city': 'Atlanta',
      'weather': {
        'visibility': 5.00,
        'weather': 'Light Snow',
        'meta': {
          'credit': 'NOAA\'s National Weather Service',
          'updated': '3:54 PM Local',
          'url': 'http://weather.gov/'
        },
        'temp': '36.0 F (2.2 C)',
        'wind': 'Northeast at 9.2mph'
      },
      'status': {
        'reason': 'AIRLINE REQUESTED DUE TO DE-ICING AT AIRPORT / DAL AND DAL SUBS ONLY',
        'closureBegin': '',
        'endTime': '',
        'minDelay': '',
        'avgDelay': '57 minutes',
        'maxDelay': '',
        'closureEnd': '',
        'trend': '',
        'type': 'Ground Delay'
      }
    };
    context('with a status containing no delay', function() {
      it('formats the status as expected', function() {
        status.delay = 'false';
        expect(subject.formatAirportStatus(status)).to.eq('There is currently no delay at Hartsfield-Jackson Atlanta International. The current weather conditions are Light Snow, 36.0 F (2.2 C) and wind Northeast at 9.2mph.');
      });
    });
    context('with a status containing a delay', function() {
      it('formats the status as expected', function() {
        status.delay = 'true';
        expect(subject.formatAirportStatus(status)).to.eq(
          'There is currently a delay for Hartsfield-Jackson Atlanta International. The average delay time is 57 minutes. Delay is because of the following: AIRLINE REQUESTED DUE TO DE-ICING AT AIRPORT / DAL AND DAL SUBS ONLY. The current weather conditions are Light Snow, 36.0 F (2.2 C) and wind Northeast at 9.2mph.'
        );
      });
    });
  });

Keep in mind that we are bouncing between adding to FAADataHelper and its test as we go—we’re TDDing it. For the sake of this article, we’re skipping ahead several “ping-pongs” between test file and implementation.

Crafting our Response

To make these tests pass, we’ll implement a formatAirportStatus(status) method that accepts the response from the FAA server and turns it into a sentence matching the expected above.

Let’s use the _.template() feature available in lodash to simplify the task of generating the strings Alexa will respond with. Add the following method to your FAADataHelper class:

FAADataHelper.prototype.formatAirportStatus = function(airportStatus) {
  var weather = _.template('The current weather conditions are ${weather}, ${temp} and wind ${wind}.')({
    weather: airportStatus.weather.weather,
    temp: airportStatus.weather.temp,
    wind: airportStatus.weather.wind
  });
  if (airportStatus.delay === 'true') {
    var template = _.template('There is currently a delay for ${airport}. ' +
      'The average delay time is ${delay_time}. ' +
      'Delay is because of the following: ${delay_reason}. ${weather}');
    return template({
      airport: airportStatus.name,
      delay_time: airportStatus.status.avgDelay,
      delay_reason: airportStatus.status.reason,
      weather: weather
    });
  } else {
    //    no delay
    return _.template('There is currently no delay at ${airport}. ${weather}')({
      airport: airportStatus.name,
      weather: weather
    });
  }
};

Now, run the test file. You should see the following:

  FAADataHelper
    #getAirportStatus
      with an invalid airport code
        ✓ returns invalid airport code (295ms)
      with a valid airport code
success - received airport info for SFO
        ✓ returns airport code (301ms)
    #formatAirportStatus
      with a status containing no delay
        ✓ formats the status as expected
      with a status containing a delay
        ✓ formats the status as expected

  4 passing (608ms)

Excellent! We’re green.

Now our FAADataHelper is ready to be hooked up to a skill, which we will build in the next post. We’ll also go over how we can host the skill locally and mock the requests from Amazon Echo, enabling rapid feedback on any bugs.

Been reading along without setting up your environment? Check out the source code and get started!

In the next post, we discuss how to implement an intent using open-source libraries.

Big Nerd Ranch and Amazon have partnered to create no-cost training for Alexa Skills Kit. Get the details in our press release, then sign up to get more info as it becomes available.

The post Developing Alexa Skills Locally with Node.js: Setting Up Your Local Environment appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/developing-alexa-skills-locally-with-node-js-setting-up-your-local-environment/feed/ 0
What is Functional Reactive Programming? https://bignerdranch.com/blog/what-is-functional-reactive-programming/ https://bignerdranch.com/blog/what-is-functional-reactive-programming/#respond Thu, 12 Feb 2015 09:53:37 +0000 https://nerdranchighq.wpengine.com/blog/what-is-functional-reactive-programming/

Functional Reactive Programming (FRP) offers a fresh perspective on solving modern programming problems. Once understood, it can greatly simplify your project, especially when it comes to code dealing with asynchronous events with nested callbacks, complex list filtering/transformation or timing concerns.

The post What is Functional Reactive Programming? appeared first on Big Nerd Ranch.

]]>

Functional Reactive Programming (FRP) offers a fresh perspective on solving modern programming problems. Once understood, it can greatly simplify your project, especially when it comes to code dealing with asynchronous events with nested callbacks, complex list filtering/transformation or timing concerns.

I will strive to skip academic explanations of Functional Reactive Programming (there are many of those on the internet already!) and focus instead on helping you gain a pragmatic understanding of what Functional Reactive Programming is—and how we can put it to work for us. This article will revolve around a particular implementation of Functional Reactive Programming called RxJava, which works on Java and Android.

Getting Started

Let’s start with a tangible example of how the Functional Reactive Programming ideas can improve our code’s readability. Our task is to query GitHub’s API to first get a list of users, and then request the details for each user. This will involve two web service endpoints:

https://api.github.com/users – retrieve list of users

https://api.github.com/users/{username} – retrieve details for a specific username, such as https://api.github.com/users/mutexkid

Old Style

The example below shows what you may already be familiar with: it calls the web service, uses a callback interface to pass the successful result to the next web service call, defines another success callback, and then moves on to the next web service request. As you can see, this results in two nested callbacks:

//The "Nested Callbacks" Way
    public void fetchUserDetails() {
        //first, request the users...
        mService.requestUsers(new Callback<GithubUsersResponse>() {
            @Override
            public void success(final GithubUsersResponse githubUsersResponse,
                                final Response response) {
                Timber.i(TAG, "Request Users request completed");
                final List<GithubUserDetail> githubUserDetails = new ArrayList<GithubUserDetail>();
                //next, loop over each item in the response
                for (GithubUserDetail githubUserDetail : githubUsersResponse) {
                    //request a detail object for that user
                    mService.requestUserDetails(githubUserDetail.mLogin,
                                                new Callback<GithubUserDetail>() {
                        @Override
                        public void success(GithubUserDetail githubUserDetail,
                                            Response response) {
                            Log.i("User Detail request completed for user : " + githubUserDetail.mLogin);
                            githubUserDetails.add(githubUserDetail);
                            if (githubUserDetails.size() == githubUsersResponse.mGithubUsers.size()) {
                                //we've downloaded'em all - notify all who are interested!
                                mBus.post(new UserDetailsLoadedCompleteEvent(githubUserDetails));
                            }
                        }

                        @Override
                        public void failure(RetrofitError error) {
                            Log.e(TAG, "Request User Detail Failed!!!!", error);
                        }
                    });
                }
            }

            @Override
            public void failure(RetrofitError error) {
                Log.e(TAG, "Request User Failed!!!!", error);
            }
        });
    }

Though it isn’t the worst code—it is asynchronous and therefore doesn’t block while it waits for each request to complete, at least—it’s less than desirable because it’s messy (each additional level of callbacks compounds the illegibility) and it isn’t easy to change (each callback depends on the previous callback’s state, so the code resists modularization and makes it awkward to change the data handed off to the next callback). Affectionately, this is referred to as “callback hell.”

The RxJava Way

Now, let’s look at the same functionality written with RxJava:

public void rxFetchUserDetails() {
        //request the users
        mService.rxRequestUsers().concatMap(Observable::from)
        .concatMap((GithubUser githubUser) ->
                        //request the details for each user
                        mService.rxRequestUserDetails(githubUser.mLogin)
        )
        //accumulate them as a list
        .toList()
        //define which threads information will be passed on
        .subscribeOn(Schedulers.newThread())
        .observeOn(AndroidSchedulers.mainThread())
        //post them on an eventbus
        .subscribe(githubUserDetails -> {
            EventBus.getDefault().post(new UserDetailsLoadedCompleteEvent(githubUserDetails));
        });
    }

As you can see, we lose the callbacks entirely when using the Functional Reactive Programming model, and wind up with a much smaller program. Let’s begin to unpack what just happened here by starting with a basic definition of Functional Reactive Programming, and work our way toward understanding the code above, which is available on GitHub.

Fundamentally, Functional Reactive Programming is the Observer pattern, with support for manipulating and transforming the stream of data our Observables emit. In the example above, the Observables are a pipeline our data will flow through.

To recap, the Observer Pattern involves two roles: an Observable, and one or many Observers. The Observable emits events, while the Observer subscribes and receives them. In the example above, the .subscribe() call adds an Observer to the Observable and the requests are made.

Building an Observable Pipeline

Each operation on the Observable pipeline returns a new Observable, with either the same data as its content, or a transformation of that data. By taking this approach, it allows us to decompose the work we do and modify the stream of events into smaller operations, and then plug those Observables together to build more complex behavior or reuse individual pieces in the pipeline. Every method call we do on the Observable is adding onto the overall pipeline for our data to flow through.

Let’s look at a concrete example by setting up an Observable for the first time:

Observable<String> sentenceObservable = Observable.from("this", "is", "a", "sentence");

Here we have defined the first segment of our pipeline, an Observable. The data that flows through it is a series of strings. The first thing to realize is that this is non-blocking code that hasn’t done anything yet; it’s just a definition of what we want to accomplish. The Observable will only begin to do its work when we “subscribe” to it – in other words, register an observer with it:

sentenceObservable.subscribe(new Action1<String>() {
    @Override
    public void call(String s) {
        System.out.println(s);
    }
});

Only now will the Observable emit each chunk of data added in the from() call, one item at a time. The pipeline will continue to emit items until it runs out of them.

Transforming streams

Now that we’ve got a stream of strings being emitted, we can transform them as we like, and build up more complicated behavior.

Observable<String> sentenceObservable = Observable.from("this", "is", "a", "sentence");

sentenceObservable.map(new Func1<String, String>() {
            @Override
            public String call(String s) {
                return s.toUpperCase() + " ";
            }
        })
.toList()
.map(new Func1<List<String>, String>() {
            @Override
            public String call(List<String> strings) {
                Collections.reverse(strings);
                return strings.toString();
            }
        })
//subscribe to the stream of Observables
.subscribe(new Action1<String>() {
            @Override
            public void call(String s) {
                System.out.println(s);
            }
        });

Once the Observable is subscribed to, we will get “SENTENCE A IS THIS.”
The .map method we call above takes a Func1 object with two generic types: its input (the previous observable’s contents), and its output (in this case, a new string that’s capitalized, formatted and wrapped in a new Observable instance, then passed on to the next method). As you can see, we’ve composed more complicated behavior out of reusable chunks of a pipeline.

One thing about the above example: it can be much more simply expressed using Java 8 lambda syntax as:

Observable.just("this", "is", "a", "sentence").map(s -> s.toUpperCase() + " ").toList().map(strings -> {
            Collections.reverse(strings);
            return strings.toString();
        });

In the subscribe method, we pass an Action1 object as an argument with the type String for its generic parameter. This defines a Subscriber’s behavior when the last item emitted from the Observable, a String, is received. This is the simplest form of the .subscribe() method (see this documentation for more elaborate signatures).

This example shows one transformation method: .map(), and one aggregation method: .toList(). This is barely the tip of the iceberg of what’s possible in manipulating the data stream (a list of all of the available stream operators is in the RxJava documentation), but it shows you the underlying concept: in Functional Reactive Programming, we can transform a stream of data with discrete pieces of a pipeline of transformations/data manipulation, and we can reuse those individual pieces as needed on other pipelines composed of Observables. By plugging these Observable pieces together, we can compose more complicated features, but keep them as smaller pieces of composable logic that are easy to understand and modify.

Managing Threads with The Scheduler

In the web service example, we showed how to make a web request using RxJava. We talked about transforming, aggregating and subscribing to the stream of Observables, but we didn’t talk about how the Observable stream of web requests is made asynchronous.

This falls under what the FRP model calls the Scheduler—a strategy for defining which thread Observable stream events occur on, and which thread consumes the result of the emitted Observables as a Subscriber. In the web service example, we want the requests to happen in the background, and the subscription to occur on the main thread, so we define it this way:

        .subscribeOn(Schedulers.newThread())
        .observeOn(AndroidSchedulers.mainThread())
        //post them on an eventbus
        .subscribe(githubUserDetails -> {
            EventBus.getDefault().post(new UserDetailsLoadedCompleteEvent(githubUserDetails));
        });

Observable.subscribeOn(Scheduler scheduler) specifies that the work done by the Observable should be done on a particular Scheduler (thread).

Observable.observeOn(Scheduler scheduler) specifies the Scheduler (thread) on which the Observable invokes its Subscribers’ onNext( ), onCompleted( ) and onError( ) methods.

Here are the possible Schedulers:

  • Schedulers.computation( ):
    meant for computational work such as event loops and callback processing; do not use this scheduler for I/O (use Schedulers.io( ) instead)

  • Schedulers.from(executor): uses the specified Executor as a Scheduler

  • Schedulers.immediate( ): schedules work to begin immediately in the current thread

  • Schedulers.io( ): meant for I/O-bound work such as asynchronous performance of blocking I/O, this scheduler is backed by a thread pool that will grow as needed; for ordinary computational work, switch to Schedulers.computation( )

  • Schedulers.newThread( ): creates a new thread for each unit of work

  • Schedulers.test( ): useful for testing purposes; supports advancing events to unit test behavior

  • Schedulers.trampoline( ): queues work to begin on the current thread after any already queued work

By setting the subscribeOn and observeOn schedulers, we run the network requests on a new background thread (Schedulers.newThread( )) and consume the results on Android’s main thread (AndroidSchedulers.mainThread( )).

Next Steps

We’ve covered a lot of ground in this article, but you should have a good idea of how Functional Reactive Programming works at this point. Examine the GitHub project we shared in the article and understand it, read the RxJava documentation and check out the rxjava-koans project for a test-driven approach to mastering the Functional Reactive Programming paradigm.

Want to learn even more about Java? Check out our new Introduction to Java for Android bootcamp. And if you’ve already mastered Java, consider our Android bootcamp.

The post What is Functional Reactive Programming? appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/what-is-functional-reactive-programming/feed/ 0
Implementing Swipe to Refresh, an Android Material Design UI Pattern https://bignerdranch.com/blog/implementing-swipe-to-refresh-an-android-material-design-ui-pattern/ https://bignerdranch.com/blog/implementing-swipe-to-refresh-an-android-material-design-ui-pattern/#respond Thu, 11 Dec 2014 10:10:22 +0000 https://nerdranchighq.wpengine.com/blog/implementing-swipe-to-refresh-an-android-material-design-ui-pattern/ One of the great ideas formalized in the new Material Design user interface guidelines is the [Swipe to Refresh UI pattern](http://www.google.com/design/spec/patterns/swipe-to-refresh.html). In fact, you've probably already seen and used it. It's found its way into many popular Android apps like Facebook, Google Newsstand, Trello, Gmail and many others.

The post Implementing Swipe to Refresh, an Android Material Design UI Pattern appeared first on Big Nerd Ranch.

]]>

One of the great ideas formalized in the new Material Design user interface guidelines is the Swipe to Refresh UI pattern. In fact, you’ve probably already seen and used it. It’s found its way into many popular Android apps like Facebook, Google Newsstand, Trello, Gmail and many others.

Here’s what it looks like:

cat names gif

The Swipe to Refresh pattern is a nice fit for adapter-backed views (RecyclerView and ListView, for example) that also need to support user-requested refreshes, such as a list displaying a Twitter newsfeed that the user updates on demand.

The v4 Android support library, introduced alongside KitKat and enhanced with the Lollipop release, includes a working implementation of the Swipe to Refresh UI pattern called SwipeRefreshLayout. All we have to do is set it up! For your reference, the project we’ll build is available for download on GitHub here.

Setting up Swipe to Refresh

We begin implementing the Swipe to Refresh pattern with a brand new Android Studio project and the most recent version of the Android Support Library (your SDK manager should show an Android Support Library version of at least 21.0).

The first thing we need to do is add the support library to our application’s build.gradle file.

compile 'com.android.support:support-v4:21.0.+'

Gradle-sync your project and open the layout file that was generated when we created our first Activity from the new project wizard, named res/layout/activity_main.xml. We’re going to add a ListView and a SwipeRefreshLayout widget to the layout file. The ListView will display content we want to update using the Swipe to Refresh pattern, and the SwipeRefreshLayout widget will provide the basic functionality.

<android.support.v4.widget.SwipeRefreshLayout
    android:id="@+id/activity_main_swipe_refresh_layout"
    android:layout_width="match_parent"
    android:layout_height="wrap_content">

    <ListView
        android:id="@+id/activity_main_listview"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</android.support.v4.widget.SwipeRefreshLayout>

Notice that ListView is nested within the SwipeRefreshLayout. Any time we swipe the ListView beyond the edge of the SwipeRefreshLayout, the SwipeRefreshLayout widget will display a loading icon and trigger an onRefresh event. This event is a hook for adding our own on-demand data refresh behavior for the list.

Wiring up the Adapter

Now that we’ve got our layout file ready, let’s set up a simple data adapter. In real life, our adapter would likely be backed by a web service, remote API or database, but to keep things simple we’ll fake a web API response. Include the following XML snippet in your res/values/strings.xml file:

<string-array name="cat_names">
    <item>George</item>
    <item>Zubin</item>
    <item>Carlos</item>
    <item>Frank</item>
    <item>Charles</item>
    <item>Simon</item>
    <item>Fezra</item>
    <item>Henry</item>
    <item>Schuster</item>
</string-array>

and set up an adapter to fake out new responses from the “cat names web API” for our experiment.

public class MainActivity extends Activity {

    private ListView mListView;
    private SwipeRefreshLayout mSwipeRefreshLayout;
    private ArrayAdapter<String> mAdapter;
    private List<String> mCatNames;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mSwipeRefreshLayout = (SwipeRefreshLayout) findViewById(R.id.activity_main_swipe_refresh_layout);
        mListView = (ListView) findViewById(R.id.activity_main_listview);

        // fake out a first response from the "cat names web API"
        mCatNames = Arrays.asList(getResources().getStringArray(R.array.cat_names));
        mAdapter = new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, mCatNames);
        mListView.setAdapter(mAdapter);
    }
}

Defining our Data Refresh

Now that our adapter is set up, we can wire up the refresh triggered by swiping down. We get the animated loading indicator “for free”; we just need to define what happens to our ListView. We’ll do that by implementing the OnRefreshListener interface that the SwipeRefreshLayout widget expects. We’ll also simulate getting new data back from the cat names web service, via a method called getNewCatNames() that builds a list of randomly shuffled fake responses from the cat names API.

@Override
protected void onCreate(Bundle savedInstanceState) {
    ...
    mSwipeRefreshLayout.setOnRefreshListener(new SwipeRefreshLayout.OnRefreshListener() {
        @Override
        public void onRefresh() {
            refreshContent();
        }
    });
}

// fake a network operation's delayed response
// this is just for demonstration, not real code!
private void refreshContent() {
    new Handler().postDelayed(new Runnable() {
        @Override
        public void run() {
            mAdapter = new ArrayAdapter<String>(MainActivity.this,
                    android.R.layout.simple_list_item_1, getNewCatNames());
            mListView.setAdapter(mAdapter);
            mSwipeRefreshLayout.setRefreshing(false);
        }
    }, 3000); // pretend the "network" takes three seconds to respond
}

// get new cat names.
// Normally this would be a call to a webservice using AsyncTask,
// or a database operation
private List<String> getNewCatNames() {
    List<String> newCatNames = new ArrayList<String>();
    Random random = new Random();
    for (int i = 0; i < mCatNames.size(); i++) {
        int randomCatNameIndex = random.nextInt(mCatNames.size()); // nextInt's upper bound is exclusive
        newCatNames.add(mCatNames.get(randomCatNameIndex));
    }
    return newCatNames;
}

Notice the last line in refreshContent() in the previous listing: setRefreshing(false). This notifies the SwipeRefreshLayout instance that the work we kicked off in onRefresh() has completed, so it can stop displaying the loading animation.

Now try running your app. Swipe the ListView down and verify that the SwipeRefreshLayout loading icon displays and is dismissed, and that new cat names are loaded into the ListView. Congratulations! You’ve implemented the Swipe to Refresh pattern in your app.

Customization

You can also customize SwipeRefreshLayout’s appearance. To define your own custom color scheme to use with SwipeRefreshLayout’s animated loading icon, use the appropriately named setColorSchemeResources() method. This takes a varargs of color resource ids.

First, define colors you want to appear in the SwipeRefreshLayout’s animated loader:

<resources>
    <color name="orange">#FF9900</color>
    <color name="green">#009900</color>
    <color name="blue">#000099</color>
</resources>

Then call setColorSchemeResources(R.color.orange, R.color.green, R.color.blue); in the onCreate portion of your activity, after the SwipeRefreshLayout has been looked up. As a one-line sketch, reusing the mSwipeRefreshLayout field from our earlier listing:
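
mSwipeRefreshLayout.setColorSchemeResources(
        R.color.orange, R.color.green, R.color.blue); // the loader cycles through these colors

Deploy your app and notice the customized colors the swipe animation now uses: cat names, with color!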

cat names color

SwipeRefreshLayout will rotate through the colors we provided as the loader continues to be displayed.

As you can see from this simple example, Swipe to Refresh is a great pattern for simplifying the problem of user-requested data updates in your app. For more info about available API options for SwipeRefreshLayout, check out the official Android documentation page.

The post Implementing Swipe to Refresh, an Android Material Design UI Pattern appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/implementing-swipe-to-refresh-an-android-material-design-ui-pattern/feed/ 0
Testing the Android way https://bignerdranch.com/blog/testing-the-android-way/ https://bignerdranch.com/blog/testing-the-android-way/#respond Wed, 24 Apr 2013 19:52:14 +0000 https://nerdranchighq.wpengine.com/blog/testing-the-android-way/

As a Rails developer, I learned the benefits of a test-driven style, and I never want to go back to the old style of writing code that might work, but might not! So when I decided to learn Android programming, finding the right strategies and tools to drive tests was the first task at hand. I found several excellent libraries that help facilitate a test-driven workflow on Android, and a test culture that is rapidly adopting the use of automated testing.

The post Testing the Android way appeared first on Big Nerd Ranch.

]]>

As a Rails developer, I learned the benefits of a test-driven style, and I never want to go back to the old style of writing code that might work, but might not! So when I decided to learn Android programming, finding the right strategies and tools to drive tests was the first task at hand. I found several excellent libraries that help facilitate a test-driven workflow on Android, and a test culture that is rapidly adopting the use of automated testing.

Robolectric

The largest barrier when getting started on this challenge was the dependency of the Android core libraries themselves upon the actual Android operating system. The AndroidTestCase classes provided by Google, for example, need a running emulator instance, and that emulator then needs a fresh install of the APK under test. The whole cycle of spinning up the emulator, deploying the APK and running the actual test could take several minutes on every run of the suite: a major time sink.

These slow tests were a dealbreaker, and I felt it would likely reduce the chances we’d actively use the tests. Fortunately for us, the Robolectric Project from Pivotal Labs has done the hard work of removing this dependency.

Effectively, Robolectric replaces the behavior of code that would otherwise require an emulator or actual device with its own, and once it’s set up, we can write jUnit-style tests against our classes without needing the baggage of the emulator. It does this using so-called “Shadow Classes,” mock implementations of Android core libraries.

Robolectric also goes further than this, enabling you to provide your own custom behavior for these shadow classes. This is extremely helpful for custom test behavior you may need. For example, my current project depends upon loading large sets of data into the SQLite database managed by Android. Using the “Shadow Class” notion, Robolectric let me implement a “test fixture”-style database reset, where any changes made to a default set of data I provide get rolled back, ensuring I can assert against results that depend on this data. Here’s what I mean:

public class MyTestRunner extends RobolectricTestRunner {
  public MyTestRunner(Class<?> testClass) throws InitializationError {
    super(testClass);
  }

  @Override
  public void beforeTest(Method method) {
    super.beforeTest(method);
    // swap in our custom implementation of Android's SQLiteDatabase class
    Robolectric.bindShadowClass(MyShadowSQLiteDatabase.class);
  }
}

The MyShadowSQLiteDatabase implementation provides custom behavior that points to our custom test database. Without Robolectric, we would be dependent upon the core Android class behavior. Within the ShadowSQLiteDatabase class, we now change how Android’s core behavior would normally work:

@Implements(SQLiteDatabase.class) // tells Robolectric which Android class this shadow stands in for
public class MyShadowSQLiteDatabase extends ShadowSQLiteDatabase {
    private static Connection connection;

    @Implementation
    public static SQLiteDatabase openDatabase(String path, SQLiteDatabase.CursorFactory factory, int flags) {
        try {
            // replace Robolectric's in-memory-only connection with a real SQLite database connection
            connection = DriverManager.getConnection("jdbc:sqlite:" + path);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return newInstanceOf(SQLiteDatabase.class); // an instance of the sqlite-jdbc SQLiteDatabase class
    }
}

And then in the test itself we can finally use this new behavior:

@RunWith(MyTestRunner.class)
public class TestDatabaseStuff extends BaseTest {
  public static final String TEST_DATABASE = "test/fixtures/test_database.s3db";
  public static final String ORIG_DATABASE = "test/fixtures/test_database.orig";

  @Before
  public void setup() throws IOException {
    // reset the test fixture: overwrite the test database with a pristine copy
    File originalDatabase = new File(ORIG_DATABASE);
    File fileToWrite = new File(TEST_DATABASE);
    FileUtils.copyFile(originalDatabase, fileToWrite);
  }
}

So I had a way to run tests quickly, but I then began to look for ways to write tests faster and more cleanly.

Fest for Android

The first discovery was Fest-Android, a library extension of the FEST framework from the fine folks at Square. Fest-Android is a great improvement on regular jUnit-style assertions: it gives a chainable (or “fluent”) syntax for checking assertions, and makes tests easier to write (and read!). As a bonus, any decent Java IDE can code-complete the available assertions for any property, clearing up the confusion between the “expected” and “actual” arguments in jUnit-style assertions.

For example:

//regular junit:
assertEquals(View.GONE, view.getVisibility());
//fest!
assertThat(view).isGone();

At the point assertThat(view) is called, the IDE can then give intelligent feedback about the types of assertions available.

Mockito = rspec-mocks… Sort of

For more complicated classes, isolating a test means replacing elaborate functionality that’s coupled to other classes, and that’s done using “mocks”. The Mockito framework for Java does this well. One example from a project: selectively changing the behavior of a method to return a fixed timestamp I could test against, rather than a dynamically generated one. Mockito made this a snap:

Mockito.doReturn((long) 1363027600).when(myQueryObject).getCurrentTime();

Whenever myQueryObject.getCurrentTime is called, a predefined value is returned instead of the current one. This is very useful for having classes return results you can actually test!
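
One caveat worth making explicit: doReturn(...).when(...) only works on an object Mockito knows about, i.e. a mock or a spy. The sketch below rests on an assumption, since the original project’s classes aren’t shown; MyQuery is a hypothetical stand-in for the query object:

// spy on a real instance so unstubbed methods keep their real behavior
MyQuery myQueryObject = Mockito.spy(new MyQuery());
Mockito.doReturn((long) 1363027600).when(myQueryObject).getCurrentTime();

assertEquals(1363027600L, myQueryObject.getCurrentTime()); // returns the stubbed value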

Robotium = Integration Tests

Sometimes, unit tests alone fail to address behavior you will want to check with your tests. For example, let’s say I want to assert that entering a valid username and password, clicking the login button, and then clicking on the account details button takes me to the right view within my application. A unit test doesn’t really capture this “path” through the application, and is better described using an “integration test,” or a test that checks whether multiple components of the system work properly in conjunction with one another. For this type of work, I found Robotium, which uses a Selenium-like style to run the test from the UI. This of course requires an emulator or device to work. I found that keeping the unit test projects and integration test projects separate was a good compromise between the fast Robolectric tests, and the sometimes-necessary Robotium tests to check overall behavior across the system.
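
Here’s a minimal sketch of what that login-flow test might look like with Robotium’s Solo driver. Everything project-specific is assumed for illustration: LoginActivity, AccountDetailsActivity and the button labels are hypothetical stand-ins, not classes from a real codebase:

public class LoginFlowTest extends ActivityInstrumentationTestCase2<LoginActivity> {
    private Solo solo;

    public LoginFlowTest() {
        super(LoginActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testLoginLeadsToAccountDetails() {
        solo.enterText(0, "user@example.com"); // first EditText on screen
        solo.enterText(1, "secret");           // second EditText on screen
        solo.clickOnButton("Log in");
        solo.clickOnButton("Account details");
        solo.assertCurrentActivity("should land on account details", AccountDetailsActivity.class);
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}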

Dependency Injection

To reduce the amount of effort required to set up classes for tests, the use of dependency injection is a huge win. Simply put, it allows you to configure the instantiation of classes across your application without manually doing so. It also allows the configuration of special behavior for classes without writing code from scratch. Roboguice brings the Google GUICE injection framework to Android, and makes it possible to use dependency injection throughout your Android app. Check out the Roboguice wiki on how to use this great tool.
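
As a small sketch of what this buys you (AccountStore and the resource ids here are hypothetical, invented for illustration): a RoboActivity gets its views and collaborators handed to it, so a test can bind AccountStore to a mock in a test module without touching the Activity’s code:

public class AccountActivity extends RoboActivity {
    @InjectView(R.id.account_name) TextView accountName; // looked up from the layout automatically
    @Inject AccountStore accountStore;                   // instantiated and provided by Guice

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.account);
        accountName.setText(accountStore.currentAccountName());
    }
}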

While test-driven development isn’t as common in the Android community as it is with Rails, doing things the TDD way seems to be gaining momentum. I hope this post gave a good overview of some of the tools that will help you test your Android project. Don’t hesitate to ask questions, or to point out any other tools I should know about, in the comments below.

The post Testing the Android way appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/testing-the-android-way/feed/ 0