Bolot Kerimbaev - Big Nerd Ranch

Google I/O 2018: AI, Commoditized
Thu, 31 May 2018 - https://bignerdranch.com/blog/google-i-o-2018-ai-commoditized/

Adding machine learning and AI into mobile apps became easier with Google's Firebase MLKit. Last year, [Google democratized AI](https://nerdranchighq.wpengine.com/blog/google-io-2017-ai-democratized/). This year, they commoditized it.

The audience at the Google I/O 2018 Keynote gasped and applauded at the demonstrations of pervasive and ubiquitous AI in Google's products. Almost every mention of a new feature highlighted a machine learning application that makes seemingly magical things available at our fingertips. But it is just sufficiently advanced technology, commoditized into nice packages such as MLKit and TensorFlow, that makes it possible for all developers, even those outside Google, to add such features to their own applications.

Impressive Demos

Google Assistant has learned to perform multiple actions and gained the ability to continue conversations so that we don't have to keep prompting it with "OK Google" after every request. One of the most impressive demonstrations was that of a prototype Google Assistant capable of carrying out telephone conversations to make appointments and check hours of operation. So convincing was the assistant at mimicking humans that it raised concerns, but it also demonstrated state-of-the-art technology that was unimaginable just a few years ago.

Applications of AI and ML to medicine continue to impress and inspire. Various ML models trained with the assistance of medical professionals often match the performance of practitioners and even reach the level of experts. Furthermore, as demonstrated at the keynote, these models can sometimes surface additional pieces of information that the researchers had not anticipated. For example, the diabetic retinopathy model was able to determine biological sex, age and other factors that were not thought to be extractable from images of retinas.

ML models are very good at making predictions. Gmail can now generate smart replies and even help compose emails based on contextual cues, such as the subject of the email, the current date or the recipient. Google Photos can determine the content of a photo and suggest the most likely action, like scanning a document or sharing your friends' photos with them.

AI and ML are at the core of the next version of Android, bringing features like adaptive battery, adaptive brightness and predictive actions. The OS adapts to the user's usage patterns and allocates resources accordingly, resulting in a 30% reduction in battery usage.

Machine Learning Tools for Developers

As impressive as these product demos are, most developers are interested in the tools that enable us to build our own applications. Fortunately, we were not disappointed. TensorFlow continues to evolve and gain new features: eager execution allows easier experimentation, TensorFlow Lite scales models down to execute on mobile hardware, and Swift for TensorFlow brings compiler technology to graph extraction, promising to combine the best of both worlds in terms of performance, flexibility and expressivity.

Google also introduced MLKit, a new set of machine learning APIs for Firebase. MLKit APIs can categorize images, detect faces, recognize text, scan barcodes, and identify landmarks out of the box. They do this by using an appropriate ML model for each of the tasks. Some of the APIs can work on the device, while others are cloud-only. It is also possible to roll out a custom ML model with TensorFlow Lite. New built-in models are being added to MLKit, such as the smart replies functionality from the Gmail demo.

The goal of MLKit is to make it easier to incorporate advanced capabilities in mobile applications. Here’s an example of using MLKit for identifying image labels:

FirebaseVision.getInstance().visionLabelDetector
        .detectInImage(image)
        .addOnSuccessListener { labels ->
            val output = labels.map { "${it.label}: ${it.confidence}" }
            Log.d(TAG, "Found labels:\n$output")
        }

Every year it is getting easier to take advantage of the amazing progress made by AI and ML researchers. Many things recently considered impossible are becoming commonplace features in mobile applications. MLKit aims to make ML widely accessible to a much broader audience. The power of off-the-shelf ML is just a Gradle dependency away.
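
For reference, here is a minimal sketch of what that dependency might look like, written with the Gradle Kotlin DSL. The artifact coordinates and version number below are an assumption to verify against the Firebase setup guide, and the google-services plugin is also required.

// build.gradle.kts (app module) - a sketch; the version number is illustrative
dependencies {
    implementation("com.google.firebase:firebase-ml-vision:16.0.0")
}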

Appendix: Full Code Example

const val REQUEST_GET_IMAGE = 1
const val TAG = "MLKitDemo"

class MainActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        FirebaseApp.initializeApp(baseContext)
        setContentView(R.layout.activity_main)
        findViewById<Button>(R.id.get_image_button).setOnClickListener { getImage() }
    }

    private fun getImage() {
        val intent = Intent(Intent.ACTION_PICK)
        intent.type = "image/*"
        startActivityForResult(intent, REQUEST_GET_IMAGE)
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        if (resultCode != Activity.RESULT_OK) return
        when(requestCode) {
            REQUEST_GET_IMAGE -> data?.let { processImage(it) }
        }
    }

    private fun processImage(data: Intent) {
        val image = FirebaseVisionImage.fromFilePath(baseContext, data.data)
        FirebaseVision.getInstance().visionLabelDetector
                .detectInImage(image)
                .addOnSuccessListener { labels ->
                    val output = labels.map { "${it.label}: ${it.confidence}" }
                    Log.d(TAG, "Found labels:\n$output")
                }
                .addOnFailureListener {
                    Log.e(TAG, "Error", it)
                }
    }
}

Kotlin: When to Use Lazy or Lateinit
Sun, 08 Oct 2017 - https://bignerdranch.com/blog/kotlin-when-to-use-lazy-or-lateinit/

Use lateinit for view properties in activities and fragments. While lazy is appealing, it has some rough corners.

You start the development sprint full of energy, but the ancient curse of Java bogs you down and you realize you are in for a marathon.

“Is it safe?” the massive code base keeps asking you.

“Is it safe?” forcing you to check whether your variables are null.

“Is it safe?” the sadistic voice is relentless.

“It’s so safe you won’t believe it!” you utter, but you are not sure anymore.

Is It Safe? With Kotlin It Is

Java does not protect you from the “billion dollar mistake” – the null pointer is lurking everywhere. Every reference can potentially be null. How can anyone be safe in Java?

Many Android developers find refuge in Kotlin, a modern programming language that eliminates a lot of pain and suffering. Less boilerplate, more expressive code and, yes, it is null safe – if you choose the path of avoiding torture.

Still, Kotlin has to live in the world where Java was king, and on Android the Activity lifecycle further complicates things. Consider, for example, storing a reference to a view in a property. This is a common practice because Android developers try to avoid repeated calls to findViewById.

Ideally, an object's properties would all be defined at the time it is created. But because, for Activities and Fragments, object creation is separate from view loading, the properties intended to store views must start out uninitialized.

This post explores several approaches to handling properties that reference views:

  • Using a nullable type
  • Using lateinit
  • Using a custom getter
  • Using by lazy
  • Two custom property delegates

Nullable Type

The simplest way to reference a view in a property is to use a nullable type.

var showAnswerButton: Button? = null

Since all variables must be initialized, null is assigned to showAnswerButton. Later, in Activity.onCreate (or Fragment.onCreateView), it will be assigned again, this time the value that we actually want it to have.

showAnswerButton = findViewById(R.id.showAnswerButton)

When using a nullable type, the ?. or !! operators have to be used to access the nullable variable. Using ?. avoids a crash by returning null should showAnswerButton be null for some reason.

showAnswerButton?.setOnClickListener { /* */ }

This is equivalent to the following code in Java:

if (showAnswerButton != null) {
    showAnswerButton.setOnClickListener(/* */);
}

The !! operator would cause a crash in the following code if showAnswerButton were null:

showAnswerButton!!.setOnClickListener { /* */ }

Using these operators at least makes it obvious that showAnswerButton is nullable. This is not as easy to spot in Java:

showAnswerButton.setOnClickListener(/* */);

Lateinit

There is a better alternative: lateinit.

lateinit var questionTextView: TextView

Using lateinit, the initial value does not need to be assigned. Furthermore, at the use sites questionTextView is not a nullable type, so ?. and !! are not used. However, we have to be careful to assign our lateinit var a value before we use it. Otherwise, a lateinit property acts much as if we had performed !!: accessing it before initialization crashes the app, in this case with an UninitializedPropertyAccessException rather than a NullPointerException.
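
As a minimal sketch of the typical pattern (the class name and resource IDs are hypothetical, and the snippet assumes an AndroidX AppCompatActivity and the generic findViewById available in recent SDK versions):

import android.os.Bundle
import android.widget.TextView
import androidx.appcompat.app.AppCompatActivity

class QuizActivity : AppCompatActivity() {

    // Declared with no initial value; not nullable at the use sites.
    private lateinit var questionTextView: TextView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_quiz)
        // Assign before anything else touches the property; accessing it first
        // would throw UninitializedPropertyAccessException.
        questionTextView = findViewById(R.id.question_text_view)
    }
}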

Custom getter

You could also create a property with a custom getter:

val anotherTextView: TextView
    get() = findViewById(R.id.another_text_view)

This approach has a big drawback: each time the property is accessed, the findViewById method is called. Furthermore, for Fragments, you have to use the !! operator on the view property. Bang-bang – any illusion of null safety is gone.
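
In a Fragment, the same approach might look like this sketch (the view ID is hypothetical); the nullable view property is what forces the !!:

// Inside a Fragment: view is nullable, hence the !!.
val anotherTextView: TextView
    get() = view!!.findViewById<TextView>(R.id.another_text_view)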

Lazy

A property defined via by lazy is initialized using the supplied lambda upon first access, and the computed value is cached for all subsequent accesses.

val nameTextView by lazy { view!!.findViewById<TextView>(R.id.nameTextView) }

This approach will cause a crash if nameTextView is accessed before setContentView in an Activity. It is even trickier in Fragments, because this code would cause a crash inside onCreateView even after the view is inflated.
That's because the view property of the Fragment is not set until after onCreateView completes, and it is referenced in the lazy property's initializer. It is, however, possible to use lazy properties safely if they are not accessed until onViewCreated.
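
For example, a sketch of such a Fragment (class, layout and ID names are hypothetical):

import android.os.Bundle
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.TextView
import androidx.fragment.app.Fragment

class NameFragment : Fragment() {

    // Safe only because the property is first accessed in onViewCreated,
    // after the fragment's view property has been set.
    private val nameTextView by lazy { view!!.findViewById<TextView>(R.id.nameTextView) }

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? = inflater.inflate(R.layout.fragment_name, container, false)

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        nameTextView.text = "Hello"
    }
}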

Using a lazily initialized property on a retained fragment causes a memory leak, since the property holds a reference to the old view.

Custom Property Delegate

Unlike in some other languages, lazy in Kotlin is not a language feature, but a property delegate implemented in the standard library. Thus, it is possible to draw on it as inspiration and perform a thought experiment: would it be possible to resolve the memory leak and the lack of lifecycle awareness of the by lazy approach?

Android Architecture Components includes support for lifecycle awareness.
The cityTextView property below is defined using LifecycleAwareLazy, which takes in a Lifecycle instance and clears out the cached value when the ON_STOP event occurs. This ensures that the value will be initialized again the next time it is accessed.

val cityTextView by lifecycleAwareLazy(lifecycle) { view!!.findViewById<TextView>(R.id.cityTextView) }

This creates an interesting curiosity: while cityTextView has been defined as a val, by virtue of the property delegate its underlying value can still change, as if it were a var. In fact, a property backed by a read-only delegate like this cannot even be declared as a var, because the delegate provides no setter.
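
A minimal sketch of such a delegate might look like this. It is an illustration of the idea rather than the original implementation; the imports assume the androidx Lifecycle artifacts, and the annotation-based LifecycleObserver used here has since been superseded by DefaultLifecycleObserver.

import androidx.lifecycle.Lifecycle
import androidx.lifecycle.LifecycleObserver
import androidx.lifecycle.OnLifecycleEvent
import kotlin.properties.ReadOnlyProperty
import kotlin.reflect.KProperty

// A sketch of a lifecycle-aware lazy delegate: the value is computed on first
// access and dropped again when the owner's lifecycle reaches ON_STOP.
class LifecycleAwareLazy<T>(
    lifecycle: Lifecycle,
    private val initializer: () -> T
) : ReadOnlyProperty<Any?, T>, LifecycleObserver {

    private var value: T? = null

    init {
        lifecycle.addObserver(this)
    }

    override fun getValue(thisRef: Any?, property: KProperty<*>): T {
        if (value == null) {
            value = initializer()
        }
        @Suppress("UNCHECKED_CAST")
        return value as T
    }

    @OnLifecycleEvent(Lifecycle.Event.ON_STOP)
    fun onStop() {
        // Release the reference so a fresh view is looked up after a restart.
        value = null
    }
}

fun <T> lifecycleAwareLazy(lifecycle: Lifecycle, initializer: () -> T): LifecycleAwareLazy<T> =
    LifecycleAwareLazy(lifecycle, initializer)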

The stateTextView is defined using LifecycleAwareFindView, which takes a Fragment (which also happens to implement the LifecycleOwner interface) and the view ID.

val stateTextView: TextView by findView(this, R.id.stateTextView)

These two property delegates solve one problem, but not completely.
They still contain a memory leak.

When to Use Lazy or Lateinit

Lazy is a good fit for properties that may or may not be accessed.
If we never access them, we avoid computing their initial value.
They may work for Activities, as long as they are not accessed before setContentView is called. They are not a great fit for referencing views in a Fragment, because the common pattern of configuring views inside onCreateView would cause a crash. They can be used if view configuration is done in onViewCreated.

With Activities or Fragments, it makes more sense to use lateinit for their properties, especially the ones referencing views. While we don’t control the lifecycle, we know when those properties will be properly initialized. The downside is we have to ensure they are initialized in the appropriate lifecycle methods.

Want Kotlin on the Server? Do Ktor
Mon, 31 Jul 2017 - https://bignerdranch.com/blog/want-kotlin-on-the-server-do-ktor/

It has always been possible to use Kotlin on the server.
Numerous Java server frameworks happily run any JVM bytecode, whether the code was originally written in Java, Kotlin, Scala or even JRuby.
But if you are an Android developer who wants to build simple JSON APIs for your apps, why not use a framework that was written in Kotlin by people who brought you Kotlin?

In this post, you will learn how to write a simple server application in Kotlin using Ktor.
You can check out our project on GitHub or build it from scratch by following the instructions below.

Getting Started

To develop server apps, download IntelliJ IDEA, the Community edition.
It should look familiar to Android developers, since Android Studio is based on it.

Create a new project (File > New Project…), then:

  • Select Gradle in the left-side navigation and check Kotlin (Java):

Select Gradle

  • Hit Next, then enter the group ID (e.g., your company’s reverse DNS name, just like for Android packages) and artifact ID (the name of the project).
  • Hit Next, then make sure that “Use default gradle wrapper” is selected.
    You may also want to check “Use auto-import”.
  • Hit Next. On the final screen, you can adjust where the project files are stored.
  • Hit Finish.

At this point, you may want to create a git repository and add a .gitignore file.
You can use the same one you use for your Android Studio projects.

Adding Ktor Dependencies

Open build.gradle and add Ktor dependencies.

Add repositories where Ktor components are hosted:

repositories {
    mavenCentral()
+   maven { url "https://dl.bintray.com/kotlin/ktor" }
+   maven { url "https://dl.bintray.com/kotlin/kotlinx" }
}

Define the ktor_version variable:

buildscript {
    ext.kotlin_version = '1.1.3-2'
+   ext.ktor_version = '0.4.0-alpha-11'

Add the dependencies to the top-level dependencies, not the one in buildscript:

dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib-jre8:$kotlin_version"
    testCompile group: 'junit', name: 'junit', version: '4.12'
+   compile "org.jetbrains.ktor:ktor-core:$ktor_version"
+   compile "org.jetbrains.ktor:ktor-netty:$ktor_version"
}

The ktor-core dependency contains the main Ktor API.
The ktor-netty dependency provides one of the hosts that can run Ktor applications.

First Server Application

Create a new file in src/main/resources and call it application.conf.
This file will allow you to configure the server parameters, such as the port number, environment, etc.

Copy the following configuration into application.conf:

ktor {
    deployment {
        port = ${PORT}
    }

    application {
        modules = [ WhoKt.main ]
    }
}

This file defines important parameters that the server will use: the port number to listen to connections and the module to load.
While you could hardcode the port number, using the environment variable ${PORT} will make your configuration more flexible and will prepare you for deployment to Heroku later.

Next, create a new Kotlin file, call it Who.
IntelliJ will add the kt extension, so it will show up as Who.kt in the project.
Unlike Java, Kotlin allows you to put any code in this file, not just the class that matches the filename.

Write the following code:

fun Application.main() {
    install(DefaultHeaders)
    install(CallLogging)
    install(Routing) {
        get("/") {
            val text = "Howdy, Planet!"
            call.respondText(text)
        }
    }
}

IntelliJ will ask you to import classes and functions as you go.
All of the imports should be in org.jetbrains.ktor packages:

import org.jetbrains.ktor.application.Application
import org.jetbrains.ktor.application.install
import org.jetbrains.ktor.features.DefaultHeaders
import org.jetbrains.ktor.http.ContentType
import org.jetbrains.ktor.logging.CallLogging
import org.jetbrains.ktor.response.respondText
import org.jetbrains.ktor.routing.Routing
import org.jetbrains.ktor.routing.get

Running The Server

To run the server, you have to create a new run configuration.

  • Click the plus button in the Run/Debug Configurations dialog and select Kotlin.
  • In the Name field enter Dev Host
  • Check the “Single instance only” checkbox so that you do not accidentally attempt to start multiple instances of the server
  • In the Main class field enter org.jetbrains.ktor.netty.DevelopmentHost
  • Click the ... button next to the Environment variables field and add the variable called PORT with the value 5000
  • From the “Use classpath of module” drop down select doktor_main

Your run configuration should look like this:

Run configuration

Once you save the configuration, it will be preselected in the run configurations drop down.
Run the server by clicking the green “Play” button:

Run the server

Testing The Server

You can test the server from the terminal using curl.
Type this command:

curl -v -w "\n" http://localhost:5000/

The -v option is short for --verbose and allows you to see more details of the request and response, such as headers.
The -w option stands for --write-out and in this case allows you to force a new line after the response.
This is useful in bash, since it won’t automatically start the prompt on a new line.

You should see output that looks like this:

*   Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 5000 (#0)
> GET / HTTP/1.1
> Host: localhost:5000
> User-Agent: curl/7.51.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Date: Tue, 25 Jul 2017 19:32:18 GMT
< Server: ktor-core/0.4.0-alpha-10 ktor-core/0.4.0-alpha-10
< Content-Type: text/plain
< Content-Length: 14
< 
* Curl_http_done: called premature == 0
* Connection #0 to host localhost left intact
Howdy, Planet!

Serving JSON

If you are building your own API, you will most likely want to return JSON.
Building stringified JSON is no fun.
It’s much better to let a library do the heavy lifting.
So, add Gson to build.gradle:

    compile "org.jetbrains.ktor:ktor-core:$ktor_version"
    compile "org.jetbrains.ktor:ktor-netty:$ktor_version"
+   compile "com.google.code.gson:gson:2.8.1"
}

Back in Who.kt, create a data class:

data class Who(val name: String, val planet: String)

Data classes are a little bit like POJOs (Plain Old Java Objects), but with a lot of added power they are more like Powerful Old Kotlin Expressions for Minimal Object Notation.
You define the fields, and the compiler generates accessors and several other functions, such as equals(), hashCode() and toString().

They even support destructuring declarations, so you can assign individual fields to separate variables, for example:

val (name, planet) = Who("Doktor", "Gallifrey")
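
To make the generated members concrete, here is a quick sketch (the values are just for illustration):

val doktor = Who("Doktor", "Gallifrey")
val romana = doktor.copy(name = "Romana")      // generated copy() with one field changed
println(doktor)                                // Who(name=Doktor, planet=Gallifrey), via generated toString()
println(doktor == Who("Doktor", "Gallifrey"))  // true, via generated equals()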

Next, update the get("/") route:

    get("/") {
+       val doktor = Who("Doktor", "Gallifrey")
+       val gson = Gson()
+       val json = gson.toJson(doktor)
+       call.respondText(json, ContentType.Application.Json)
-       val text = "Howdy, Planet!"
-       call.respondText(text)
    }

To propagate these changes to the running instance of the server, you have to rebuild and restart the project.
You can rebuild the project by either using the keyboard shortcut ⌘F9 (Command-F9 on Mac or CTRL-F9 on Windows/Linux) or clicking the button in the toolbar to the left of the run configurations dropdown.
Since you configured “Dev Host” to be a single instance, the “Run” button is now a restart button.
The first time you use it, IntelliJ will ask you if you are sure you want to restart the server:

Restart the server

Test from the command line or your browser.
You should see the following JSON now:

{"name":"Doktor","planet":"Gallifrey"}

One more thing.
If you decide to access your server from an Android emulator, remember that localhost means connect to the current machine, i.e., the emulator itself.
To connect to the host computer, use the 10.0.2.2 IP address.

Running From The Command Line

If you want to start the server from the command line, you have to let Gradle know how to run the application.
In your build.gradle file, add the application plugin and specify the main class.

 apply plugin: 'java'
 apply plugin: 'kotlin'
+apply plugin: 'application'

+mainClassName = 'org.jetbrains.ktor.netty.DevelopmentHost'

This is the equivalent of the run configuration you created in the IDE earlier.
To run the server, open the terminal and go to the root directory of the project, where the gradlew file is located, and execute ./gradlew run.

For the More Curious: Heroku

If you decide to make your Ktor application available to the world, then Heroku can help.
Heroku is a popular cloud platform that allows you to deploy various server applications.
Heroku supports applications that are built using Gradle, so deploying this app will be fairly straightforward.

Install Heroku command-line tools.
On a Mac, you can use Homebrew (or follow instructions for other platforms):

brew install heroku

Login to Heroku:

heroku login

You can create the app in your Heroku account or you can do that through the command line:

heroku create # only if you didn't create it on the web

If you did create the app in the Heroku account on the web, you can add the corresponding app’s remote:

heroku git:remote -a NAME_OF_YOUR_APP # if you created the app already

Change the default Gradle task that will be executed when you push the code to Heroku:

heroku config:set GRADLE_TASK="build"

Since Heroku does not know how to start Ktor apps, you have to create a custom Procfile.
Create the file in the root of your project and call it literally Procfile.
It will contain just a single line:

web: ./gradlew run

To test the app locally:

heroku local

Open http://localhost:5000/ in your browser.
You should see the familiar JSON again.

When you are ready to deploy, make sure all your files are committed to git and push your changes to Heroku:

git push heroku master

Moar Ktor

This post merely scratches the surface of what can be done with Ktor.
Let us know if you find this interesting and would like to see more Kotlin on the server content.

Android Security for Developers
Wed, 05 Apr 2017 - https://bignerdranch.com/blog/android-security-for-developers/

Security is frequently an afterthought in many software projects – perhaps too frequently. Sometimes there is merely no budget for security, whether it’s for developer training or for third-party experts. Unfortunately, the cost of such oversight is often much higher than the initial investment would have been. Design was in a similar position in the past, but many in the industry have realized that you can’t just “apply design” at the end of the project. Similarly, security may have a profound impact on some of the technical decisions made early in the project.

As developers, we usually think of security from the perspective of the platform and applications, but we should not forget the humans for whom we create our applications.

Know Thy Platform Security

Apple publishes the iOS Security White Paper for every major release of iOS, and Google publishes the annual Android Security Year in Review. With every update, they each also publish the release notes that contain the details of security fixes.

Half a Billion Marshmallows

For security-conscious developers, the information gleaned from Google's annual report can help in making some important decisions. For example, it might be useful to consider setting minSdkVersion to 19 (KitKat, version 4.4) or even higher, since only Android versions 4.4.4 and later receive the monthly security patches. In 2016, over 50% of devices received platform security updates. While the percentage may seem low, given the sheer number of Android devices (1.4 billion), this translates to more than 735 million updated devices. In fact, the installed base of Android puts certain numbers into perspective. While "only" 34.1% of devices run Marshmallow or later, expressed in units instead of percentages that equals over 475 million devices. Given the estimates of the number of active iOS devices (630 million iPhones), the number of devices on Marshmallow and later is beginning to look like a very sizable market.

Security Features

What's so special about Android Marshmallow? It introduced a number of very important security features:

  • Runtime permissions not only dramatically improve user experience, but could also increase the percentage of users that update your app.
  • Fingerprint authentication encourages more people to enable screen lock protection.
  • Disk encryption is now required for all capable devices that ship with Marshmallow.
  • Support for encryption keys residing in secure hardware makes key material extraction even more difficult.

Android Nougat continued building on what Marshmallow started and added significant security upgrades, including media server hardening, direct boot and file-based encryption, and the improved system update process. Newer devices will benefit from the seamless updates feature, which performs system updates in the background on a separate partition. This way, Nougat can boot the updated version much faster so that users don’t have to wait for ages before their phone is ready to use again.

One of the big Android O features that has a serious impact on user security is the Autofill API. AgileBits, the maker of 1Password, has already shown off a proof-of-concept integration; making it easier for users to adopt password managers means improved security.

Developer Education

Google offers some good resources for Android developers, including best practices for security & privacy, security tips and App Security Checklist. In addition, Android Security Bulletins are useful for staying aware of the state of platform security.

OWASP (the Open Web Application Security Project) started its life covering security for web applications, but gradually grew to include various platforms, including mobile. The OWASP Top 10 Risks and the OWASP Android Testing Cheat Sheet are good starting points for evaluating an application's quality from the security perspective.

CWE (Common Weakness Enumeration) is another great resource for learning from others’ mistakes. It’s also useful to check various secure coding guidelines.

Security Practices

Security and privacy are more important to people now than ever before. You may want to share some things, like what you're eating for lunch and pictures of your dog, with millions of your Instagram and Snapchat followers, but you probably don't want to share your bank account credentials or social media passwords. So, some things are worth keeping secret.

In 2015 Google published the results of their research into security practices that are recommended by security experts and non-experts alike. It turns out these two groups have a different take on the importance of various practices.

Amateurs                               Security Experts
1. Use Antivirus Software              1. Install Software Updates
2. Use Strong Passwords                2. Use Unique Passwords
3. Change Passwords Frequently         3. Use 2-Factor Authentication
4. Only Visit Websites They Know       4. Use Strong Passwords
5. Don't Share Personal Information    5. Use a Password Manager

Other than the use of strong passwords, the security tips are very different. While at first glance some of the suggestions by amateurs may seem to make sense, they offer less security than the experts' advice. For example, in some cases antivirus software may even lower your security. Changing passwords frequently also lowers security, since many users end up choosing passwords that are easier to remember and therefore easier to guess.

The top five practices recommended by experts could be condensed to just three, since a good password manager takes care of creating unique and strong passwords. Of course, there are additional practices that are important, such as backups, using a VPN, and signing up for password leak notifications.

As developers, we should encourage these best practices, both by educating our friends and families, and by making software that is friendly to these practices.

Best Practices and Caution

Using unique passwords together with 2-factor authentication significantly reduces users’ risk. Since many people reuse the same password across multiple sites, a hack of one of these sites makes their other accounts vulnerable. So, if you’re still reusing passwords, it’s time to get a good password manager and start changing passwords.

You can use your password manager for more than just passwords; for example, you should probably keep your security question answers unique across websites. Of course, it's possible to exercise good password discipline without a password manager. It requires a little more effort, but may be worth it if you're concerned about the password manager's own security.

Backups, done properly, will help not just in cases of accidental data loss due to broken or lost devices, but also in cases of data loss due to ransomware. The 3-2-1 backup rule helps protect against the unfortunate situation in which malware or ransomware infects a backup drive that was connected to the infected PC.

VPN (virtual private network) software helps you when you are using an unsecured Wi-Fi at a cafe or in a hotel. When using public Wi-Fi, all your non-HTTPS traffic is fully visible to anybody on the same network. While many websites are moving towards using HTTPS, there are still others that allow regular HTTP.

You do have to be careful when selecting VPN software. On Android, 84% of VPN apps leak user traffic and 18% don't even encrypt the traffic, something a VPN is supposed to do by definition. What's worse, 38% of them inject malware, further endangering their users. So, do your research and avoid the bad ones, those that effectively offer no security.

Software updates are the unsung heroes of security practices. Many vendors issue regular security patches even to older versions of their software. For example, Android monthly security patches are released for Android 4.4.4 and later. Of course, some of the more fundamental security updates require upgrades to the latest major version of the OS.

Getting software from known sources also helps keep malware at bay. Even though the Play Store has been doing a great job of detecting and removing PHAs (Potentially Harmful Apps), sometimes malware sneaks in anyway.

Making Mobile a Safer Place

Airline safety rules dictate that in case of emergency you should put on your own oxygen mask first before helping others. Similarly, you should practice safety and security yourself first, and then help make the lives of the users of your apps more secure.

Update your software, especially the OS and browsers. If you’re not running the latest iOS (10.3.1 as of April 3, 2017) on your iPhone or the latest April security patches on your Android phone, you’re leaving yourself open to attacks.

Once you’ve updated your devices, look for opportunities to make your users more secure. Adopt secure coding practices, stay informed of the platform security updates, audit your code, and keep learning. Stay tuned for more information on security as we continue to dive into the subject.

Building User Interfaces with ConstraintLayout
Mon, 12 Dec 2016 - https://bignerdranch.com/blog/building-user-interfaces-with-constraintlayout/

We talked about ConstraintLayout in a previous post (https://bignerdranch.com/blog/constraintlayout-vs-auto-layout-how-do-they-compare/) and compared it to Apple's Auto Layout. Since then, ConstraintLayout has gone from alpha to beta, and the latest version (beta4) packs further speed improvements and fixes. Furthermore, Android Studio 2.3 (currently canary 2, not yet final) made improvements to the editor.

In this post, we'll explore some examples of user interfaces you can create with ConstraintLayout.

Example: Centering Views

Auto Layout for iOS and macOS exposes the centerX and centerY anchors. Centering a view in its parent view is as simple as adding a constraint between the view’s and its parent’s center anchors. Similarly, center-aligning multiple views is achieved by creating the constraints between their centers.

ConstraintLayout does not have center anchors, but there are several mechanisms that can achieve the same effect.

The first approach is to create a guideline with the percent constraint set to 0.5: app:layout_constraintGuide_percent="0.5", and to align the view’s left and right anchors to the guideline:

    app:layout_constraintLeft_toLeftOf="@+id/guideline"
    app:layout_constraintRight_toLeftOf="@+id/guideline"

Centering example

The second approach is to constrain the left and right anchors of the view to its parent:

    app:layout_constraintLeft_toLeftOf="parent"
    app:layout_constraintRight_toRightOf="parent"

Centering example 2

Chains

ConstraintLayout alpha 9 added the chains feature, which makes it possible to achieve many of the tricks that would previously require a nested LinearLayout.

A “chain” is a set of views that are connected pair-wise by bi-directional constraints. In other words, the simplest chain consists of two views that have constraints against each other in the same dimension, for example:

    <Button
        ...
        app:layout_constraintRight_toLeftOf="@id/button4"
        app:layout_constraintHorizontal_chainStyle="packed"
        .../>

    <Button
        ...
        app:layout_constraintLeft_toRightOf="@+id/button3"
        .../>

Chain example

Chains can be “spread,” “packed” or “spread inside”; this is controlled by the app:layout_constraintHorizontal_chainStyle property (there is also a Vertical version). Making the chain packed pushes the views close to each other and making it spread allocates the empty space around them.

Chain styles

Example: GeoQuiz

The first exercise in our Android Programming guide builds a quiz application with a UI that looks like this:

GeoQuiz example

The UI specifications might boil down to something like this:

  • It should display the question text view, with margins on the left and the right, 8 points above the horizontal center line.
  • It should display two buttons, each 8 points on either side of the vertical center line, 8 points below the horizontal center line.

This is the layout used in the book:

GeoQuiz layout

In order to show the buttons side-by-side below text, the book uses two nested LinearLayouts. It works perfectly well, but in more complex UIs nesting layouts may lead to performance problems that result from some implementation details of the layout process. When a view or a layout is invalidated (i.e., changes content, size, or position), its container, or parent view, is also invalidated. Since this process is recursive, a single change in the view hierarchy may lead to a mass of updates. Thus, in general, it is desirable to keep the view hierarchy as flat as possible.

RelativeLayout enables flattening of the view hierarchy, but as mentioned earlier, its inefficiencies made it unsuitable for use in complex UIs.

The new kid on the block, ConstraintLayout, is set to make efficient constraint-based layouts a thing on Android. What makes ConstraintLayout better than RelativeLayout? Use of the Cassowary algorithm and the constraints (pun intended) placed on how the constraints can be set up.

Here’s the same UI designed using ConstraintLayout (Layout XML file):

GeoQuiz layout with ConstraintLayout

It is a bit hard to see all the constraints, so here they are broken down piece by piece:

  • The dotted lines are the vertical and horizontal center guidelines.
  • The text view and the two buttons define their position relative to those guidelines.

This layout can also be created without using guidelines. Chains to the rescue!

A view can participate in two separate chains, one in the horizontal and one in the vertical direction. For example, the GeoQuiz UI from our Android book can be created using the ConstraintLayout with two chains, rather than using nested LinearLayouts:

GeoQuiz with Chains

Example: HomePwner

Using StackViews is a common way to simplify the constraints on iOS. For example, to create a form that contains text labels and text fields, you could use a vertical StackView that contains horizontal StackViews with the UILabel and UITextField. To align the text fields, we would add a leading constraint to all of them.

In our iOS Programming Guide, we build the HomePwner application with the detail screen that looks like this:

HomePwner detail screen

There are a few things to keep in mind when trying to replicate this UI using ConstraintLayout. Since the layout params have to be unique per view, we cannot simply align the EditTexts with each other while also keeping them a minimum distance from the corresponding TextViews. One workaround is to introduce a helper Spacer view and define the left/right constraints against it (Layout XML file):

HomePwner in Android

Conclusion

ConstraintLayout has come a long way since its first alpha version. This post explores a few relatively simple user interfaces that can be constructed with its help. However, the real promise of ConstraintLayout will be realized in more complex user interfaces that typically would require several layers of nested layouts, reducing the depth of the view hierarchy and thus improving layout performance.

Would you like to see more examples of ConstraintLayout in action? Let us know in the comments!

ConstraintLayout vs Auto Layout: How Do They Compare?
Thu, 06 Oct 2016 - https://bignerdranch.com/blog/constraintlayout-vs-auto-layout-how-do-they-compare/

ConstraintLayout and Auto Layout use the same underlying algorithm. How do they differ?

Children of Cassowary

Cassowary is a bird that lives in the tropical forests of New Guinea and mostly eats fruit. It is also the name of the algorithm and software for solving systems of linear equations and inequalities, developed in the 1990s at the University of Washington. It turns out that linear equations are really well suited for specifying the parameters of user interface elements, namely, the positions and sizes of views, their relationships to each other, etc. Instead of painstakingly crafting artisanal, bespoke, pixel-perfect-but-not-adaptable user interfaces, the UI can be defined by declaring how its elements are positioned with respect to each other.

Cassowary's contribution was the efficiency with which it could solve all those equations. Since people are very sensitive to the smoothness of graphics, the speed of rendering the UI is very important. In 2016, both iOS and Android have first-party layout systems based on Cassowary. This post explores ConstraintLayout and compares it to Auto Layout.

Constraints

A constraint is a rule that specifies how properties of views relate to each other. For example, we might want the views to be aligned along their top edge. Rather than manually setting their vertical positions to the same number, the constraint would simply state that their top edges must be equal to each other, like so: view1.top = view2.top.

Top aligned iOS

Top aligned Android

More generally, a constraint defines the relationship between properties of views, called anchors, in an expression of this form (shown here as an equality, but inequalities can be used, too):

view1.attribute1 = multiplier * view2.attribute2 + constant

To translate these constraints into actual positions and sizes, the constraint solver applies the Cassowary algorithm to find the solution. In the illustration above, the top positions of the views are set up as follows:

view1.top = view2.top
view1.top = container.top + constant

Since the container size and position are known, the second expression gives us the solution for view1.top, and, consequently, the solution for view2.top.

Anchors

Many of the same anchors exist in both Auto Layout and ConstraintLayout. Both have top, bottom, baseline, left, and right. There is a slight terminology difference for the internationalization-friendly anchors: leading and trailing for Auto Layout vs start and end for ConstraintLayout. Auto Layout also has centerX and centerY, which are missing from the current version of ConstraintLayout. The same centering effect can be achieved in ConstraintLayout with other mechanisms, such as the guidelines or using both left and right constraints. In addition, ConstraintLayout defines the begin and percent anchors exclusive to guidelines.

Android     iOS
top         top
bottom      bottom
left        left
right       right
start       leading
end         trailing
(none)      centerX
(none)      centerY
baseline    baseline

Guidelines, Layout Guides and Margins

Auto Layout provides the top and bottom layout guides that help with aligning views to the top or the bottom of the screen. They adapt to the presence of various bars at the top or bottom of the screen: status, navigation, tab, tool, and active call/audio recording. Some of these can appear dynamically (e.g., active call), others are determined at the time the view controller is presented (e.g., pushing the view controller onto the navigation stack). There are also layout margins on the sides that allow views to keep a certain distance from the edge of the screen.

Interface Builder allows adding horizontal and vertical guides, but they're not actual views; they only exist during editing. Constraints cannot be defined in terms of these edit-time-only guides.

ConstraintLayout guidelines, on the other hand, are first-class citizens. They’re special non-rendering views, permanently View.GONE from sight, but playing an important role in layout. They show up as vertical or horizontal lines in the UI and can be set at either a fixed position or a fixed percentage from an edge. For example, a vertical guideline that goes through the center of the screen would look like this:

    <android.support.constraint.Guideline
        ...
        android:orientation="vertical"
        app:layout_constraintGuide_percent="0.5"/>

Defining Constraints

Like other layouts, ConstraintLayout defines its own LayoutParams, which its children can use to specify their position and sizing. Constraints are created by using layout params of the form app:layout_constraintAnchor1_toAnchor2Of="@id/view2". As a result, ConstraintLayout’s constraints are one-way and directed out of one view to another view.

On iOS constraints are separate objects. They are defined in the view that’s the closest common ancestor of the views that are related by it. For example:

  • Width and height are defined in the view itself, since it’s the “closest common ancestor.”
  • Constraints between a view and its parent are defined in the parent.
  • Constraints between siblings are defined in the parent.

Auto Layout scales priorities from 0 to 1000, where 1000 means absolutely required and lower priorities mean those can be broken sooner.

ConstraintLayout doesn’t have explicit priorities, every constraint is required.

Mental Model

With Auto Layout, each view needs to have enough constraints to determine its vertical and horizontal position and size. For some views, their intrinsic size may serve as the constraint for size (width and height). A UILabel (usually a single line of text) typically only needs the position constraints, for example, center horizontally in container and standard distance from top. A UIImageView can also infer its size from its contents, but depending on the source of the image, it may not behave as expected at design time, so it may need explicit width and height constraints. Alternatively, it may be anchored to the inside of its container.

In ConstraintLayout, all views typically have one of three size specifications for width/height:

  • wrap_content, which has the same meaning as in other layouts. Roughly equivalent to intrinsic content size in Auto Layout.
  • Fixed width, set to a specific dp value. Toggling the size control to this setting will result in the editor inserting the current dp size of the view. This is the same as setting an explicit dimension (width or height) constraint in Auto Layout.
  • “Any size” (0dp) will cause the view to occupy the remaining space while satisfying the constraints.

Anchor compatibility: ConstraintLayout

ConstraintLayout allows only legal combinations of anchors for constraints. The anchors must be of the same "type". The type mainly represents the axis (vertical or horizontal), but there are additional subtleties. For example, the i18n-friendly (Start/End) and legacy (Left/Right) anchors cannot be mixed. Furthermore, baselines can only be constrained to other baselines. Finally, baseline and top/bottom are mutually exclusive, and adding one will result in the removal of the other. For example, if a TextView was already baseline-aligned to another view, adding a top constraint will remove the baseline constraint.

This rule is enforced by the XML schema that defines only legal layout attributes (e.g., layout_constraintBaseline_toBaselineOf) and within the ConstraintLayout itself.

Anchors        Type
Baseline       Vertical (Text only)
Top, Bottom    Vertical
Start, End     Horizontal (1)
Left, Right    Horizontal (2)

ConstraintLayout exposes the baseline anchor, whereas LinearLayout has baselineAligned turned on by default and RelativeLayout has layout_alignBaseline.

Anchor compatibility: Auto Layout

Auto Layout uses different classes in order to enforce compatibility when specifying the constraint programmatically. All these classes are subclasses of the generic NSLayoutAnchor class declared as follows (using Swift notation):

class NSLayoutAnchor<AnchorType : AnyObject> : NSObject {
    ...
    func constraint(equalTo anchor: NSLayoutAnchor<AnchorType>) -> NSLayoutConstraint
    ...
}

Subclasses use themselves as the generic constraint, for example:

class NSLayoutXAxisAnchor : NSLayoutAnchor<NSLayoutXAxisAnchor> {

Such declaration enforces correctness of the constraints through the language’s type system (in both Swift and Objective-C).

Anchors              Type                  Class
Baseline             Vertical (Text only)  NSLayoutYAxisAnchor
Top, Bottom          Vertical              NSLayoutYAxisAnchor
Leading, Trailing    Horizontal (1)        NSLayoutXAxisAnchor
Left, Right          Horizontal (2)        NSLayoutXAxisAnchor
Width, Height        Dimension             NSLayoutDimension

NSLayoutAnchor defines methods that create NSLayoutConstraints with another anchor and an optional constant, i.e., constraints of the forms anchor1 <REL> anchor2 or anchor1 <REL> anchor2 + constant, where <REL> is “equal”, “less than or equal”, or “greater than or equal”.

While NSLayoutXAxisAnchor and NSLayoutYAxisAnchor do not define any extra methods beyond NSLayoutAnchor, their sibling NSLayoutDimension adds several methods that expand the expressive power. The added NSLayoutDimension methods make it possible to express constraints of the form anchor1 <REL> constant and anchor1 <REL> multiplier * anchor2 + constant.

Conclusion

ConstraintLayout joins a growing number of UI systems that use the Cassowary constraint solver algorithm. Despite the common foundation, the implementations may differ in the specifics. Auto Layout and ConstraintLayout use some of the same mechanisms and even terminology, like anchors and constraints, but vary in their approaches to the API. Since ConstraintLayout is still very young, there are many exciting possibilities for its evolution.

Neural Networks in iOS 10 and macOS
Tue, 28 Jun 2016 - https://bignerdranch.com/blog/neural-networks-in-ios-10-and-macos/

Apple has been using machine learning in their products for a long time: Siri answers our questions and entertains us, iPhoto recognizes faces in our photos, and the Mail app detects spam messages.
As app developers, we have access to some capabilities exposed by Apple's APIs, such as [face detection](https://developer.apple.com/library/ios/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_detect_faces/ci_detect_faces.html), and starting with iOS 10, we'll gain a high-level API for speech recognition and SiriKit.

Sometimes we may want to go beyond the narrow confines of the APIs that are built into the platform and create something unique. Many times, we roll our own machine learning capabilities, using one of a number of off-the-shelf libraries or building directly on top of fast computation capabilities of Accelerate or Metal.

For example, my colleagues built an entry system for our office that uses an iPad to detect a face, then posts a gif in Slack and allows users to unlock the door using a custom command.

Doorbell face recognition

But now we have first-party support for neural networks: at WWDC 2016, Apple introduced not one, but two neural network APIs, called Basic Neural Network Subroutines (BNNS) and Convolutional Neural Networks (CNN).

Machine Learning and Neural Networks

AI pioneer Arthur Samuel defined machine learning as a “field of study that gives computers the ability to learn without being explicitly programmed.” Machine learning systems are frequently used to make sense of the data that can’t easily be described using traditional models.

For example, we can easily write a program that calculates the square footage (area) of the house, given the dimensions and shapes of all its rooms and other spaces, but calculating the value of the house is not something we can put in a formula. A machine learning system, on the other hand, is well suited for such problems. By supplying the known real-world data to the system, such as the market value, size of the house, number of bedrooms, etc., we can train it to be able to predict the price.

A neural network is one of the most common models for building a machine learning system. While the mathematical underpinnings of neural networks were developed more than half a century ago, in the 1940s, parallel computing made them more feasible in the 1980s, and the interest in deep learning sparked a resurgence of neural networks in the 2000s.

A neural network is constructed of a number of layers, each of which consists of one or more nodes. The simplest neural network has three layers: input, hidden and output. The input layer nodes may represent individual pixels in an image or some other parameters. The output layer nodes are often the results of the classification, such as “dog” or “cat”, if we are trying to automatically detect the contents of a photo. The hidden layer nodes are configured to perform an operation on the inputs or apply the activation function.

Neural network diagram

Types of Layers

Three common types of layers are pooling, convolution and fully connected.

A pooling layer aggregates the data, reducing its size, typically by using the maximum or average value of its inputs. A series of convolution and pooling layers can be strung together to gradually distill a photo into a collection of increasingly higher-level features.

A convolution layer transforms an image by applying a convolution matrix to each pixel of the image. If you've used Pixelmator or Photoshop filters, you've most likely used a convolution matrix. A convolution matrix is typically a 3×3 or 5×5 matrix that is applied to the input image pixels in order to calculate the new pixel values in the output image. To get the value of an output pixel, we multiply the values of the neighboring pixels in the original image by the corresponding matrix entries and sum the results, usually normalizing by the sum of the weights.

For example, this convolution matrix would blur the image:

1 1 1
1 1 1
1 1 1

Whereas this one would sharpen the image:

 0 -1  0
-1  5 -1
 0 -1  0
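
As a rough illustration of what a convolution layer computes, here is a small Kotlin sketch, unrelated to any Apple API, that applies a 3×3 kernel such as the ones above to a grayscale image stored as a 2D array of floats:

// Applies a 3x3 kernel to a grayscale image. Edge pixels are skipped for brevity,
// and the result is normalized by the sum of the kernel weights, so the all-ones
// kernel above produces a simple blur (the average of each pixel's neighborhood).
fun convolve(image: Array<FloatArray>, kernel: Array<FloatArray>): Array<FloatArray> {
    val height = image.size
    val width = image[0].size
    val output = Array(height) { FloatArray(width) }

    var weightSum = 0f
    for (row in kernel) for (w in row) weightSum += w
    val norm = if (weightSum != 0f) weightSum else 1f

    for (y in 1 until height - 1) {
        for (x in 1 until width - 1) {
            var acc = 0f
            for (ky in 0..2) {
                for (kx in 0..2) {
                    acc += image[y + ky - 1][x + kx - 1] * kernel[ky][kx]
                }
            }
            output[y][x] = acc / norm
        }
    }
    return output
}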

The neural network’s convolution layer uses the convolution matrix to process the input and generate the data for the next layer, for example, to extract new features in an image, such as edges.

A fully connected layer can be thought of as a convolution layer where the filter has the same size as the original image. In other words, you can think of the fully connected layer as a function that assigns weights to individual pixels, averages the result, and gives a single output value.
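
Expressed as code, a fully connected layer boils down to a weighted sum plus a bias, passed through an activation function. A minimal Kotlin sketch, independent of BNNS or MPS:

import kotlin.math.exp

fun sigmoid(x: Float): Float = 1f / (1f + exp(-x))

// Each output node is the sigmoid of a weighted sum of all inputs plus a bias.
// weights has one row per output node, with one weight per input value.
fun fullyConnected(input: FloatArray, weights: Array<FloatArray>, bias: FloatArray): FloatArray =
    FloatArray(weights.size) { node ->
        var sum = bias[node]
        for (i in input.indices) {
            sum += weights[node][i] * input[i]
        }
        sigmoid(sum)
    }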

Training and Inference

Each layer needs to be configured with appropriate parameters. For example, the convolution layer needs information about the input and output images (dimensions, number of channels, etc.), as well as convolution layer parameters (kernel size, matrix, etc.). The fully connected layer is defined by the input and output vectors, activation function, and weights.

To obtain these parameters, the neural network has to be trained. This is accomplished by passing the inputs through the neural network, determining the output, measuring the error (i.e., how far the predicted result was from the expected result), and adjusting the weights via backpropagation. Training a neural network may require hundreds, thousands, or even millions of examples.
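
To give a feel for what a single training step does, here is a deliberately tiny Swift sketch that adjusts the weights of one sigmoid node. Real training backpropagates the error through every layer and repeats this over many examples; the learning rate here is an arbitrary illustration value.

import Foundation

// One gradient-descent step for a single sigmoid node with a squared-error loss.
func trainingStep(input: [Float], target: Float,
                  weights: inout [Float], bias: inout Float,
                  learningRate: Float = 0.1) {
    // 1. Forward pass: compute the node's prediction for this example.
    let z = zip(input, weights).reduce(bias) { $0 + $1.0 * $1.1 }
    let prediction = 1 / (1 + exp(-z))

    // 2. Measure the error and its gradient (chain rule through the sigmoid).
    let error = prediction - target
    let delta = error * prediction * (1 - prediction)

    // 3. Nudge each weight (and the bias) in the direction that reduces the error.
    for i in 0..<weights.count {
        weights[i] -= learningRate * delta * input[i]
    }
    bias -= learningRate * delta
}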

At the moment, Apple’s new machine learning APIs can be used for building neural networks that do inference only, not training: you train your model elsewhere and supply the learned weights to these APIs. Good thing that Big Nerd Ranch does training.

Accelerate: BNNS

The first new API is part of the Accelerate framework and is called BNNS, which stands for Basic Neural Network Subroutines. BNNS complements BLAS (Basic Linear Algebra Subprograms), which has been used in some third-party machine learning applications.

BNNS represents layers with the BNNSFilter type. Accelerate supports three types of layers: the convolution layer (created by the BNNSFilterCreateConvolutionLayer function), the fully connected layer (BNNSFilterCreateFullyConnectedLayer), and the pooling layer (BNNSFilterCreatePoolingLayer).

The MNIST database is a well-known data set containing tens of thousands of hand-written digits that were scanned and size-normalized to fit a 20 by 20 pixel box.

One approach to processing image data is to convert an image into a vector and pass it through a fully connected layer. For the MNIST data, a single 20×20 image would become a vector of 400 values. Here’s how a hand-written digit “1” would get converted to a vector:

Handwriting converted to vector
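
The flattening itself is straightforward. Here is a minimal Swift sketch, where `pixelRows` is just a blank 20×20 placeholder standing in for real digit data:

// Flatten a 20x20 grayscale image (rows of pixel intensities) into a 400-element vector.
let pixelRows = [[UInt8]](repeating: [UInt8](repeating: 0, count: 20), count: 20)

let imageVector: [Float] = pixelRows.flatMap { row in
    row.map { Float($0) / 255.0 }   // scale 0-255 intensities into the 0-1 range
}
// imageVector.count == 400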

Below is sample code for configuring a fully connected layer that takes a vector of size 400 as an input, uses the sigmoid activation function and outputs a vector of size 25:

    // input layer descriptor
    BNNSVectorDescriptor i_desc = {
        .size = 400,
        .data_type = BNNSDataTypeFloat32,
        .data_scale = 0,
        .data_bias = 0,
    };

    // hidden layer descriptor
    BNNSVectorDescriptor h_desc = {
        .size = 25,
        .data_type = BNNSDataTypeFloat32,
        .data_scale = 0,
        .data_bias = 0,
    };

    // activation function
    BNNSActivation activation = {
        .function = BNNSActivationFunctionSigmoid,
        .alpha = 0,
        .beta = 0,
    };

    BNNSFullyConnectedLayerParameters in_layer_params = {
        .in_size = i_desc.size,
        .out_size = h_desc.size,
        .activation = activation,
        .weights.data = theta1,             // theta1 points to the layer's trained weights, loaded elsewhere
        .weights.data_type = BNNSDataTypeFloat32,
        .bias.data_type = BNNSDataTypeFloat32,
    };

    // Common filter parameters
    BNNSFilterParameters filter_params = {
        .version = BNNSAPIVersion_1_0,   // API version is mandatory
    };

    // Create a new fully connected layer filter (ih = input-to-hidden)
    BNNSFilter ih_filter = BNNSFilterCreateFullyConnectedLayer(&i_desc, &h_desc, &in_layer_params, &filter_params);

    // i_stack holds the 400 input values; here `bir` is assumed to point to the flattened
    // image data, but it could also be allocated with calloc(i_desc.size, sizeof(float)).
    float * i_stack = bir;
    float * h_stack = (float *)calloc(h_desc.size, sizeof(float));
    // o_desc is the output layer's descriptor, declared the same way as i_desc and h_desc.
    float * o_stack = (float *)calloc(o_desc.size, sizeof(float));

    int ih_status = BNNSFilterApply(ih_filter, i_stack, h_stack);

Metal!

Does it get any more metal than this? As a matter of fact, it does, because the second neural network API is part of the Metal Performance Shaders (MPS) framework. While Accelerate is the framework for performing fast computation on the CPU, Metal pushes the GPU to its limit. Metal’s flavor is the CNN, or Convolutional Neural Network, exposed through the MPSCNN family of classes.

MPS comes with a similar set of APIs. Creating a convolution layer requires the MPSCNNConvolutionDescriptor and MPSCNNConvolution classes. A pooling layer is created with MPSCNNPoolingMax or MPSCNNPoolingAverage, and a fully connected layer with MPSCNNFullyConnected.
The activation functions are defined by subclasses of MPSCNNNeuron: MPSCNNNeuronLinear, MPSCNNNeuronReLU, MPSCNNNeuronSigmoid, MPSCNNNeuronTanH, MPSCNNNeuronAbsolute.
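
Putting a few of these classes together, configuring the same 400-in, 25-out fully connected layer might look roughly like the Swift sketch below. Treat it as a sketch rather than drop-in code: the zero-filled weight and bias arrays are placeholders for parameters trained elsewhere, and the exact initializer signatures may differ slightly between SDK versions.

import Metal
import MetalPerformanceShaders

let device = MTLCreateSystemDefaultDevice()!   // assumes a Metal-capable device

let sigmoid = MPSCNNNeuronSigmoid(device: device)

// A fully connected layer is described like a convolution whose kernel covers the whole input.
let descriptor = MPSCNNConvolutionDescriptor(kernelWidth: 20,
                                             kernelHeight: 20,
                                             inputFeatureChannels: 1,
                                             outputFeatureChannels: 25,
                                             neuronFilter: sigmoid)

var weights = [Float](repeating: 0, count: 20 * 20 * 1 * 25)   // placeholder weights
var biases = [Float](repeating: 0, count: 25)                  // placeholder biases

let fullyConnected = MPSCNNFullyConnected(device: device,
                                          convolutionDescriptor: descriptor,
                                          kernelWeights: &weights,
                                          biasTerms: &biases,
                                          flags: .none)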

BNNS and CNN compared

This table presents the list of activation functions in Accelerate and Metal:

| Accelerate/BNNS | Metal Performance Shaders/CNN |
| --- | --- |
| BNNSActivationFunctionIdentity | (no equivalent) |
| BNNSActivationFunctionRectifiedLinear | MPSCNNNeuronReLU |
| (no equivalent) | MPSCNNNeuronLinear |
| BNNSActivationFunctionLeakyRectifiedLinear | (no equivalent) |
| BNNSActivationFunctionSigmoid | MPSCNNNeuronSigmoid |
| BNNSActivationFunctionTanh | MPSCNNNeuronTanH |
| BNNSActivationFunctionScaledTanh | (no equivalent) |
| BNNSActivationFunctionAbs | MPSCNNNeuronAbsolute |

Pooling functions:

| Accelerate/BNNS | Metal Performance Shaders/CNN |
| --- | --- |
| BNNSPoolingFunctionMax | MPSCNNPoolingMax |
| BNNSPoolingFunctionAverage | MPSCNNPoolingAverage |

Accelerate and Metal provide very similar sets of functionality for neural networks, so the choice between them will depend on the application. While GPUs are typically preferred for the kinds of computations required in machine learning, data locality may cause the Metal CNN version to perform worse than the Accelerate BNNS version. If the neural network operates on images that have already been loaded into the GPU, for example using MPSImage and the new MPSTemporaryImage, Metal is the clear winner.

Want more info on machine learning? Check out this post on getting started with Core ML, a new framework announced at WWDC 2017.

The post Neural Networks in iOS 10 and macOS appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/neural-networks-in-ios-10-and-macos/feed/ 0
Android Studio Live Templates https://bignerdranch.com/blog/android-studio-live-templates/ https://bignerdranch.com/blog/android-studio-live-templates/#respond Thu, 18 Jun 2015 15:00:00 +0000 https://nerdranchighq.wpengine.com/blog/android-studio-live-templates/

Code completion can improve your productivity by reducing how much you have to type, but there are situations when a more powerful tool is needed. Thanks to Android Studio and IntelliJ, live templates make it much easier to focus on just the things you care about.

The post Android Studio Live Templates appeared first on Big Nerd Ranch.

]]>

Code completion can improve your productivity by reducing how much you have to type, but there are situations when a more powerful tool is needed. Thanks to Android Studio and IntelliJ, live templates make it much easier to focus on just the things you care about.

Live Templates

Live Templates are code snippets that I can insert into my code by typing their abbreviation and pressing tab. They live in the Editor section of the Preferences.

Live Templates in Preferences

A template contains hard-coded text and placeholder tokens (or variables). The parts delimited by the $ character on either end are variables, and they are normally the things that I’d be expected to type in.

For example, one of the built-in templates looks like this:

for(int $INDEX$ = 0; $INDEX$ < $LIMIT$; $INDEX$++) {
  $END$
}

Here, we have three variables $INDEX$, $LIMIT$ and $END$.

  • $END$ is a special predefined variable that controls where your cursor will be placed once you’re done filling out the template. We will have to fill out the values of the other two.
  • To use this template, I will type fori in the Java file and press tab. Android Studio will expand the template and put the cursor on the first variable that needs to be replaced. In this case, it will put the cursor where the template has the $INDEX$ token.
  • As I type something, the other two places where $INDEX$ appears will copy what I’m typing, live!
  • When I’m done naming the index variable, I will press tab or return and the cursor will move to the next variable that needs to be defined, which is $LIMIT$.
  • Finally, after I finish typing in the $LIMIT$ variable, pressing tab or return will place the cursor where the $END$ variable is and will stop the template fill-out session.

Templates in Action

Let’s look at a more complex example.
In our Android Programming Guide, we use the newIntent pattern for Activities and the newInstance pattern for Fragments. Typically, creating a new Activity/Fragment pair involves the following steps:

  • Create the newIntent method in the activity.
  • Create the constant(s) for the names of extras to be passed with the Intent.
  • Create the getFragment method that reads the Intent extras and passes them on to the fragment’s newInstance method.
  • Create the newInstance method in the fragment.
  • Create the constant(s) for the names of the arguments to be set on the fragment.
  • Create the instance variable(s) to store the values of the arguments.
  • Read the arguments in the onCreate method.

In this video, I demonstrate how live templates make it much easier to focus on just the things I care about, rather than the boilerplate.

In the first part of the video, I’m using my “Activity New Intent with Arguments” template. I type ania, the abbreviation I assigned to it, then hit <tab> so Android Studio expands the template, and then I type these characters: String<tab><tab>scannerId<tab>. I end up with this code:

public class ScannerActivity extends SingleFragmentActivity {
    private static final String SCANNER_ID = "ScannerActivity.SCANNER_ID";

    public static Intent newIntent(Context context, String scannerId) {
        Intent intent = new Intent(context, ScannerActivity.class);
        intent.putExtra(SCANNER_ID, scannerId);
        return intent;
    }

    @Override
    protected Fragment getFragment() {
        String scannerId = getIntent().getStringExtra(SCANNER_ID);
        return ScannerFragment.newInstance(scannerId);
    }
}

I have another template, “Fragment New Instance with Arguments.” I type the abbreviation I assigned to the template, fnia, then <tab>, and get the expanded template. Then I type the same sequence of characters as for the ania template: String<tab><tab>scannerId<tab>. This is the result:

public class ScannerFragment extends Fragment {
    private static final String SCANNER_ID = "ScannerFragment.SCANNER_ID";

    private String mScannerId;

    public static ScannerFragment newInstance(String scannerId) {
        ScannerFragment fragment = new ScannerFragment();
        Bundle args = new Bundle();
        args.putString(SCANNER_ID, scannerId);
        fragment.setArguments(args);
        return fragment;
    }

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mScannerId = getArguments().getString(SCANNER_ID);
    }
}

Not too bad for just under 30 keystrokes.

Creating Templates

There are two parts to live templates. One is the template itself and the other is the definition of the variables. This is my ania live template:

private static final String $EXTRA_PARAM$ = "$CLASS_NAME$.$EXTRA_PARAM$";

public static Intent newIntent(Context context, $EXTRA_CLASS$ $EXTRA_VAR$) {
    Intent intent = new Intent(context, $CLASS_NAME$.class);
    intent.putExtra($EXTRA_PARAM$, $EXTRA_VAR$);$END$
    return intent;
}

@Override
protected Fragment getFragment() {
    $EXTRA_CLASS$ $EXTRA_VAR$ = getIntent().get$EXTRA_CLASS$Extra($EXTRA_PARAM$);
    return $FRAGMENT_CLASS$.newInstance($EXTRA_VAR$);
}

Note that variables are delimited by the $ symbol. Normally, each variable is something you need to type in. If a variable appears in multiple places, all of them are updated simultaneously as you type. It’s possible to customize these variables and even to set their values automatically based on other variables.

For example, $CLASS_NAME$ is defined as expression className, which evaluates to the name of the current class. Here’s the full list of definitions:

| Name | Expression | Default Value | Skip if Defined |
| --- | --- | --- | --- |
| EXTRA_CLASS | typeOfVariable(VAR) | | [ ] |
| EXTRA_VAR | suggestVariableName | | [ ] |
| CLASS_NAME | className | | [x] |
| EXTRA_PARAM | capitalizeAndUnderscore(EXTRA_VAR) | | [x] |
| FRAGMENT_CLASS | groovyScript("_1.replaceAll('Activity','Fragment')", CLASS_NAME) | | [x] |

Three of the variables are marked “Skip if defined,” so I don’t need to type them; their values are derived from what I have already typed. I can even use groovyScript to evaluate expressions beyond the fairly rich predefined set.

As I noted earlier, $END$ controls where your cursor will be once you’re done filling out the template. In this example, I want to put it inside the newIntent method just before the return statement, so that I can customize the Intent object further. For example, I could add flags or more extras.

The fnia template is very similar:

private static final String $ARG_PARAM$ = "$CLASS_NAME$.$ARG_PARAM$";

private $ARG_CLASS_DITTO$ m$INST_VAR$;

public static $CLASS_NAME$ newInstance($ARG_CLASS$ $ARG_VAR$) {
    $CLASS_NAME$ fragment = new $CLASS_NAME$();
    Bundle args = new Bundle();
    args.put$ARG_CLASS$($ARG_PARAM$, $ARG_VAR$);
    fragment.setArguments(args);
    return fragment;
}

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    m$INST_VAR$ = getArguments().get$ARG_CLASS$($ARG_PARAM$);
}

I had to use a little trick in this template. I created a special variable, $ARG_CLASS_DITTO$, that’s a copy of the $ARG_CLASS$ variable. The reason for the duplicate is to force the cursor to start at the type of the parameter of the newInstance method. If I didn’t do this, the cursor would first jump to the type of the instance variable, then to the name of the parameter.

Thanks to live templates, I’ve reduced the amount of typing I have to do when creating new Activities and Fragments. Of course, there are many other situations where Live Templates would come in handy as well. I’m sure lots of you have your own productivity tips and examples of Live Templates, so please feel free to share with your fellow developers in the comments!

The post Android Studio Live Templates appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/android-studio-live-templates/feed/ 0
Becoming Material with Android’s Design Support Library https://bignerdranch.com/blog/becoming-material-with-androids-design-support-library/ https://bignerdranch.com/blog/becoming-material-with-androids-design-support-library/#respond Fri, 05 Jun 2015 09:53:37 +0000 https://nerdranchighq.wpengine.com/blog/becoming-material-with-androids-design-support-library/ Last year, Google announced Material Design, a set of guidelines for Android apps and apps on other platforms. One example of such new goodies was the [Floating Action Button](/blog/floating-action-buttons-in-android-l/), which requires a bit of setup. What's more, some of its features work only on Lollipop and later. Enter the design support library, announced at this year's I/O.

The post Becoming Material with Android’s Design Support Library appeared first on Big Nerd Ranch.

]]>

Last year, Google announced Material Design, a set of guidelines for Android apps and apps on other platforms.
One example of the new goodies it brought was the Floating Action Button, which requires a bit of setup. What’s more, some of its features work only on Lollipop and later.

Sure, it’s technically just a circle with a drop shadow, but how many developers have the extra time to implement every new design language specification?
A few third-party implementations of the floating action button became available, but ensuring consistency and completeness remained elusive.

Fortunately, Google clearly recognizes the importance of a good user experience for the KitKats and JellyBeans of the world (and all the way back to Eclair MR1).
As time progressed, more and more elements of Material Design were added to the AppCompat support library.

However, the AppCompat library may not be the right place to add all of the design goodies. Enter the design support library, announced at this year’s I/O.
Not only does it bring the FABulous to earlier versions of Android, it also gives us new goodies, such as the Navigation View, snackbar, floating labels for EditText, and CoordinatorLayout.

The NavigationView makes it easy to create Material Design-style side drawer layouts.
It consists of the header and the menu.

<android.support.v4.widget.DrawerLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:fitsSystemWindows="true">

    <!-- your content layout -->

    <android.support.design.widget.NavigationView
            android:id="@+id/navigation_view"
            android:layout_width="wrap_content"
            android:layout_height="match_parent"
            android:layout_gravity="start"
            app:headerLayout="@layout/drawer_header"
            app:menu="@menu/drawer"/>
</android.support.v4.widget.DrawerLayout>

The header is loaded from the layout specified by the app:headerLayout attribute. It is typically configured to highlight the identity of the app by using an appropriate color scheme and/or large images.

The menu is used to define the “body” of the navigation drawer. It is loaded from the menu resource specified by the app:menu attribute. The top-level menu items will be displayed at the top portion of the navigation. The NavigationView also supports hierarchical menus; the sub-menus will be displayed below the main list and will feature subheaders.

<menu xmlns:android="http://schemas.android.com/apk/res/android">
  <group android:checkableBehavior="single">
      <item
          android:id="@+id/navigation_item_1"
          android:checked="true"
          android:icon="@drawable/ic_android"
          android:title="@string/navigation_item_1"/>
      <item
          android:id="@+id/navigation_item_2"
          android:icon="@drawable/ic_android"
          android:title="@string/navigation_item_2"/>
  </group>

  <item
      android:id="@+id/navigation_subheader"
      android:title="@string/navigation_subheader">
      <menu>
          <item
              android:id="@+id/navigation_sub_item_1"
              android:icon="@drawable/ic_android"
              android:title="@string/navigation_sub_item_1"/>
          <item
              android:id="@+id/navigation_sub_item_2"
              android:icon="@drawable/ic_android"
              android:title="@string/navigation_sub_item_2"/>
      </menu>
  </item>
</menu>

Since this menu resource is not loaded in the Activity’s or Fragment’s onCreateOptionsMenu, selecting one of these items will not trigger onOptionsItemSelected. Instead, we need to attach the click listener to the NavigationView:

    navigationView.setNavigationItemSelectedListener(
            new NavigationView.OnNavigationItemSelectedListener() {
        @Override
        public boolean onNavigationItemSelected(MenuItem menuItem) {
            menuItem.setChecked(true);
            mDrawerLayout.closeDrawers();
            switch (menuItem.getItemId()) {
                case R.id.navigation_item_1:
                    // react
                    break;
            }
            return true;
        }
    });

TextInputLayout

The design library takes an interesting approach to customizing the EditText: it doesn’t change it directly. Instead, the TextInputLayout is used to wrap the EditText and provide the enhancements.

The first one, displaying a floating label when the user types something into the field, is done automagically. The TextInputLayout finds the EditText among its children and attaches a TextWatcher to it, so it’s able to determine when the field has been modified and animates the movement of the hint from its regular place in the EditText to the floating label position above it.

The second enhancement, displaying the error message, requires a slight change in code. Instead of setting the error on the EditText, the error should be set on the TextInputLayout.
That’s because there is no automatic way for the TextInputLayout to be notified when the error is set on the EditText.

Here’s what the layout might look like:

    <android.support.design.widget.TextInputLayout
        android:id="@+id/username_text_input_layout"
        android:layout_width="match_parent"
        android:layout_height="wrap_content">
        <EditText
            android:id="@+id/username_edit_text"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:hint="@string/username_hint"/>
    </android.support.design.widget.TextInputLayout>

Note that both the EditText and TextInputLayout need layout IDs.
In the fragment, we would need to configure the TextInputLayout to enable displaying errors:

    TextInputLayout usernameTextInputLayout = (TextInputLayout) view.findViewById(R.id.username_text_input_layout);
    usernameTextInputLayout.setErrorEnabled(true);
    ...
    usernameTextInputLayout.setError(getString(R.string.username_required)); // setError() expects a CharSequence, not a resource ID

Snackbar

Snackbars are like Toasts, but a bit more flexible.
A snackbar can contain an action and can be dismissed with a swipe.
Its API has been deliberately made similar to Toast’s:

Snackbar
  .make(view, R.string.snackbar_text, Snackbar.LENGTH_LONG)
  .setAction(R.string.snackbar_action, mOnClickListener)
  .show();

Instead of displaying in a predetermined location in the window, the snackbar places itself inside a suitable parent of the view that’s passed into the make() method; the static make() method walks up the view hierarchy to find that parent.

What constitutes a suitable parent? It’s the nearest ancestor that’s a CoordinatorLayout, if one is available; failing that, the nearest FrameLayout. Since the window’s decor content view is also a FrameLayout, that’s the furthest the Snackbar will search.

Why the special treatment for the CoordinatorLayout? It offers some nice benefits, including moving floating action buttons out of the way when the snackbar appears and moving them back when the snackbar is dismissed.

CoordinatorLayout

The CoordinatorLayout is responsible for managing interactions between its child views. It does so by applying rules set by its children’s CoordinatorLayout.LayoutParams. One set of rules is defined by the subclasses of the abstract CoordinatorLayout.Behavior class. There are a few Behaviors that come bundled with the design support library, such as AppBarLayout.ScrollingViewBehavior and SwipeDismissBehavior.

For example, to make scrolling of a view (e.g., RecyclerView) affect the toolbar, you could add app:layout_behavior="@string/appbar_scrolling_view_behavior" to your view:

    <android.support.v7.widget.RecyclerView
        android:id="@+id/crime_recycler_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_behavior="@string/appbar_scrolling_view_behavior"/>

layout_behavior expects the fully-qualified name of the class that implements the Behavior, in this case it happens to be android.support.design.widget.AppBarLayout$ScrollingViewBehavior.

Another example is keeping the floating action button anchored to the bottom of the toolbar. This can be achieved with the pair of attributes app:layout_anchor and app:layout_anchorGravity as in this example:

    <android.support.design.widget.FloatingActionButton
        android:id="@+id/fab"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:layout_anchor="@id/appbar"
        app:layout_anchorGravity="bottom|right|end"
        app:borderWidth="0dp"
        android:layout_margin="@dimen/fab_margin"
        android:src="@drawable/ic_photo_camera_white_24dp" />

Collapsing the toolbar can be fine-tuned via app:layout_collapseMode. It accepts three values: "none", "pin" and "parallax". For example, setting this attribute to "parallax" is useful for applying a nice parallax effect to images.

The CoordinatorLayout deserves a bit more attention, so I’ll return to it in a later post.

Demo

This blog post wouldn’t be complete without putting these new toys into action. Here’s a demo that shows:

  • the TextInputLayout managing the hint animation
  • FloatingActionButton anchored to the bottom of the CollapsingToolbar
  • Snackbar with the associated action

The post Becoming Material with Android’s Design Support Library appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/becoming-material-with-androids-design-support-library/feed/ 0
Playing with Numbers in Global Playgrounds https://bignerdranch.com/blog/playing-with-numbers-in-global-playgrounds/ https://bignerdranch.com/blog/playing-with-numbers-in-global-playgrounds/#respond Thu, 15 Jan 2015 12:10:22 +0000 https://nerdranchighq.wpengine.com/blog/playing-with-numbers-in-global-playgrounds/ Learn more about showing numbers correctly in iOS and OS X using Xcode 6 and Playgrounds.

The post Playing with Numbers in Global Playgrounds appeared first on Big Nerd Ranch.

]]>

One day, I was walking to the office and came across an “Office Space For Rent” sign. “It’s either a really tiny office or maybe something else is going on,” I thought, trying to decipher what “1.000” represented.

It can get confusing. For example, in the US, UK and Australia, the dot in the number represents a decimal separator. However, in Germany, the Netherlands and Turkey, the dot could serve as a thousands or grouping separator. Of course, the context helped decipher this mystery. Since this happened in Amsterdam, “1.000” meant one thousand.

When we write apps, we need to take this context into account, because people who use our app may live in other countries and speak other languages. Fortunately, we don’t have to remember the details about each region in the world ourselves. Most modern operating systems encapsulate such knowledge in a “locale,” which knows how to print numbers and dates, among other things.

Locales and Playgrounds

Internationalization and localization tools have seen major improvements in Xcode 6, but perhaps the unsung hero is the Playground. As you type code, you can see its results immediately, which makes it much easier to experiment and learn.

First, let’s determine the locale of our Playground:

NSLocale.currentLocale().localeIdentifier

On my laptop, the results sidebar shows "en_US". This is not the only locale I have at my disposal, however. Let’s see what else is available:

NSLocale.availableLocaleIdentifiers()

This is what I see in the results sidebar:

["eu", "hr_BA", "en_C...

Since the list is very large and doesn’t fit, we can click the Quick Look (“eye”) icon in the sidebar.

If I want to play with particular locales, I can reference them as follows:

let unitedStatesLocale = NSLocale(localeIdentifier: "en_US")
let chinaLocale = NSLocale(localeIdentifier: "zh_Hans")
let germanyLocale = NSLocale(localeIdentifier: "de_DE")
let indiaLocale = NSLocale(localeIdentifier: "en_IN")

Let’s see what they are called in English:

unitedStatesLocale.displayNameForKey(NSLocaleIdentifier, value: unitedStatesLocale.localeIdentifier)!
unitedStatesLocale.displayNameForKey(NSLocaleIdentifier, value: chinaLocale.localeIdentifier)!
unitedStatesLocale.displayNameForKey(NSLocaleIdentifier, value: germanyLocale.localeIdentifier)!
unitedStatesLocale.displayNameForKey(NSLocaleIdentifier, value: indiaLocale.localeIdentifier)!

The results sidebar will show:

"English (United States)"
"Chinese (Simplified)"
"German (Germany)"
"English (India)"

If we wanted to learn what a particular locale is called somewhere else, we would write:

germanyLocale.displayNameForKey(NSLocaleIdentifier, value: unitedStatesLocale.localeIdentifier)!

The “en_US” locale happens to be called "Englisch (Vereinigte Staaten)" in German.

Representing Numbers

Locales know more than just what to call themselves and others.

When it comes to numbers, a lot of English-speaking locales follow similar rules. Many people know the differences between number formatting rules in the US and UK and continental Europe. In the US or UK, 42 million would be printed as 42,000,000.00. In some parts of Europe, the same number would be printed as 42.000.000,00. But in other parts of Europe, it would be printed as 42 000 000,00. We haven’t even started scratching the surface, so how are we supposed to keep track of all these differences? Number formatters to the rescue!

Let’s pick a large number to identify differences in formatting in these various locales:

let largeNumber = 360451996007.42
var numberFormatter = NSNumberFormatter()
numberFormatter.numberStyle = NSNumberFormatterStyle.DecimalStyle

numberFormatter.locale = unitedStatesLocale
numberFormatter.stringFromNumber(largeNumber)!

numberFormatter.locale = chinaLocale
numberFormatter.stringFromNumber(largeNumber)!

numberFormatter.locale = germanyLocale
numberFormatter.stringFromNumber(largeNumber)!

numberFormatter.locale = indiaLocale
numberFormatter.stringFromNumber(largeNumber)!

Not surprisingly, the US and Chinese (Simplified) locales format this number identically, but there are several variations for the other locales:

"360,451,996,007.42"
"360,451,996,007.42"
"360.451.996.007,42"
"3,60,45,19,96,007.42"

A few interesting observations:

  • The Indian numbering system groups digits differently, using the comma to denote the lowest three digits (thousand) and then grouping every 2 digits: 3,60,45,19,96,007.42.
  • While both of them are in Europe, France uses spaces to group thousands, whereas Germany uses periods. So you can’t really generalize things about numbers in Europe.
  • In some locales, even the digits themselves may appear differently, as the snippet below shows.
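
For example, reusing largeNumber from above, formatting with an Arabic (Egypt) locale should display the value with Eastern Arabic digits in the results sidebar:

let arabicFormatter = NSNumberFormatter()
arabicFormatter.numberStyle = NSNumberFormatterStyle.DecimalStyle
arabicFormatter.locale = NSLocale(localeIdentifier: "ar_EG")
arabicFormatter.stringFromNumber(largeNumber)!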

Next time somebody sends you an email saying that their app was downloaded 1,500 times in the first hour, make sure you know where they are coming from. It could mean a major hit (one and a half thousand) or it could mean only one customer managed to download the app fully, while the second customer only managed to get half the app (one and a half).

Side Note: Numbers in Code

Sometimes it is useful to group digits in numbers in a certain way while we are writing code. This does not affect the actual value of those numbers. For example:

let worldPopulation = 7_288_000_000
// or 7_28_80_00_000 in Indian Numbering System
let phoneNumber = 1_800_555_1234
let socialSecurityNumber = 123_45_6789

Dollars and Cents (Er, Pounds and Pence)

Sometimes numbers mean more than just the digits and symbols that comprise them. Sometimes numbers represent money. If it doesn’t make any difference to you, I would like to give you 1.50 dollars in exchange for 1,500 euros. You can already spot that something doesn’t look right. Jeremy Sherman wrote a post about this. What can we learn about currency formatting with Swift?

var currencyFormatter = NSNumberFormatter()
currencyFormatter.numberStyle = NSNumberFormatterStyle.CurrencyStyle

currencyFormatter.locale = unitedStatesLocale
currencyFormatter.stringFromNumber(largeNumber)!

currencyFormatter.locale = chinaLocale
currencyFormatter.stringFromNumber(largeNumber)!

currencyFormatter.locale = germanyLocale
currencyFormatter.stringFromNumber(largeNumber)!

currencyFormatter.locale = indiaLocale
currencyFormatter.stringFromNumber(largeNumber)!

This shows us how the same amount is formatted using the local currency.

"$360,451,996,007.42"
"¤ 360,451,996,007.42"
"360.451.996.007,42 €"
"₹ 3,60,45,19,96,007.42"

It’s clear that different locales have different currency symbols and place them differently. Depending on the application, this may or may not be what we intended. So let’s specify the currency explicitly, to avoid any loss and to potentially forego any gains from such careless currency “exchange”:

var dollarFormatter = NSNumberFormatter()
dollarFormatter.numberStyle = NSNumberFormatterStyle.CurrencyStyle
dollarFormatter.currencyCode = "USD"

dollarFormatter.locale = unitedStatesLocale
dollarFormatter.stringFromNumber(largeNumber)!

dollarFormatter.locale = chinaLocale
dollarFormatter.stringFromNumber(largeNumber)!

dollarFormatter.locale = germanyLocale
dollarFormatter.stringFromNumber(largeNumber)!

dollarFormatter.locale = indiaLocale
dollarFormatter.stringFromNumber(largeNumber)!

This amount represents a very large sum. Just don’t try to cash it.

"$360,451,996,007.42"
"US$ 360,451,996,007.42"
"360.451.996.007,42 $"
"US$ 3,60,45,19,96,007.42"

In some cases we see the currency symbol $, sometimes we see US$. That’s because the dollar sign is used for some other currencies and may be ambiguous.

Even within the EU, formatting amounts with Euro will yield many different forms:

let euroLocaleIdentifiers = ["bg_BG", "hr_HR", "cs_CS", "da_DK", "nl_NL", "en_UK", "et_EE", "fi_FI", "fr_FR", "de_DE", "el_GR", "hu_HU", "ga_IE", "it_IT", "lv_LV", "lt_LT", "mt_MT", "pl_PL", "pt_PT", "ro_RO", "sk_SK", "sl_SI", "es_ES", "sv_SE"]
let EuroLocales = euroLocaleIdentifiers.map({s in NSLocale(localeIdentifier: s)})

var euroFormatter = NSNumberFormatter()
euroFormatter.numberStyle = NSNumberFormatterStyle.CurrencyStyle
euroFormatter.currencyCode = "EUR"

for l in EuroLocales {
    euroFormatter.locale = l
    euroFormatter.stringFromNumber(largeNumber)
}

Here are all the variations. As you can see with Dutch and German, the currency may be formatted differently, depending on the country it’s being used in.

| Formatted euro amount | Languages |
| --- | --- |
| 360 451 996 007,42 € | Bulgarian, Czech, Estonian, Finnish, French, Italian, Lithuanian, Polish, Portuguese, Slovak, Swedish |
| 360.451.996.007,42 € | Croatian, Danish, Dutch (Belgium), German (Germany), Greek, Romanian, Slovenian, Spanish |
| € 360.451.996.007,42 | Dutch (Netherlands), German (Austria) |
| €360,451,996,007.42 | English (UK), Irish, Maltese |
| 360 451 996 007,42 EUR | Hungarian |
| €360 451 996 007,42 | Latvian |

Units, or How to Prevent Crash Landings

Currency is not the only numerical value with units. iOS 8 added HealthKit, with support for quantities representing things like weight, height, caloric intake, etc. For example, our program would manipulate weight in kilograms internally, but print out results in whatever unit is appropriate for the current locale:

let massFormatter = NSMassFormatter()
massFormatter.stringFromKilograms(5) // 11.023 lb
massFormatter.numberFormatter.locale = NSLocale(localeIdentifier: "zh_Hans")
massFormatter.stringFromKilograms(5) // 5千克

In addition to formatting the numbers, HealthKit makes it much easier to manipulate these quantities. For example, we can convert between pounds and kilograms:

let pounds = HKUnit.poundUnit()
let kilograms = HKUnit.gramUnitWithMetricPrefix(HKMetricPrefix.Kilo)
var personWeight = HKQuantity(unit: pounds, doubleValue: 180)
personWeight.isCompatibleWithUnit(kilograms) // true
personWeight.doubleValueForUnit(kilograms) // 81.6466266

Lessons Learned

Any time you want to display numbers to humans, you are likely to need the number formatter. Putting dots, commas, spaces or other marks in the number to separate thousands, groups or decimals is hard work, so let the number formatter do it for you.

First, understand the context in which the numbers are used. Do they represent quantities with units, such as money, weight or height? Or are they unit-less quantities? This will dictate how you handle the numbers. You might need to use special classes that wrap those quantities.

Second, choose the right number formatter to display the number. Dealing with money? Make sure you use the currency style number formatter AND specify the correct currency. Dealing with physical units, like mass? Take advantage of the formatters that came with HealthKit and use the right units (e.g., pounds vs kilograms). These little steps will help you deliver a more delightful experience.

The post Playing with Numbers in Global Playgrounds appeared first on Big Nerd Ranch.

]]>
https://bignerdranch.com/blog/playing-with-numbers-in-global-playgrounds/feed/ 0