Kurt Nelson - Big Nerd Ranch

ART Theory: Replacing Dalvik with Android Runtime (24 Jun 2014)

The Android Open Source Project has recently seen commit activity indicating that the Android Runtime (ART), included in KitKat, will soon replace Dalvik, the virtual machine that has been executing your apps since the beginning of Android. If everything goes according to plan, users will not notice a thing. Developers who stick to the Android SDK shouldn't have to do anything at all; those who use the NDK improperly may need to make minor updates.

Here’s a bit of history of the two, and some ideas of what Android “L” may contain.

Dalvik’s Sunset

Let's go back to 2009, when Dalvik was a brand-new virtual machine. Phones had small amounts of disk space and even less RAM, so bytes were at a premium. At that point, 64-bit architectures were not ubiquitous, and virtualization as a whole was just beginning to take off. Full ahead-of-time compilation wasted valuable space in memory and on disk, especially when a widely touted feature of Android at the time was that each app ran in a completely independent virtual machine. Java-style .class files were converted into a single .dex file which, when installed on the device, was optimized, or "odex'd," for that specific device. This allowed completely unused sections of code to be no-opped out, saving even more space, along with various other optimizations that trimmed unneeded bits here and there in the bytecode.

Today, all of this optimization for space is finally coming back to "byte" us. Now that devices have gigabytes of RAM, and many tablets have more storage than my first laptop, these space savings are for naught. Dalvik no longer provides advantages over the traditional Java virtual machine, and it is no longer being actively developed.

ART Theory

If we get to go back and reinvent a virtual machine that is fed Java, what should we do? The optimal VM would run Java code just as fast as functionally equivalent native code. It would also need to break away completely from the traditional JVM in response to the Oracle lawsuit.

To meet this need, the Android team has come up with ART, the Android Runtime. The biggest change from Dalvik is ahead-of-time compilation: on installation, ART compiles the app's bytecode down to machine code that runs on the bare metal of the device. Floating-point performance and UI responsiveness are the most noticeable improvements resulting from this process. Everything else is still up in the air until ART matures.
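
If you want to check which runtime a given device is using, Google's ART verification notes describe reading the java.vm.version system property, which reports 2.0.0 or higher under ART. A minimal sketch:

    // Minimal sketch: detecting whether the current process runs under ART.
    // Per Google's ART verification notes, "java.vm.version" is "2.0.0" or
    // higher on ART and 1.x on Dalvik.
    public final class RuntimeCheck {
        public static boolean isRunningOnArt() {
            String vmVersion = System.getProperty("java.vm.version");
            return vmVersion != null && !vmVersion.startsWith("1.");
        }
    }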

Why Should Consumers Care?

Android Runtime will mean a faster experience for users:

  • Ahead-of-time compilation = speed. Compilation overhead is paid once at install time and is no longer incurred while the app is running.

  • Improved garbage collection = speed. Eliminating even tiny interruptions on the UI thread of apps makes the UI feel much smoother.

  • Development and debugging improvements = speed. Once developers can profile their apps on ART devices, they will be able to pinpoint sections of code in need of attention far more accurately than before, which should get those sections fixed more quickly.

In summary, ART = more speed. Theoretically, ART could also improve battery life, but right now it is on par with Dalvik.

Why Should Developers Care?

ART means improvements for developers as well:

  • Apps should no longer get paused by allocation-triggered garbage collection (Dalvik's GC_FOR_ALLOC), even in low-memory situations.
  • Better messages when a NullPointerException is encountered: ART tells you which method the code tried to invoke on the null reference (see the sketch after this list).
  • Fewer interruptions from garbage collection in general.
  • Better performance in Java SDK code for floating-point math (Linpack performs ~10% better) and integer math (a Machin's-formula pi calculation performs ~10% better). Here's a full report on ART performance. Traditionally, if you were bottlenecked on performance, the only option was to switch to the NDK; now you might get a free speed boost when ART becomes the default.
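
To make the NullPointerException point concrete, here is a small sketch; the quoted message is an approximation of ART's format, not a guaranteed string:

    // Sketch: code whose failure message improves under ART.
    public class NpeDemo {
        static String title; // never assigned, so it stays null

        public static void main(String[] args) {
            // Dalvik throws a bare NullPointerException here. ART's message
            // names the call, roughly: Attempt to invoke virtual method
            // 'int java.lang.String.length()' on a null object reference.
            int length = title.length();
            System.out.println(length);
        }
    }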

Possible Issues with ART

  • Apps that consist of large amounts of NDK code, especially if the Java-NDK boundary is crossed often, will likely need fine-tuning to avoid performance regressions (see the sketch after this list).
  • Dalvik didn't complain about illegally stored pointers to Java objects; under ART, your app will now force close.
  • OpenGL/3D performance has taken a slight hit in the current iteration.
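
On the first point, the usual fix is to cross the Java-NDK boundary less often. The sketch below is hypothetical: the library name and both native methods are invented for illustration.

    // Hypothetical sketch: batching work so the JNI boundary is crossed once
    // per frame rather than once per pixel. "imagefilter" and both native
    // methods are invented names.
    public class ImageFilter {
        static {
            System.loadLibrary("imagefilter");
        }

        // Costly pattern: one JNI transition per pixel.
        private static native int processPixel(int pixel);

        // Cheaper pattern: one JNI transition per frame; the native side loops.
        private static native void processFrame(int[] pixels);

        public static void filter(int[] pixels) {
            processFrame(pixels); // a single boundary crossing
        }
    }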

I'm looking forward to ART. Just remember to take everything here with a grain of salt: ART is still very much under active development and can change significantly at any time. I hope to find out more at Google I/O and publish future posts with more info.

Kurt Nelson and Kristin Marsicano will be attending Google I/O. Meet them at the conference or at our Android Community Party tonight.

The State of the Weariverse (23 Jun 2014)

Wearables are becoming ubiquitous. Every day, it feels like there's some new fitness tracker with a slightly different set of sensors and an entirely new app/API/website/life-changing-experience/yet-another-account-to-set-up. And fitness trackers are only the most visible corner of wearable computing. Once you start counting the many devices that used to ship as USB peripherals and are now being updated with Bluetooth Low Energy and a rechargeable battery, the list of wearables grows even longer than the list of available Android devices.

What is a Wearable?

Though I would argue that we aren't exactly sure where the definition of the term begins and ends, a wearable can be loosely defined as a device that the user interacts with entirely inside his or her personal space. Fitness trackers, smart glasses, smartphones, gesture detectors and smart watches all fit this definition. With so many wearable devices available, the wearable ecosystem is becoming incredibly fragmented, an order of magnitude worse than anything we dealt with during the PC era. Forbes observed this trend toward fragmentation back in November of last year, and many more devices have been released since.

Defining wearable computing is not only a fun philosophical debate (e.g., if I strap a Furby to my head, is it then a wearable, kid-friendly interface for Furbish?), but it's also pertinent to solving this fragmentation. We need an abstraction broad enough to cover all of these different devices, yet specific enough to keep us within a realistic, implementable realm.

To add to the complication, wearables are just a small subset of the wider Internet of Things. We've got all of these devices strewn about our lives that are IP-addressable, or that can otherwise be controlled via web calls to an API somewhere. My lightbulbs, door locks, speakers, car, refrigerator and thermostats are all accessible through my phone right now, and those are just my personal "Things."

How Do We Get Things Talking?

So now that I've got all these Things everywhere in my life, and I'm wearing 12 different CPUs on my body while keeping everything charged and connected to the internet, it really bothers me that I can't use them in unison without going to great lengths, and I'm a software developer. For someone who isn't a developer, it's basically impossible to rig a single button on a smartphone to both turn off the lights and lock the door. Furthermore, even with Apple's HomeKit, Android users like me are left in the dark, with little hope of a vendor-agnostic update to the protocol.

What we need is an open standard protocol: a way for a device to publish what it can do and who can trigger it, and a way for a wearable to share events and commands it gleans from the environment without having to explicitly connect or pair devices. While Bluetooth LE gets us part of the way there by making pairing a thing of the past and by reading properties out of thin air, every device has its own unique set of properties, doing us no good beyond the Personal Area Network.

We need a new TCP over IP: the Things Communication Protocol, consisting of a standardized set of properties, a way of syncing those properties over IP and PANs, and a way of replicating and modifying them both in the cloud and on device.

This protocol should have a large list of generic properties that could be implemented: brightness, hue, open/closed state, temperature, ready/not ready, capacity, humidity, power, GPS location, room placement and other physical properties—the list goes on and on. Things would not have to know about all properties, just the ones the device chooses to implement. Properties beyond the basics would come about organically as new device manufacturers declare them.

Wearables or other input devices would declare their own set of properties (once again up to the creator) such as gestures, distance, counters and sensor readings. Devices should be responsible for publishing properties in some fashion, either locally over Bluetooth LE to be distributed by another device, or directly to a wider web service. Ideally, this property directory would allow for easy creation of decentralized replicas, using a consensus algorithm that allows heavy users to set up an on-site directory server to minimize latency. Authentication would be the responsibility of the directories instead of the individual devices, including allowing the replication of a subset of properties.

Once we had this information in the form of properties freely flowing around the Weariverse, acting on them would become simple. A light switch on the wall could simply manipulate the on/off state of a certain group of properties, while a phone app could replicate and expose all properties of a set of lights. Apps that handle event-driven automation could live in the cloud, subscribing to relevant properties and firing off changes when events come in: for example, opening your front door could trigger turning on lights and music inside your house.
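
Since no such protocol exists yet, the sketch below is purely hypothetical; every type, property name and method is invented to make the property-and-subscription idea concrete.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.BiConsumer;

    // Purely hypothetical model of a Thing that publishes only the
    // properties it chooses to implement.
    class Thing {
        private final Map<String, Object> properties = new ConcurrentHashMap<>();
        private final List<BiConsumer<String, Object>> subscribers =
                new CopyOnWriteArrayList<>();

        void publish(String property, Object value) {
            properties.put(property, value);
            for (BiConsumer<String, Object> subscriber : subscribers) {
                subscriber.accept(property, value);
            }
        }

        Object get(String property) {
            return properties.get(property);
        }

        void subscribe(BiConsumer<String, Object> subscriber) {
            subscribers.add(subscriber);
        }
    }

    class FrontDoorAutomation {
        public static void main(String[] args) {
            Thing frontDoor = new Thing();
            Thing hallLight = new Thing();

            // Event-driven automation: opening the door turns on the light.
            frontDoor.subscribe((property, value) -> {
                if ("open_closed_state".equals(property) && "open".equals(value)) {
                    hallLight.publish("power", "on");
                }
            });

            frontDoor.publish("open_closed_state", "open");
            System.out.println("hall light: " + hallLight.get("power")); // "on"
        }
    }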

What’s Next for Wearables?

There are, of course, services like If This Then That (IFTTT) and OpenHAB that are very similar to what I have described, but using them requires a trade-off between configurability and ease of use. OpenHAB is intimidating for the average user: it requires editing a series of text files in various folders and installing a server on a computer that stays on within your Local Area Network, which is far beyond a typical user's skillset. IFTTT is incredibly simple to use and can be manipulated via drag-and-drop units in a web browser or mobile app, but that interface severely limits conditionals, state and multi-device actions. That said, OpenHAB is an active open-source project, and a more user-friendly configuration system is being built.

I believe that just as the iPhone was the tipping point for the PDA, we are nearing the tipping point of the Weariverse. But first, we must tackle these kinds of problems if we want to bring wearable computing and the Internet of Things to the general consumer. These projects are taking steps toward forming a cohesive Weariverse, and if you already have some of these devices in your life, I encourage you to try them out.

Designing Glassware (21 Mar 2014)

I recently attended a Google Glass Design Sprint hosted by the Glass team. I've been working on various Glassware since July, but this was the first formal design process I've gone through, and I learned a few things by putting myself in the mind of a designer.

Principles of Designing Glassware

First off, if you haven’t seen the official Glass Design guidelines or one of Timothy Jordan’s talks, you should take a detour there now.

Keep it Stupid Simple

Glassware is deceptively simple when you storyboard it; it may have only two or three different views. However, populating those views with contextually relevant data might require an enormous amount of computing power behind the scenes. And while it might seem like a good idea to let users access their entire history on your service, they might then find themselves wading through cards to find things, wasting battery on unneeded screen-on time and network access.

Combining APIs Makes Magic

The magic of Glassware happens when you combine completely separate sources to deliver content in an innovative way. A great example is Refresh, which meshes together Google Calendar, Facebook, LinkedIn and Twitter. With this combination, Refresh not only tells users who their next meeting is with, but also offers key information about that person. It is the social network-backed version of Thad Starner's remembrance agent.

The User Will Be Distracted

Quickness and ease of use cannot be emphasized enough. If a user is distracted by the real world while using your app, Glass will go to sleep and reset to the home card when they next wake it. If your user was deep inside your app's menus, he or she will not be happy about having to navigate back down into the bundle to resume the process.

This happens even with the native apps: if you are in the middle of captioning and sharing a picture and Glass times out, you have to start over. At other times, questionable network connectivity means voice recognition takes so long that you forget what you were doing.

Less to More

Instead of less being more, Glass encourages you to always have detailed information available to be read aloud or drilled down into, while not putting it in users' faces by default. Look at the example below: the main card the user sees is just the joke and its punchline, so they can quickly tell it without heavy device interaction. If the user wants to see who submitted the joke or any other metadata, he or she can tap again to get more info on a detail card.
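
As a sketch of that card built with the GDK preview's Card class (method names tracked the preview releases and may have shifted since), the joke goes in the main text and the metadata stays on a footnote or a tap-through detail card; the joke string here is just a placeholder:

    import android.app.Activity;
    import android.os.Bundle;
    import com.google.android.glass.app.Card;

    // Sketch of the "less to more" pattern with the GDK preview's Card class;
    // method names may differ between GDK preview releases.
    public class JokeActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            Card jokeCard = new Card(this);
            // Main card: just the joke, so the user can tell it at a glance.
            jokeCard.setText("Why do Java developers wear glasses? Because they can't C#.");
            // Metadata stays off the main card; the footnote hints at the
            // detail card the user reaches by tapping again.
            jokeCard.setFootnote("Tap for submitter info");
            setContentView(jokeCard.getView());
        }
    }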

Methodology

Unlike normal Android applications, Glassware currently always runs on the exact same device and screen. This means you can place elements very precisely without worrying about tablets or orientation changes. It also means you must use Glassware on a physical Glass device before you can validate the entire user experience, and you have to do it more than once. Without living and breathing your Glassware in your day-to-day life, you can't tell whether it will distract the user at inappropriate times or be difficult to use in situations where voice commands are inappropriate.

UI Flow

[UI flow diagram: "Tell a Joke," showing launcher and timeline entry points]

While any mobile app should have the UI storyboarded, Glass needs a little more. Unlike a touchscreen device, Glass operates on far more than just screen taps. Additionally, Glassware can be entered from both the main launcher and from a live or static timeline card.

In our "tell a joke" example, the user can request a brand-new joke via the launcher or scroll through their timeline history to find jokes they've told recently. The two entry points are shown in the diagram, with the launcher entry at the top and the history entry in the middle. I've chosen arrows with a circle on one end to mark entry points.

Make a couple of different flows for your app, and then actually try them. Since we are pioneers in the wearable application space right now, no best practices or standards have been established. As long as you keep to a strict tree structure of scenes, things should make sense to the user. If, for example, you can't express an interaction on your diagram using only straight arrows between boxes on a grid, it may not be an intuitive navigation pattern. Don't worry if you accidentally invent a pattern that is not possible with the current GDK: as long as it consists of Cards and CardScrollViews, it can be cobbled together as a prototype (see the sketch below) and suggested to the Glass team.
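
For prototyping a flat scroller of scenes like this, a minimal sketch might look like the following (again assuming the GDK preview's CardScrollView and CardScrollAdapter; adapter signatures varied between releases):

    import android.app.Activity;
    import android.os.Bundle;
    import android.view.View;
    import android.view.ViewGroup;
    import com.google.android.glass.app.Card;
    import com.google.android.glass.widget.CardScrollAdapter;
    import com.google.android.glass.widget.CardScrollView;
    import java.util.ArrayList;
    import java.util.List;

    // Prototype sketch: scrolling through recently told jokes. Adapter
    // method names follow one GDK preview release and may differ in others.
    public class JokeHistoryActivity extends Activity {
        private final List<Card> cards = new ArrayList<Card>();

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            for (String joke : new String[] { "Joke one", "Joke two", "Joke three" }) {
                Card card = new Card(this);
                card.setText(joke);
                cards.add(card);
            }
            CardScrollView scroller = new CardScrollView(this);
            scroller.setAdapter(new CardScrollAdapter() {
                @Override public int getCount() { return cards.size(); }
                @Override public Object getItem(int position) { return cards.get(position); }
                @Override public int getPosition(Object item) { return cards.indexOf(item); }
                @Override public View getView(int position, View convertView, ViewGroup parent) {
                    return cards.get(position).getView();
                }
            });
            scroller.activate(); // let the scroller respond to touchpad input
            setContentView(scroller);
        }
    }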

Build It Fast

Even if you don’t have all the backend components ready, go ahead and build your UI with mock data and run it on Glass. Try the voice trigger, insert data in your timeline and leave a live card up for a bit. Does it feel right, or is it getting in the way of the rest of your Glass usage? As others try out what you’ve built, you’ll likely discover that users will try to use a gesture to perform an interaction that you hadn’t considered.

What’s next?

If you already know Android programming, get your hands on Glass and dive in. If you’re not an Android expert but want to hop on the Glass train, sign up for one of our Android bootcamps to learn the fundamentals of developing good Android apps. What you learn there can easily be transferred to developing specific Glass apps.

Next week, I’ll be at the Glass Design Sprint at the MIT Media Lab, followed by the WearScript Workshop, where we will be hacking on WearScript, an open-source project I’ve been working on. The full agenda is here. If you’d like more info about attending, email me.

Interns can't make coffee (26 Aug 2012)

[Image: Interns at Highgroove don't make coffee]

If I were the new intern at any other company, I would most likely be fetching coffee, keeping the kitchen clean and working on the projects that nobody else wants. But here at Highgroove, things are entirely different.

First of all, I can't even make coffee at Highgroove. Our coffee machine is not your run-of-the-mill drip brewer. In fact, it's off-limits to anyone who hasn't been trained to use it, meaning that interns have to bother full-timers when they want a cup of coffee.

That's not the only intern cliché that has been turned on its head at Highgroove: I was asked to show up around 10 a.m. on my first day, and told to leave when I felt like I was done. Because we are a results-only work environment, what matters is getting work done. And even as the intern, I've got work to do in my first week that helps the company.

Getting down to work

One of the first things I did was compile a list of the most popular gems we use, generated with my GitHub Stats script, so I could familiarize myself with the gems Highgroove uses across our projects.

  • sinatra
  • nokogiri
  • will_paginate
  • haml
  • heroku
  • slim
  • thin
  • simple_form
  • devise
  • pg
  • jquery-rails

And, of course, the most common one:

  • rails

These are some solid gems you should consider including in your new Rails project template to make your life easier. I'm not brand new to Rails, but I hadn't seen simple_form or slim before, and writing views is now a much happier task for me: I'm no longer typing out < and /> over and over.

What are the most common gems you use on your projects that the new guy would have to familiarize himself with?

Image credit: marfis75
