Andrew Marshall - Big Nerd Ranch

ConstraintLayout Flow: Simple Grid Building Without Nested Layouts

ConstraintLayout chains are great, but they only work for one row of items. What if you have too many items to fit on one row? There hasn’t been a simple way to allow your chain to expand to multiple rows of items. With ConstraintLayout Flow, this changes.

ConstraintLayout Flow allows a long chain of items to wrap onto multiple rows or columns. This is similar to Google’s FlexboxLayout, which is an Android implementation of the idea of the flexible box layout from CSS. However, instead of using an actual ViewGroup to manage the contained items, ConstraintLayout Flow uses a virtual helper object, so your layout maintains its flat view hierarchy.

<androidx.constraintlayout.widget.ConstraintLayout
  xmlns:android="http://schemas.android.com/apk/res/android"
  xmlns:app="http://schemas.android.com/apk/res-auto"
  android:layout_width="match_parent"
  android:layout_height="match_parent">

    <androidx.constraintlayout.helper.widget.Flow
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintTop_toTopOf="parent" 
        app:constraint_referenced_ids="item_1,item_2,item_3" />

    <View
        android:id="@+id/item_1"
        android:layout_width="50dp"
        android:layout_height="50dp" />

    <View
        android:id="@+id/item_2"
        android:layout_width="50dp"
        android:layout_height="50dp" />

    <View
        android:id="@+id/item_3"
        android:layout_width="50dp"
        android:layout_height="50dp" />

</androidx.constraintlayout.widget.ConstraintLayout>

Flow is used within a parent ConstraintLayout and is able to manage any View with a defined id. Notice the attribute constraint_referenced_ids – this attribute defines the Views that will be managed by this constraint helper. If you’ve used Group or Barrier before, you’ll be familiar with this; it works exactly the same as it does in those virtual helper objects.
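
If you prefer to wire the helper up in code, the same configuration can be applied programmatically. Here's a minimal sketch, assuming the Flow above is given a hypothetical id of flow:

// equivalent to app:constraint_referenced_ids="item_1,item_2,item_3"
val flow = findViewById<androidx.constraintlayout.helper.widget.Flow>(R.id.flow)
flow.referencedIds = intArrayOf(R.id.item_1, R.id.item_2, R.id.item_3)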

What can we use this for?

While there are many combinations of Flow attributes that result in a variety of layouts, I’ll focus on two use-cases that I think will be the most common usage of ConstraintLayout Flow: grid and flexbox-style layouts.

The grid

A pretty common ask of Android developers is to create a grid of items. Of course, we have GridLayout and RecyclerView with GridLayoutManager, but Flow is a good alternative for a couple of reasons:

  • It keeps the layout flat, which is more efficient during layout calculation than nesting with GridLayout or RecyclerView.
  • It’s simpler than setting up a RecyclerView and, for small lists of items, you won’t sacrifice the performance benefits of RecyclerView.

A grid setup is going to work best for items that are the same size; otherwise, we'll end up with a lot of empty space. To start, we'll put 10 square views into a Flow like the one in the code above.

Right now, we haven’t specified any attributes for the Flow, so all the Flow attributes are at their defaults. The main attribute of Flow is app:flow_wrapMode, which specifies how the Flow should handle elements that extend past the constraints. The default value of none creates a regular ConstraintLayout chain, which extends past the edge of its constraints on both sides. To tell the Flow to wrap when it reaches the constraint, we’ll use chain for wrap mode.

<androidx.constraintlayout.helper.widget.Flow
    android:layout_width="275dp"
    android:layout_height="200dp"
    android:background="#e4e4e4"
    app:constraint_referenced_ids="item_0,item_1,item_2,item_3,item_4,item_5,item_6,item_7,item_8,item_9"
    app:flow_wrapMode="chain" />

Now, instead of extending past the constraint edges on the left and right, the Flow makes the items wrap onto another line. However, since Flow treats each new line as a new chain, the last line doesn’t fit with the grid.

The other possibility for wrap mode is aligned, which tells the Flow to line up the items vertically, as well as horizontally. In chain mode each row is treated as an independent chain, so items won’t necessarily line up vertically. In aligned mode, the 1st item of each chain will be in the 1st “column”, the 2nd item of each chain will be in the 2nd “column”, and so on. If we change our Flow to have aligned wrap mode we get something closer to what we want.

We still have spaces between the views, though. Since each row is treated as a chain, we can borrow an idea from normal ConstraintLayout chains: the chain style. It works the same here, using spread, spread_inside, and packed to determine how to distribute the remaining space in the chain. Since we want no spaces between each view, we'll set app:flow_horizontalStyle to packed. Now we have a grid layout, made entirely with ConstraintLayout!

<androidx.constraintlayout.helper.widget.Flow
    android:layout_width="275dp"
    android:layout_height="200dp"
    android:background="#e4e4e4"
    app:constraint_referenced_ids="item_0,item_1,item_2,item_3,item_4,item_5,item_6,item_7,item_8,item_9"
    app:flow_wrapMode="aligned"
    app:flow_horizontalStyle="packed" />


The flexbox

A flexbox-style layout is similar to a grid, but the items don’t necessarily need to be the same size. Google actually already has a library that implements flexbox-style layout for Android, but again, the benefit here is that the Flow keeps the layout flat, so the layout computation is more efficient.

For example, if we change our views to TextViews with wrap_content dimensions and give each of them random words, we'll end up with views that don't fit neatly into a grid structure.

However, if we now change the Flow’s wrap mode back to chain, the views will nest with each other more neatly.

What if we’d like those views to start at the left side of the container, instead of being centered? Similar to other ConstraintLayout elements, Flow has a bias attribute that controls which side of the constraint each row will be placed closer to. If we set the app:flow_horizontalBias attribute to 0, the chains will hug the left side of the container.

If we don’t need the views to be nested so tightly with one another, we can also add space between the rows and columns using app:flow_horizontalGap and app:flow_verticalGap. Notice that this only adds space between items, not at the front or end of each chain.
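
The same knobs are available from code through the Flow helper's setters. A rough sketch (again assuming a hypothetical flow id, with gap values in pixels):

val flow = findViewById<androidx.constraintlayout.helper.widget.Flow>(R.id.flow)
flow.setWrapMode(Flow.WRAP_CHAIN)
flow.setHorizontalBias(0f)  // hug the start edge of the container
flow.setHorizontalGap(16)   // pixels between items within a row
flow.setVerticalGap(16)     // pixels between rows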

Experiment!

These two use cases definitely aren't the only ways to use ConstraintLayout Flow, but they're ones that immediately present themselves as real-world uses of the new API. Flow's main draw is that it allows building more complex layouts without relying on nested ViewGroups, which lessens the potential for your app to experience UI jank. I have also found that it is faster and easier to set up than an equivalent RecyclerView, so for small sets of items where you don't need the recycling behavior, Flow could be a simpler alternative.

Given the combinations of layout attributes afforded to us by Flow and ConstraintLayout in general, there are many possible layouts that can be unexpected and unintuitive. Finding discrete use-case specific combinations of attributes will be the key to using this new ConstraintLayout API effectively.

As with all new APIs, experiment with different combinations of attributes to see what kind of layouts you can produce. I’m excited to see what other use-cases could be implemented using ConstraintLayout Flow!

Using FirebaseMLKit with CameraX

So you’ve watched the CameraX introduction at Google I/O 2019 and you saw all the cool image manipulation and face detection implemented in the demo apps. Then you worked through the CameraX CodeLab, but the analysis that they demonstrate in that app just calculates the luminosity. What if we want something a little flashier?

Fortunately, we can make use of another one of Google’s libraries, Firebase ML Kit. ML Kit makes face detection super simple, and CameraX’s analysis step makes it easy to feed images to the face detector. Let’s see how to combine the two to detect the contours of a person’s face!

The setup

Our MainActivity will handle asking for permission to use the camera, and then delegate to CameraFragment when permission is granted:

private const val CAMERA_PERMISSION_REQUEST_CODE = 101

class MainActivity : AppCompatActivity() {

  override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    setContentView(R.layout.activity_main)

    if (hasCameraPermissions()) {
      supportFragmentManager.beginTransaction()
          .add(R.id.content_area, CameraFragment())
          .commit()
    } else {
      requestPermissions(arrayOf(Manifest.permission.CAMERA), CAMERA_PERMISSION_REQUEST_CODE)
    }
  }

  private fun hasCameraPermissions(): Boolean {
    return ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED
  }

  override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    if (requestCode == CAMERA_PERMISSION_REQUEST_CODE) {
      if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        supportFragmentManager.beginTransaction()
            .add(R.id.content_area, CameraFragment())
            .commit()
      }
    }
  }
}

Of course, for the above to work, our app will need the permission declared in the manifest:

<uses-permission android:name="android.permission.CAMERA" />

CameraFragment will handle initializing the CameraX use-cases and binding them to its lifecycle. For now, the layout for the fragment, fragment_camera.xml, just consists of a FrameLayout containing a TextureView:

<FrameLayout
  xmlns:android="http://schemas.android.com/apk/res/android"
  android:layout_width="match_parent"
  android:layout_height="match_parent">

  <TextureView
    android:id="@+id/camera_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

</FrameLayout>

We'll use the TextureView to display the SurfaceTexture representing the camera output from CameraX's preview use-case. We get a reference to the TextureView in the setup methods of CameraFragment:

class CameraFragment : Fragment() {
  private lateinit var cameraView: TextureView
  override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle?): View? {
    val view = inflater.inflate(R.layout.fragment_camera, container, false)
    
    cameraView = view.findViewById(R.id.camera_view)
    
    return view
  }
}

Add CameraX

First, we’ll need to add the CameraX dependency. I’ve found that it works most consistently if you add the Camera2 dependency as well:

def camerax_version = "1.0.0-alpha02"
implementation "androidx.camera:camera-core:$camerax_version"
implementation "androidx.camera:camera-camera2:$camerax_version"

We’ll then add a method to set up a CameraX instance to be associated with CameraFragment:

override fun onCreateView(...) {
  ...    
  cameraView.post { 
    setUpCameraX()
  }
  
  return view
}
private fun setUpCameraX() {
  CameraX.unbindAll()
  val displayMetrics = DisplayMetrics().also { cameraView.display.getRealMetrics(it) }
  val screenSize = Size(displayMetrics.widthPixels, displayMetrics.heightPixels)
  val aspectRatio = Rational(displayMetrics.widthPixels, displayMetrics.heightPixels)
  val rotation = cameraView.display.rotation
}

We need to establish the size, aspect ratio, and rotation of our target view so that we can properly configure the CameraX use-cases. By calling setUpCameraX() from within cameraView.post(), we ensure that it doesn’t get run until the view is completely set up and ready to be measured.

Build the CameraX use-cases

Since eventually we want to draw the detected face contours on the preview image, we need to set up the preview and analysis use-cases together, so we can transform their output for proper display. We also need to be able to resize and rotate everything properly when the device is rotated.

To encapsulate this logic, we’ll make a utility class called AutoFitPreviewAnalysis. If you’ve checked out Google’s CameraX sample project, you may have seen their AutoFitPreviewBuilder. Our AutoFitPreviewAnalysis will be a modified version of that class, so we’ll start by copying that class into our project.

Go ahead and change the class name to AutoFitPreviewAnalysis. Since we’re creating both a Preview and Analysis use-case, let’s change build() to take the configuration parameters from CameraFragment and simply return an instance of the class:

fun build(
  screenSize: Size, 
  aspectRatio: Rational, 
  rotation: Int, 
  viewFinder: TextureView
): AutoFitPreviewAnalysis {
  // config is the single config the original builder created; we'll replace
  // it with separate preview and analysis configs in the next step
  return AutoFitPreviewAnalysis(config, WeakReference(viewFinder))
}

We now have everything we need to create the configuration objects for both the Preview and Analysis use-cases:

private fun createPreviewConfig(screenSize: Size, aspectRatio: Rational, rotation: Int): PreviewConfig {
  return PreviewConfig.Builder().apply {
    setLensFacing(CameraX.LensFacing.FRONT)
    setTargetResolution(screenSize)
    setTargetAspectRatio(aspectRatio)
    setTargetRotation(rotation)
  }.build()
}
private fun createAnalysisConfig(screenSize: Size, aspectRatio: Rational, rotation: Int): ImageAnalysisConfig {
  return ImageAnalysisConfig.Builder().apply {
    setLensFacing(CameraX.LensFacing.FRONT)
    setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    setTargetRotation(rotation)
    setTargetResolution(screenSize)
    setTargetAspectRatio(aspectRatio)
  }.build()
}

Since we’re going to need to map the contour points from the analysis to the preview image, it’s important that both preview and analysis are set up with the same target resolution and aspect ratio.

We also set the analysis’s imageReaderMode to ACQUIRE_LATEST_IMAGE, which always returns the latest image, discarding any others. This will keep the analysis working on the most up-to-date frame, without clogging up the pipeline with old frames.

For simplicity, we’ll hard-code the camera to the front (selfie) camera.

Create the configuration objects in build(), then pass them to the AutoFitPreviewAnalysis constructor. Change the constructor arguments to match the new parameters.

fun build(screenSize: Size, aspectRatio: Rational, rotation: Int, viewFinder: TextureView): AutoFitPreviewAnalysis {
  val previewConfig = createPreviewConfig(screenSize, aspectRatio, rotation)
  val analysisConfig = createAnalysisConfig(screenSize, aspectRatio, rotation)
  return AutoFitPreviewAnalysis(previewConfig, analysisConfig, WeakReference(viewFinder))
}

In the init method, create the ImageAnalysis instance alongside the existing Preview, and add accessors for both so that CameraFragment can bind them to its lifecycle.
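
As a rough sketch, the relevant part of the class might look like this (the property names are chosen to line up with the CameraFragment snippet below, and the rest of the copied builder logic is elided):

class AutoFitPreviewAnalysis(
  previewConfig: PreviewConfig,
  analysisConfig: ImageAnalysisConfig,
  viewFinderRef: WeakReference<TextureView>
) {
  // exposed so CameraFragment can bind them to its lifecycle
  val previewUseCase = Preview(previewConfig)
  val analysisUseCase: ImageAnalysis

  init {
    analysisUseCase = ImageAnalysis(analysisConfig)
    // ...plus the logic copied from AutoFitPreviewBuilder that attaches the
    // preview output to the TextureView and keeps its transform updated...
  }
}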

Finally, in CameraFragment, build an instance of AutoFitPreviewAnalysis and bind the created use-cases to the fragment’s lifecycle:

private fun setUpCameraX() {
  ...
  val autoFitPreviewAnalysis = AutoFitPreviewAnalysis.build(screenSize, aspectRatio, rotation, cameraView)
  
  CameraX.bindToLifecycle(this, autoFitPreviewAnalysis.previewUseCase, autoFitPreviewAnalysis.analysisUseCase)
}

At this point, you should be able to run your app and see the preview coming from your selfie camera. Next, we’ll add the image analysis!

Add Firebase ML Kit

We first need to add the Firebase ML Kit dependencies to the project, and set up our project with Firebase.

Add the following dependencies to your app/build.gradle file:

implementation 'com.google.firebase:firebase-core:16.0.9'
implementation 'com.google.firebase:firebase-ml-vision:20.0.0'
implementation 'com.google.firebase:firebase-ml-vision-face-model:17.0.2'

Create a new project in the Firebase console, and follow the directions to register your Android app with the Firebase service.

Create the analyzer

To actually run the analysis, we’ll need to define a class that implements the ImageAnalysis.Analyzer interface. Our class, FaceAnalyzer, will encapsulate the ML Kit logic and pass the results to the view to be rendered. We’ll start with a bare-bones implementation, then optimize a bit from there: 

private class FaceAnalyzer : ImageAnalysis.Analyzer {
  private val faceDetector: FirebaseVisionFaceDetector by lazy {
    val options = FirebaseVisionFaceDetectorOptions.Builder()
      .setContourMode(FirebaseVisionFaceDetectorOptions.ALL_CONTOURS)
      .build()
    FirebaseVision.getInstance().getVisionFaceDetector(options)
  }
  private val successListener = OnSuccessListener<List<FirebaseVisionFace>> { faces ->
    Log.e("FaceAnalyzer", "Analyzer detected faces with size: " + faces.size)
  }
  private val failureListener = OnFailureListener { e ->
    Log.e("FaceAnalyzer", "Face analysis failure.", e)
  }
  override fun analyze(image: ImageProxy?, rotationDegrees: Int) {
    if (image == null) return
    val cameraImage = image.image ?: return
    val firebaseVisionImage = FirebaseVisionImage.fromMediaImage(cameraImage, getRotationConstant(rotationDegrees))
    val result = faceDetector.detectInImage(firebaseVisionImage)
      .addOnSuccessListener(successListener)
      .addOnFailureListener(failureListener)
  }
  private fun getRotationConstant(rotationDegrees: Int): Int {
    return when (rotationDegrees) {
      90 -> FirebaseVisionImageMetadata.ROTATION_90
      180 -> FirebaseVisionImageMetadata.ROTATION_180
      270 -> FirebaseVisionImageMetadata.ROTATION_270
      else -> FirebaseVisionImageMetadata.ROTATION_0
    }
  }
}

We’ll first define the set up logic for our face detector. We only care about getting the contours, so we use the ALL_CONTOURS option and get the detector using the static FirebaseVision method.

We then define a success and failure listener for our detector. Right now, these will just log messages about the result, but we’ll add more to these later.

The key method is analyze(), overridden from the ImageAnalysis.Analyzer interface. This method will be called by CameraX’s analysis use-case with every frame detected by the camera. This frame is wrapped in an ImageProxy, so we ensure that we have data, then use the resulting image with FirebaseVisionImage.fromMediaImage() to construct an image object suitable to be analyzed with Firebase.

The other parameter that CameraX gives us in the analysis pathway is the degrees of rotation of the analysis image. This is useful for optimized analysis: computer vision algorithms are typically more accurate when the items being analyzed are in the expected orientation. Conveniently, fromMediaImage() takes a rotation parameter for exactly this purpose – we just need to transform it from degrees to FirebaseVisionImageMetadata constants using a small helper method, getRotationConstant().

Once we have the FirebaseVisionImage parameter built, we can pass it to our faceDetector for analysis, along with the success and failure listeners.

To add our new FaceAnalyzer to our CameraX image analysis use-case, we just need to assign it to our ImageAnalysis object after we create it:

analysisUseCase = ImageAnalysis(analysisConfig).apply {
  analyzer = FaceAnalyzer()
}

Optimize

If you run the app now, it will show the preview and log that it found a face if you point it at yourself, but it’s really slow. That’s because we’re currently doing everything on the main thread, and we’re attempting to analyze most frames, even if the current analysis isn’t finished yet. Let’s fix that.

First, we’ll create a new thread to handle the analysis use-case, and apply it to the ImageAnalysisConfig when we create it:

return ImageAnalysisConfig.Builder().apply {
  ...
  val analysisThread = HandlerThread("FaceDetectionThread").apply { start() }
  setCallbackHandler(Handler(analysisThread.looper))
}.build()

Setting the callback handler in this manner instructs ImageAnalysis to invoke the ImageAnalysis.Analyzer.analyze() method on this thread instead of the main thread, which will greatly help with our UI jank.

In addition to this, we can limit the number of frames that the analysis gets run on. Our original setting of ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE will help with this a little bit. However, because faceDetector.detectInImage() operates asynchronously and a new analysis pass will begin almost as soon as the previous invocation of analyze() returns, it’s likely that many analysis passes will be rapidly stacked and eventually cause an OutOfMemoryError.

To prevent this, we’ll add a simple atomic flag in FaceAnalyzer that causes the analyze() method to return early if a previous analysis is still in progress, causing that frame to be skipped:

private val isAnalyzing = AtomicBoolean(false)
private val successListener = OnSuccessListener<List<FirebaseVisionFace>> { faces ->
  isAnalyzing.set(false)
  Log.e("FaceAnalyzer", "Analyzer detected faces with size: ${faces.size}")
}
private val failureListener = OnFailureListener { e ->
  isAnalyzing.set(false)
  Log.e("FaceAnalyzer", "Face analysis failure.", e)
}
override fun analyze(image: ImageProxy?, rotationDegrees: Int) {    
  if (isAnalyzing.get()) return
  isAnalyzing.set(true)
  ...
}

Add an overlay to the preview

To draw the contour points on top of the preview View, we need to add another view. Since we need this new view to draw a series of arbitrary points, a custom view is going to be our best bet here: 

class FacePointsView @JvmOverloads constructor(
  context: Context,
  attrs: AttributeSet? = null,
  defStyleAttr: Int = -1
) : View(context, attrs, defStyleAttr) {
  private val pointPaint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
    color = Color.BLUE
    style = Paint.Style.FILL
  }
  var points = listOf<PointF>()
    set(value) {
      field = value
      invalidate()
    }
  override fun onDraw(canvas: Canvas) {
    super.onDraw(canvas)
    canvas.apply {
      for (point in points) {
        drawCircle(point.x, point.y, 8f, pointPaint)
      }
    }
  }
}

Our new FacePointsView will simply subclass View, and instantiate a Paint with which to draw the points. In the onDraw(), all we need to do is iterate over the points list and draw each point as a small circle. When the points variable is set to something new, the view is invalidated so the points get re-drawn.

Add this new view to the fragment's layout, after the TextureView in the hierarchy, so it's drawn on top of the camera preview:

<TextureView
    android:id="@+id/camera_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

<com.bignerdranch.cameraxmlkitblog.FacePointsView
  android:id="@+id/face_points_view"
  android:layout_width="match_parent"
  android:layout_height="match_parent" />

Draw the face points

Now that we have a view that will draw our points, we need to get those points to the view. In the FaceAnalyzer, add a lambda property that will serve as a listener, and grab the points from the successful analysis and pass them to the lambda: 

var pointsListListener: ((List<PointF>) -> Unit)? = null
private val successListener = OnSuccessListener<List<FirebaseVisionFace>> { faces ->
  isAnalyzing.set(false)
  val points = mutableListOf<PointF>()
  for (face in faces) {
    val contours = face.getContour(FirebaseVisionFaceContour.ALL_POINTS)
    points += contours.points.map { PointF(it.x, it.y) }
  }
  pointsListListener?.invoke(points)
}

To get the points to the view, the AutoFitPreviewAnalysis will need a reference to the FacePointsView. Add a parameter to the build() function, then create a WeakReference to it and pass it to the constructor:

fun build(
  screenSize: Size, 
  aspectRatio: Rational, 
  rotation: Int, 
  viewFinder: TextureView, 
  overlay: FacePointsView
): AutoFitPreviewAnalysis {
  val previewConfig = createPreviewConfig(screenSize, aspectRatio, rotation)
  val analysisConfig = createAnalysisConfig(screenSize, aspectRatio, rotation)
  return AutoFitPreviewAnalysis(
    previewConfig, 
    analysisConfig, 
    WeakReference(viewFinder), 
    WeakReference(overlay)
  )
}

Update the constructor to match the above signature, and grab a reference to the FacePointsView in CameraFragment and pass it to AutoFitPreviewAnalysis.build().
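
In CameraFragment, that wiring might look something like this sketch (face_points_view is the id given to the overlay in the layout above):

private lateinit var facePointsView: FacePointsView

// in onCreateView(), alongside the existing camera_view lookup
facePointsView = view.findViewById(R.id.face_points_view)

// in setUpCameraX()
val autoFitPreviewAnalysis =
    AutoFitPreviewAnalysis.build(screenSize, aspectRatio, rotation, cameraView, facePointsView)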

Finally, when associating the FaceAnalyzer with the ImageAnalysis, set the analyzer’s pointsListListener to a lambda passing the points to the view:

analysisUseCase = ImageAnalysis(analysisConfig).apply {
  analyzer = FaceAnalyzer().apply {
    pointsListListener = { points ->
      overlayRef.get()?.points = points
    }
  }
}

Now if you run the app and point the camera at your face, you should see the points of a face drawn on the screen. It won’t be matched to your face yet, but you can move your mouth and watch the other face mirror your expression. Cool… or creepy?

Match the points to the preview

Since AutoFitPreviewAnalysis.updateTransform() takes the SurfaceTexture returned from the Preview use-case and transforms it to fit on the screen, the face contour points don’t match the preview image. We need to add a similar transform to the face points so the points will match.

The way the preview transform works is via matrix multiplication, which is a computationally efficient way of changing a large number of points at once. For our contour points, we’ll need to apply three transformations: scale, translation, and mirror. What we’ll do is construct a single matrix that represents the combination of all three transformations.

Initial set up

First, let’s add some additional cached values to work with. You’ll notice that at the top of AutoFitPreviewAnalysis, values for various dimensions and sizes are being cached for use in the transform calculation. To these values, add two more dimension caches:

/** Internal variable used to keep track of the image analysis dimension */
private var cachedAnalysisDimens = Size(0, 0)
/** Internal variable used to keep track of the calculated dimension of the preview image */
private var cachedTargetDimens = Size(0, 0)

cachedAnalysisDimens represents the size of the analysis image that CameraX’s analysis use case returns to FaceAnalyzer.analyze(), so we can add another callback lambda to send this value back to AutoFitPreviewAnalysis:

private class FaceAnalyzer : ImageAnalysis.Analyzer {
  var analysisSizeListener: ((Size) -> Unit)? = null
  override fun analyze(image: ImageProxy?, rotationDegrees: Int) {
    val cameraImage = image?.image ?: return
    analysisSizeListener?.invoke(Size(image.width, image.height))
    ...
  }
}

Then we can cache this value by setting a listener when we construct the FaceAnalyzer:

analyzer = FaceAnalyzer().apply {
  ...
  analysisSizeListener = {
    updateOverlayTransform(overlayRef.get(), it)
  }
}

The second new cached value, cachedTargetDimens, is the calculated size of the preview image. This is different from the viewFinderDimens value, since that measures the size of the view itself. For example, in the image above, viewFinderDimens.width includes the width of the white bars on the left and right of the image, whereas cachedTargetDimens.width is only the width of the image.

AutoFitPreviewAnalysis.updateTransform() already calculates this value as scaledWidth and scaledHeight to transform the preview, so all we need to do is store it for use with the overlay’s transform computation: 

// save the scaled dimens for use with the overlay
cachedTargetDimens = Size(scaledWidth, scaledHeight)

After we have these values set up, create a new method that we’ll use to build our transformation matrix for the overlay points:

private fun overlayMatrix(): Matrix {
  val matrix = Matrix()
  return matrix
}

Scale

The first transform we’ll need to apply is scaling – we want the points from the image analysis to occupy the same space as the preview image.

Since we set CameraX to use the same aspect ratio for both the preview and image analysis use-cases, we can use the same scale factor for both width and height. We also know the dimensions that we’d like to match: cachedTargetDimens, and we know our starting dimensions: cachedAnalysisDimens, so it’s just a matter of calculating the percent scale:

val scale = cachedTargetDimens.height.toFloat() / cachedAnalysisDimens.width.toFloat()

Note that we’re comparing the target’s height to the analysis’s width. This is because of how the target dimensions are calculated in updateTransform(), as well as how the camera defines its image’s dimensions by default. The target dimensions are calculated to always match a phone in portrait, regardless of true orientation – the long side is the height. The camera’s dimensions are defined in the opposite way – the long side is always the width, regardless of true orientation. Since we want the long sides from the analysis to match the long side of the preview, we just switch them when doing the calculation for the scale.

Now that we have the scale factor, use it to alter our identity matrix.

matrix.preScale(scale, scale)

Translation

Since it’s possible for the preview image to get letterboxed on the sides, depending on the phone size and orientation, we need to build a way to move our set of contour points to have the same origin as the preview image.

To do so, we need to calculate the difference between the view’s width and the target width (the width of the actual preview image displayed). Note, however, that viewFinderDimens are not independent of rotation, like cachedTargetDimens are. Therefore, we need to determine which orientation the phone is currently in, and find the difference between the corresponding sides in that orientation:

val xTranslate: Float
val yTranslate: Float
if (viewFinderDimens.width > viewFinderDimens.height) {
  // landscape: the viewfinder's long side (width) corresponds to the target's height
  xTranslate = (viewFinderDimens.width - cachedTargetDimens.height) / 2f
  yTranslate = (viewFinderDimens.height - cachedTargetDimens.width) / 2f
} else {
  // portrait: the viewfinder's width corresponds to the target's width
  xTranslate = (viewFinderDimens.width - cachedTargetDimens.width) / 2f
  yTranslate = (viewFinderDimens.height - cachedTargetDimens.height) / 2f
}

Once we’ve calculated the distance in each axis to translate the points by, apply it to the matrix.

matrix.postTranslate(xTranslate, yTranslate)

Mirror

Since we’re using the front camera, CameraX preview flips this image so the image that appears on screen looks like one that would appear if you were looking in a mirror. Image analysis doesn’t do this flip for us, so we have to mirror it ourselves.

Fortunately, mirror is the easy one. It’s just a scale transform of -1 in the x-direction. First, calculate the center of the image, then scale it around that point:

val centerX = viewFinderDimens.width / 2f
val centerY = viewFinderDimens.height / 2f
matrix.postScale(-1f, 1f, centerX, centerY)

Use the matrix

Now that our overlayMatrix() function returns a matrix that encapsulates all the transformations that we need for our face map, let’s apply it to the points in the map. Add another member variable to the FacePointsView class to store the updated matrix:

var transform = Matrix()

Now we'll add a method to apply this transform matrix to the list of points. The key method we'll be building this around is Matrix.mapPoints(dst: FloatArray, src: FloatArray). For every (x, y) pair in the input array (src), the matrix is applied to that point, producing a new pair mapped to its position in the transformed space. These mapped points are copied to the output array (dst) in the same order.

For the code, add a private method and create FloatArrays for the input and output, and pass them to mapPoints(). Then convert the output FloatArray back into a List<PointF> that we can use with our existing logic in onDraw():

private fun transformPoints() {
  // build src and dst
  val transformInput = points.flatMap { listOf(it.x, it.y) }.toFloatArray()
  val transformOutput = FloatArray(transformInput.size)
  // apply the matrix transformation
  transform.mapPoints(transformOutput, transformInput)
  // convert transformed FloatArray to List<Point>
  drawingPoints = transformOutput.asList()
      .chunked(size = 2, transform = { (x, y) -> PointF(x, y) })
}

Note that drawingPoints hasn’t been defined – that’s because we’ll need that to be a member variable so it’s available to our onDraw(). Let’s add that now.

private var drawingPoints = listOf<PointF>()

Now our FacePointsView has everything it needs to draw the points in the correct position over the camera preview image.
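
One piece we haven't shown is pushing the finished matrix into the view. The updateOverlayTransform() callback we wired up earlier is a natural place to do it; here's a minimal sketch (the body of this function is an assumption, not code from the sample project):

private fun updateOverlayTransform(overlay: FacePointsView?, analysisDimens: Size) {
  // remember the analysis size so overlayMatrix() can compute the scale factor
  cachedAnalysisDimens = analysisDimens
  // hand the combined scale/translate/mirror matrix to the overlay
  overlay?.transform = overlayMatrix()
}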

Draw the face points… again, but better!

Currently, we’re calling invalidate() on the FacePointsView whenever we get a new set of points from the analyzer. Now that we’ve added our transformation matrix, we’d like those points to first be transformed before any drawing occurs. We’ll also need the previously stored points to be transformed again if the matrix changes. Let’s change the setters of both points and transform to achieve this:

var points = listOf<PointF>()
  set(value) {
    field = value
    transformPoints()
  }
var transform = Matrix()
  set(value) {
    field = value
    transformPoints()
  }

Whenever either of these variables is changed, we call transformPoints(), which uses the current values of points and transform to create a new drawingPoints list.

We then need to change our onDraw() method to draw from the points in drawingPoints, instead of from points:

canvas.apply {
-   for (point in points) {
+   for (point in drawingPoints) {
      drawCircle(point.x, point.y, 8f, pointPaint)
    }
  }

Finally, the only thing that remains is to tell our view to redraw every time the drawingPoints list gets updated. Make a custom setter and call invalidate() in it to achieve this:

private var drawingPoints = listOf<PointF>()
  set(value) {
    field = value
    invalidate()
  }

Now if you run the app, you should see the points overlaid on the image of your face. When you move your face, the points should follow!

Sample project

See the sample project for the full code.

CameraX

One of the coolest things about Android is that devices come in all shapes and sizes, and this includes the cameras attached to them.
We’ve seen phones with five cameras, enormous megapixel counts, and all kinds of other hardware oddities.
In addition to this, nifty machine learning features are becoming more ubiquitous, so users are starting to expect camera apps to process their photos in new interesting ways.
Unfortunately, this hardware variety and necessity for image processing tend to make developing camera apps difficult.
Camera APIs also vary across different devices, leading to branching implementations and hard-to-debug issues with the camera.

The standard API used since Lollipop has been the Camera2 API, which gives developers access to the basics of photography (and is itself an improvement over the original Camera API).
However, it still doesn’t address a lot of device-specific weirdness around interacting with the camera hardware and software, and you still have to manually manage resources and configuration.
Because of this, your app often has to determine which device it’s running on to address these specific issues before sending and/or after receiving information from the Camera2 API.
Needless to say, this can quickly become a maintenance nightmare in your codebase.

CameraX to the rescue

The CameraX support library aims to solve these problems with an elegant API that behaves in the same way across almost every Android device.
Basically, it serves as a simplifying abstraction layer on top of the existing Camera2 API. It allows you to quickly and succinctly access camera information for common use cases.
CameraX handles all the startup and shutdown logic, ensures that lifecycles are obeyed, and manages threading for camera features.
Issues found on specific devices are now handled within the CameraX library, instead of you having to include handling logic in your application.
Since it’s still using the Camera2 API, it provides backwards compatibility to API 21.

CameraX use cases and resource management

The API design is simple – you tell CameraX which features you need enabled during that session, what you want to do with the output, and what lifecycle your camera session should be limited to.

You’re required to give it a configuration, but CameraX attempts to determine sensible defaults if you don’t specify one or if your specification exceeds the capabilities of the device your app is running on.
For example, you can request a target resolution for the output image, but if the device is incapable of that resolution, CameraX will automatically handle the fallback to a supported resolution.
Startup and shutdown are handled based on the bound lifecycle, which operates in an intuitive way: on Lifecycle.Event.ON_START, the camera is started and begins reporting output data, on Lifecycle.Event.ON_STOP, the camera is shut down, and on Lifecycle.Event.ON_DESTROY, associated resources are released.

CameraX is designed with three main use cases: preview, analysis, and capture.
Preview use case allows you to access a stream of output frames from the camera, meaning you can display the image stream in your application.
CameraX also provides focus, zoom, and torch (flash) APIs to facilitate developing common camera preview interactions.
Analysis use case allows you to attach an analyzer method that will be run on each frame of the camera output stream.
Analyzing each frame can be used to implement real-time effects like face detection or to detect light or color levels in the image.
Finally, capture use case allows your user to actually take the photo and save it to the device.
The modes can be enabled on their own or in combination with the others, and all three can be tied to the same lifecycle so that resource management remains simple.
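
As a rough sketch of how the pieces fit together (the config objects are assumed to have been built already, and the names are just placeholders):

// inside an Activity or Fragment that implements LifecycleOwner
val preview = Preview(previewConfig)
val analysis = ImageAnalysis(analysisConfig)
val capture = ImageCapture(captureConfig)

// CameraX starts and stops the camera along with this lifecycle
CameraX.bindToLifecycle(this, preview, analysis, capture)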

CameraX extensions

Some devices have additional capabilities not present on all devices.
This includes features like Portrait mode, Night mode, HDR, and Beauty mode.
CameraX extensions enable these modes and more, and it’s only a few lines of code to implement.
If the specific device your app is running on doesn't support that extension, the availability check returns false, the extension remains disabled, and everything continues as normal.
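
For example, enabling the bokeh (Portrait) extension on the image capture use case looks roughly like this in the alpha extensions library (a sketch, so treat the exact class and method names as subject to change):

val builder = ImageCaptureConfig.Builder()
val bokehExtender = BokehImageCaptureExtender.create(builder)

// only enable the extension if this device actually supports it
if (bokehExtender.isExtensionAvailable) {
    bokehExtender.enableExtension()
}

val imageCapture = ImageCapture(builder.build())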

How does Google ensure consistent camera behavior?

Google has heard the many pain points developers have found with Camera2 API, and among the most daunting is having to debug hard-to-find issues on dozens of different devices.
To address this, they’ve built an automated test lab specifically for CameraX and filled it with a variety of devices from various OEMs and Android versions back to Lollipop.
They’re continually running unit tests, integration tests, and performance tests on different camera features.
The goal of this test facility is to find bugs with the camera interaction so they can be addressed within the CameraX library, and no longer need to be manually handled by app developers.

We’ve seen the benefits that CameraX gives us over the Camera2 API, and ideally this abstracted access to the Camera2 API will be enough for most use cases.
However, if you have a more complex use-case that CameraX doesn’t yet cover, you can still add the Camera2 dependency alongside CameraX and use Camera2 APIs when you need to.

For now, CameraX is still in alpha, so it’s not recommended to become too attached to the API as it currently exists.
However, according to Google it’s in rapid development, and several libraries announced in alpha during IO 2018 now have stable versions, so I’d expect CameraX to follow a similar trajectory.
If you’d like a hands-on introduction, Google created a helpful Codelab that walks you through the basics.
Give CameraX a whirl and say goodbye to your Android camera-related headaches!

What’s new with Kotlin for Android developers

As Kotlin rapidly gains popularity with Android developers, more and more tooling and APIs are being released to help developers be more productive while making Android apps.
Google announced a lot of new features at Google IO 2019 surrounding Kotlin and Android – here are some of the highlights.

Project templates

Previously, when you used Android Studio’s templates to create a new activity or other Android component, it would create it in Java first, then you would have to use the Convert to Kotlin function to change it into Kotlin.
In the newest versions of Android Studio, it is able to generate new activities or fragments in Java or Kotlin.
You can select the language that it uses in the New Android Activity or New Android Component wizard.
Read more about it on the Android Developer site.

Coroutines for all the things

Google recently added support for coroutines in a lot of the Jetpack components.

WorkManager

Instead of extending the Worker class to use with WorkManager, you can now extend the CoroutineWorker class.
This class gives you access to the suspending function doWork(), which executes on a Dispatchers coroutine context of your choice.
Read more at the WorkManager documentation.
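
A minimal sketch of what that looks like (the work itself is a placeholder):

class SyncWorker(
    appContext: Context,
    params: WorkerParameters
) : CoroutineWorker(appContext, params) {

    override suspend fun doWork(): Result = withContext(Dispatchers.IO) {
        // long-running work can suspend here without blocking a thread
        Result.success()
    }
}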

Room

Starting in Room 2.1.0 (now in beta), Room can use coroutines for database threading.
All methods in your DAO interfaces can be suspend functions, and Room’s generated code will support that.
For simple database operations, Room handles the threading and ensures that it is using a background thread (currently the IO Dispatcher).
For more complex operations, you can specify which Dispatcher you want the operations to execute on. Florina Muntenescu from Google has a more detailed post about it here.
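
For example, a DAO might look like this (the entity and query are illustrative):

@Dao
interface UserDao {
    // Room generates main-safe implementations for suspend functions
    @Query("SELECT * FROM user WHERE id = :id")
    suspend fun getUser(id: Long): User

    @Insert
    suspend fun insertUser(user: User)
}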

Lifecycle, LiveData, and ViewModel

Lifecycle-aware Jetpack components also have coroutine support now.
Both Lifecycle and ViewModel create coroutine scopes (LifecycleScope and ViewModelScope), and any coroutine initiated within these scopes is automatically canceled when the scope is completed.
For Lifecycles, this is when it is destroyed, and for ViewModels it’s when it is cleared.
This makes it simpler to manage lifecycle-dependent work with coroutines.

LiveData can also be generated using coroutines in a lifecycle-dependent manner using the liveData builder function, which allows you to call suspend functions which emit their results into the LiveData.
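
A quick sketch of what this enables (the repository and its calls are placeholders):

class ProfileViewModel(private val repository: ProfileRepository) : ViewModel() {

    fun refresh() {
        // cancelled automatically when the ViewModel is cleared
        viewModelScope.launch {
            repository.refreshProfile()
        }
    }

    // emits into a LiveData from a coroutine
    val profile = liveData {
        emit(repository.loadProfile())
    }
}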

Read more about lifecycle-dependent coroutine support on the developer documentation.

KTX – Android API Extensions

Last year around IO 2018, Google started releasing betas of KTX, an enormous set of libraries containing extension functions for Android components.
A lot of these now have stable versions, and new libraries for different components are still being added.
These are helpful for writing commonly used Android operations with an idiomatic Kotlin syntax.

Generally, KTX uses extension functions and lambdas to let you call the various Android APIs with slightly less boilerplate.
For example, fragment transactions can be simplified:

// without ktx
getSupportFragmentManager()
            .beginTransaction()
            .addToBackStack("FragmentName")
            .add(R.id.fragment_container, fragment)
            .commit();

// with ktx
supportFragmentManager.commit {
  addToBackStack("FragmentName")
  add(R.id.fragment_container, fragment)
}

KTX is available for a variety of Android components, including preferences, text, views, animation, fragments, navigation, and many more.
Most of the newer Jetpack libraries have a -ktx version of the dependency that can be used in place of the standard one, and that’s all it takes to get access to these extensions.
Read more about them on the developer documentation.
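
For example, swapping the standard fragment dependency for its KTX variant is a one-line change in your module's build script (the version shown here is illustrative):

implementation("androidx.fragment:fragment-ktx:1.1.0")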

Jetpack Compose

An exciting new development with the combination of Kotlin and Android is Jetpack Compose.
It uses an idiomatic Kotlin domain specific language to build intuitive layouts, directly from your Kotlin files.
The result is that you’re able to write more complex UI operations with less boilerplate code.
Right now, Jetpack Compose is still in pre-alpha stage, but it’s a good example of how the unique features of Kotlin can be leveraged on Android to create a better developer experience.
Given its early development status, Compose specifics are likely subject to change, but if you want to try it out early, check out the official documentation.

Kotlin scratch file

A Kotlin Scratch File is a runnable Kotlin script file, and these are now supported by Android Studio.
You can write any Kotlin in these files and Android Studio will evaluate each line and print the results in grey on the right.
You don’t even have to write a main() function; everything can be top-level.
I find this extremely useful when I want to quickly test a function I’m writing without having to actually run my app to test it.
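
For example, typing something like this into a scratch file shows the evaluated result of each line inline, without ever running an app:

val words = listOf("kotlin", "scratch", "file")
words.map { it.capitalize() }          // [Kotlin, Scratch, File]
words.joinToString(separator = " ")    // kotlin scratch file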

To create one in Android Studio, go to File > New > Scratch File > Kotlin.
This will open a new file called scratch_1.kts or similar.
At the top, you can specify the classpath to a specific module, so you can access the existing code in your app with the scratch file.
By default, it’s set to use Interactive Mode, meaning AS will run your code as soon as it thinks you’ve stopped typing for long enough.
In practice, I found this to be a bit annoying, so I usually turn it off and just use the run button at the top left.

These scratch files are created in the support directory for Android Studio in the scratches directory by default, so you can access them later if you need to.

For this and the other improvements to Kotlin itself in version 1.3, see the release notes on Jetbrains Kotlin documentation site.

The hot new language

We’ve briefly looked at some of the new Kotlin+Android features announced at Google IO 2019 and how they can be useful to you.
Now that Kotlin is Google’s preferred language for Android development, I’d expect more features like these fairly frequently.
Try these out in your app today!

Writing your Gradle build scripts in Kotlin on Android

The folks at Gradle recently released version 5.0, which means that we Android developers now have access to Gradle Kotlin DSL v1.0! That means we can write our Gradle build scripts in our favorite language, Kotlin.

What is it?

Gradle Kotlin DSL is a domain specific language built with the express purpose of defining Gradle build plans. This has all of your favorite functions and assignments from the Gradle Groovy DSL, but now in Kotlin.

The full feature set of the Gradle Kotlin DSL is supported by fewer IDEs than Groovy's, but Android Studio and IntelliJ IDEA support everything we need, so most Android developers shouldn't run into many issues.

Why?

I don’t know about you, but editing Gradle Groovy files is not my favorite part about Android development. Groovy is dynamically typed, so Android Studio has a tough time providing you with intelligent hints about what methods you can call, what parameters they take, and when you’ve done something wrong. Furthermore, since Gradle scripts are the only thing I typically see Groovy in, the syntax is way more difficult for me to parse than that of Kotlin, which I use every day during app development.

By writing our Gradle scripts in Kotlin instead of Groovy, we can solve both these problems. Since Kotlin is statically typed, it’s simpler for the IDE to fill in helpful hints about the code. Plus, since we use Kotlin for day-to-day development on Android apps, editing those Gradle files isn’t as much of a cognitive load since we’re familiar with the language’s syntax.

How do I do it?

When you create a project in Android Studio, it creates a Gradle script using Groovy by default. To continue past this, we’ll convert the default Gradle scripts to use Kotlin instead. We’ll first make some minor syntactical changes to our Groovy scripts to inch them closer to their eventual Kotlin forms, so that when we actually change the file extension, it won’t have as many errors.

Use the right version of Gradle

Depending on the default version of Gradle for your version of Android Studio, your project may have a version of Gradle from before Gradle Kotlin DSL was fully supported. In your app’s gradle-wrapper.properties file, change the Gradle version to 5.0 so that you’ll have the stable 1.0 version of Gradle Kotlin:

distributionUrl=https://services.gradle.org/distributions/gradle-5.0-all.zip

Fix your single quotes

In most places, Groovy doesn’t care whether you use single quotes or double quotes to encapsulate strings. Kotlin is pickier: it requires double quotes. Do a find and replace (cmd+R macOS/ctrl+R Windows) for all the single quotes in your Gradle scripts and change them all to double quotes.

Use updated syntax to apply plugins

Any plugin applications using the legacy apply plugin syntax should be replaced with the newer plugins DSL. This new syntax allows Gradle to perform optimizations for loading your plugins, and helps the IDE with providing hints about the plugin classes.

// legacy
apply plugin: 'com.android.application'
// plugins DSL
plugins {
  id("com.android.application")
}

However, the plugins DSL does have limitations, so if you can’t use it in your case, you can still use the legacy apply plugin in Kotlin after converting the syntax.
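
In the Kotlin DSL, the converted legacy syntax looks something like this:

// legacy apply, Kotlin DSL syntax
apply(plugin = "com.android.application")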

Explicitly call functions or assign values

A feature of Groovy is the ability to assign a value or call a method with the exact same syntax:

// this is a method call
targetSdkVersion 28
// this is an assignment
versionCode 1

That’s not the case in Kotlin, and Groovy doesn’t require this syntax, so we can go ahead and change those to their more explicit counterparts right now:

// this is a method call
targetSdkVersion(28)
// this is an assignment
versionCode = 1

Note that Groovy uses a similar property access paradigm as Kotlin, using the underlying setProperty(5) function when you type property = 5. Therefore, the versionCode = 1 above could also be changed to setVersionCode(1) to the same effect, but because we also have property access in Kotlin, we can simply use the assignment operator.

If you’re not sure if a Groovy line should be converted to an assignment or a method call, you can always use quick info (ctrl+J macOS/ctrl+Q Windows) to pop up some information about the item where the cursor is. They all will show methods, but the ones that follow the JavaBean naming convention (get..., set...) can be converted to assignments in Kotlin.

Since most of the dependencies block in app/build.gradle is typically formatted in a similar way, I like to use a regex find and replace to reformat all of the implementation lines in one move:

// find regex
mplementation\s?(.*)$
// replace with regex
mplementation($1)

Change the file extensions

Now we’ve gotten the scripts as close as possible to Kotlin syntax while still being Groovy, so we’re ready to actually convert the files to Kotlin scripts instead of Groovy scripts. Rename the file names from build.gradle to build.gradle.kts to indicate that they’re now Kotlin script files.

Of course, this will cause a lot of red syntax highlighting, since we haven’t actually converted everything to Kotlin yet, but we’ll get there. Do a Gradle sync now, and at the end of every section below. If Android Studio ever tells you that “There are new script dependencies available” via a pop-down at the top of the window, choose the Enable auto-reload option to automatically apply them.

Fix any global variables

In Groovy, we had access to the ext object, which allows us to set variables that can be accessed from any of the inheriting Gradle scripts (this is often used to set version numbers for various dependencies). In Kotlin scripts, we can set these with extra.set("constraint_layout_version", "1.1.3") and read them with rootProject.extra.get("constraint_layout_version"), but this doesn't give us the benefit of autocomplete for our properties, since they have to be accessed via those string keys.

We’ll instead use a buildSrc module to create an external store for these globally-needed variables. In the root directory of your project, create the following directory path: buildSrc/src/main/kotlin. Within the buildSrc directory, create a file called build.gradle.kts and use it to apply the Kotlin DSL plugin:

plugins {
  // note the backtick syntax (since `kotlin-dsl` is 
  // an extension property on the plugin's scope object)
  `kotlin-dsl` 
}

repositories {
    jcenter() // this is needed to download dependencies for kotlin-dsl
}

Within the buildSrc/src/main/kotlin directory, create a .kt file. It doesn’t matter what you name it, but since I usually use it for dependency declaration and versioning, I call it dependency.kt. You can then create objects with properties, and these will be able to be accessed from any of your build scripts.

object Versions {
    const val appCompat = "28.0.0"
    const val constraintLayout = "1.1.3"
}

object Deps {
    const val appCompat = "com.android.support:appcompat-v7:${Versions.appCompat}"
}

You can then use these in your app’s dependencies block, or anywhere else you need access to global variables for your scripts.

// can reference directly
implementation(Deps.appCompat)
// can also use Kotlin's string concatenation
implementation("com.android.support.constraint:constraint-layout:${Versions.constraintLayout}")

Change the buildTypes block in app/build.gradle.kts

In Groovy, our buildTypes block sets up our different build types like so:

buildTypes {
  release {
    ...
  }
  debug {
    ...
  }
}

It's difficult to tell from the syntax alone, but the general idea is that this block accesses a container of BuildTypes keyed by String names. All Android projects have both release and debug build types by default, but you can also define custom-named build types here as well. Unfortunately, the Kotlin equivalent for this is a bit hidden: inside the buildTypes block, the scope is a NamedDomainObjectContainer<BuildType>, which (along with its parent classes) has several functions for accessing these string keys:

// from parent class NamedDomainObjectCollection
fun getByName(name: String, configureAction: Action<BuildType>): BuildType
// from NamedDomainObjectContainer
fun create(name: String, configureAction: Action<BuildType>): BuildType
fun maybeCreate(name: String): BuildType

We can use getByName if we already know the container holds a BuildType with a given name – this is the case for release and debug. If we want to make a custom-named build type, we can use the create method. However, the downside of both of these is that they can throw exceptions. maybeCreate protects us from this – it will first look for a pre-existing build type with the name given as an argument, but if that doesn’t exist, it will create it. However, maybeCreate doesn’t have an Action parameter like the other two, so we’ll have to use our trusty apply after we create the build type:

buildTypes {
  maybeCreate("release").apply {
    isMinifyEnabled = false
    ...
  }
}

Note that I also changed minifyEnabled = false to isMinifyEnabled = false, since isMinifyEnabled is the actual name of the property in BuildType.

Change Groovy’s map and list syntax to Kotlin’s

In this Groovy line:

// before
implementation(fileTree(include: ["*.jar"], dir: "libs"))

we’re asking Gradle to include any jar files from the libs directory by supplying them as a FileTree. To build this FileTree, we give the method a map of named properties, one of whose value is a single-element list. We’ll need to change this map and list to their Kotlin equivalents:

// after
implementation(fileTree(mapOf("include" to listOf("*.jar"), "dir" to "libs")))

Rewrite any tasks

At this point, Android Studio should be able to mostly parse your gradle scripts. However, any tasks that are still written in Groovy syntax will still be highlighted as incorrect. The following task is added in a new AS project by default:

task clean(type: Delete) {
  delete rootProject.buildDir
}

Here, we’re registering a task called “clean” of type Delete, which invokes the function Delete.delete(rootProject.buildDir) when run. In Kotlin, we can write it like this instead:

tasks {
  val clean by registering(Delete::class) {
    delete(rootProject.buildDir)
  }
}

All done!

We’ve completed the conversion of a set of Groovy Gradle scripts to Kotlin Gradle in an Android project!

As the Gradle Kotlin DSL version 1.0 was just released, there are still some unsupported features, and some minor bugs with Android Studio IDE support. However, the API is stable, and the converted script is able to build an Android project, along with improved support for code completion and editing hints from the IDE.
