In this article I discuss the refactoring I performed on the MVVM project discussed in the previous article.

The IDE

I actually decided to use IntelliJ IDEA 2019.1.1 for this. I had been trying to use Android Studio 3.5 Canary, but it doesn’t display the Android perspective correctly when the Kotlin 1.3.30 plugin is installed.

The disadvantage of using IntelliJ as opposed to Android Studio 3.5 is that you can’t use the Apply Changes functionality. On the other hand, IntelliJ integrates with Mate’s Global Application Menu, so I am sticking with it for now.

The project

The project discussed in the present article is MVVM.

I overwrote the code discussed in the previous article, but you can still browse the git history to see exactly which changes I made.

I removed all dependencies on RxJava; the class ActivityDescViewModel now extends ViewModel directly, using one additional extension function to handle the lifecycle.

The model

This is the model class used to convey activity recognition events:

abstract class ChannelActivityRecognition {
    // The channel subclasses write activity events to; CONFLATED keeps only the latest value.
    protected val channel = Channel<ActivityDesc>(Channel.CONFLATED)
    // Read-only view of the channel, exposed to consumers.
    val activities: ReceiveChannel<ActivityDesc> = channel
}

Please note that I am setting the capacity of the Channel to Channel.CONFLATED, so that every time the consumer calls receive(), it will either suspend or receive the very last element sent to it. Elements are not buffered in the Channel: a newly sent element simply replaces the previous one.
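To see the conflation in action, here is a minimal, self-contained sketch of mine (not code from the project):

import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val channel = Channel<Int>(Channel.CONFLATED)
    channel.send(1)
    channel.send(2)
    channel.send(3)            // overwrites 1 and 2; nothing is buffered
    println(channel.receive()) // prints 3, the most recent element
}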

This is the BroadcastReceiver that is triggered on activity recognition events:

private inner class Receiver : BroadcastReceiver() {

    override fun onReceive(context: Context, intent: Intent) {
        if (ActivityRecognitionResult.hasResult(intent)) {
            val result = ActivityRecognitionResult.extractResult(intent)!!
            val activity = result.mostProbableActivity!!
            // Map the detected activity type to a human-readable description.
            val desc = context.getString(when (activity.type) {
                DetectedActivity.IN_VEHICLE -> R.string.activity_desc_in_vehicle
                DetectedActivity.ON_BICYCLE -> R.string.activity_desc_on_bicycle
                DetectedActivity.ON_FOOT -> R.string.activity_desc_on_foot
                DetectedActivity.TILTING -> R.string.activity_desc_tilting
                DetectedActivity.WALKING -> R.string.activity_desc_walking
                DetectedActivity.RUNNING -> R.string.activity_desc_running
                else -> R.string.activity_desc_unknown
            })
            val confidence = activity.confidence
            val activityDesc = ActivityDesc(desc, confidence)
            // offer() never suspends; on a CONFLATED channel it always succeeds.
            channel.offer(activityDesc)
        }
    }
}

Please note that I am using the method channel.offer(activityDesc), as opposed to the equivalent suspend function channel.send(activityDesc).

Because this is a CONFLATED channel, inserting values into it never needs to suspend. The only time a Channel suspends on insertion is when it has to wait for the consumer to receive the values. In a CONFLATED channel in particular, a value that the consumer hasn’t received yet is simply discarded when a new one comes in.

Alternatively, the following code would have an identical effect:

GlobalScope.launch(Dispatchers.IO) {
    channel.send(activityDesc)
}

You may have to use the above version if you are using any Channel other than CONFLATED.
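To make the difference concrete, here is a short sketch of mine (not code from the project) showing that offer() always succeeds on a CONFLATED channel, while on a channel that would need to suspend it simply returns false:

import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val conflated = Channel<Int>(Channel.CONFLATED)
    println(conflated.offer(1))  // true: the new value replaces any previous one

    val rendezvous = Channel<Int>() // default RENDEZVOUS capacity
    println(rendezvous.offer(1))    // false: no consumer is waiting, so the value is dropped
}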

The viewmodel

This is the ViewModel that passes the values generated by the above ChannelActivityRecognition to an instance of LiveData:

class ActivityDescViewModel : ViewModel() {
    // ar is the ChannelActivityRecognition instance, obtained elsewhere in the project
    private val activityDescMutable by channelLiveData { ar.activities }
    val activityDesc: LiveData<ActivityDesc> by lazy { activityDescMutable }
}

The above code lazily initializes an instance of MutableLiveData and exposes it as LiveData, so that it is not seen as mutable outside the scope of this class.

This is the extension function that performs the lazy initialization:

fun <T> ViewModel.channelLiveData(channel: () -> ReceiveChannel<T>) = lazy {
    MutableLiveData<T>().apply {
        // viewModelScope is cancelled when the ViewModel is cleared, which also cancels this loop.
        viewModelScope.launch(Dispatchers.IO) {
            while (true) {
                postValue(channel().receive())
            }
        }
    }
}

Because the above code spends most of its time waiting for values, I use the Dispatchers.IO dispatcher. I have to use postValue() as opposed to setValue(), though, because the coroutine runs off the main thread; postValue() ensures the views respond to the changes on the main thread.

I use a while (true) loop, as it is going to be canceled anyway when the viewModelScope is canceled. Whenever the receive() function suspends, it reacts immediately to the cancellation of its scope.
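For completeness, this is roughly how a view consumes the exposed LiveData. The fragment, layout, and view id below are hypothetical, not taken from the project:

import android.os.Bundle
import android.view.View
import android.widget.TextView
import androidx.fragment.app.Fragment
import androidx.lifecycle.Observer
import androidx.lifecycle.ViewModelProvider

class MainFragment : Fragment(R.layout.fragment_main) {

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        val viewModel = ViewModelProvider(this).get(ActivityDescViewModel::class.java)
        val activityDescText = view.findViewById<TextView>(R.id.activity_desc)
        viewModel.activityDesc.observe(viewLifecycleOwner, Observer { desc ->
            // The observer runs on the main thread, thanks to postValue().
            activityDescText.text = desc.toString()
        })
    }
}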

Testing

This is the test version of the ChannelActivityRecognition:

class MockActivityRecognition : ChannelActivityRecognition() {

    init {
        GlobalScope.launch(Dispatchers.IO) {
            repeat(Int.MAX_VALUE) {
                delay(1000)
                // Alternate between the two mock activities every second.
                channel.offer(if (it % 2 == 0) ACT1 else ACT2)
            }
        }
    }

    companion object {
        const val DESC1 = "on foot"
        const val CONF1 = 50
        const val DESC2 = "in vehicle"
        const val CONF2 = 100

        private val ACT1 = ActivityDesc(DESC1, CONF1)
        private val ACT2 = ActivityDesc(DESC2, CONF2)
    }
}

It is not relevant whether you use channel.send() or channel.offer() in this case, as the code is already executing inside a CoroutineScope, and because this Channel is CONFLATED, both functions behave identically.

I use a repeat(Int.MAX_VALUE) loop, as I want to emulate the real situation in which activity recognition never ends until you either close the application or deliberately cancel it. I didn’t want to use a while loop, because I wanted to use the implicit it parameter inside the block, which here holds the successive iteration index.
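For comparison, the equivalent while loop inside the same init block would need an explicit counter (a sketch of mine, not code from the project):

GlobalScope.launch(Dispatchers.IO) {
    var i = 0
    while (true) {
        delay(1000)
        channel.offer(if (i % 2 == 0) ACT1 else ACT2)
        i++
    }
}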

The previous article contains instructions on how to set up your Espresso tests to use this particular mock implementation of activity recognition.