SDK Integration

Android Walkthrough

Access the SDK

The SDK is provided as the capturMicroMobility-release.aar file for Android devices, available in the Reference Implementation Repository.

ℹ️ Note: this repository is restricted access. If you see a 404, please confirm that your GitHub user has been granted access.

Requirements

  1. Android version 10.0+
  2. Target - Android devices
  3. You will need a target workspace on the www.captur.ai platform, and:
    1. the associated API Key; you can generate an API key by logging into the Captur Dashboard
    2. the associated assetType; e.g. eBike, eScooter, seatedEScooter
    3. the associated locationName; e.g. London, Paris, Tokyo

Installation

To add this .AAR file to your project:

  1. Add the .aar file to your app's libs folder, i.e. app/libs. Create the libs folder if it is not present.

  2. Open the build.gradle file (app module) and add the following dependency:

    implementation(files("libs/capturMicroMobility-release.aar"))

  3. Ensure Camera and Flash permissions are in the manifest file:

    <uses-feature android:name="android.hardware.camera.flash" />
    <uses-feature android:name="android.hardware.camera" android:required="false" />
    <uses-permission android:name="android.permission.CAMERA" />

❗️

The SDK will fail if Camera and Flash permissions are denied by the user.
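Since the SDK fails when camera access is denied, request android.permission.CAMERA before presenting the camera view. The gating decision can be sketched as follows; the platform permission state is stubbed with an enum so the snippet stands alone (in a real app, derive it from ContextCompat.checkSelfPermission and request via ActivityResultContracts.RequestPermission):

```kotlin
// Stand-in for the platform permission state, so the sketch compiles on its own.
// In a real app, derive this from ContextCompat.checkSelfPermission(...).
enum class CameraPermission { GRANTED, NOT_YET_REQUESTED, DENIED }

// Decide what to do before presenting the Captur camera view.
fun nextAction(permission: CameraPermission): String = when (permission) {
    CameraPermission.GRANTED -> "startScan"                   // safe to present the camera
    CameraPermission.NOT_YET_REQUESTED -> "requestPermission" // ask the user first
    CameraPermission.DENIED -> "showRationale"                // explain why camera access is needed
}

fun main() {
    println(nextAction(CameraPermission.GRANTED)) // startScan
}
```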

  4. Update your app dependencies to include:

        implementation("org.jetbrains.kotlinx:kotlinx-serialization-json:1.5.1")
        implementation ("com.google.code.gson:gson:2.10.1")
        implementation ("androidx.camera:camera-core:1.3.3")
        implementation ("androidx.camera:camera-camera2:1.3.3")
        implementation ("androidx.camera:camera-lifecycle:1.3.3")
        implementation ("androidx.camera:camera-view:1.3.3")
        api("org.tensorflow:tensorflow-lite-task-vision:0.4.0")
        implementation("com.squareup.retrofit2:adapter-rxjava2:2.9.0")
    
        implementation("com.squareup.moshi:moshi-kotlin:1.14.0")
        implementation("com.squareup.retrofit2:retrofit:2.9.0")
        implementation("com.squareup.retrofit2:converter-moshi:2.9.0")
        implementation("com.squareup.okhttp3:okhttp:4.11.0")
        implementation("com.squareup.okhttp3:logging-interceptor:4.11.0")
        implementation ("androidx.lifecycle:lifecycle-viewmodel-ktx:2.8.0")
    
        implementation("com.google.accompanist:accompanist-permissions:0.31.0-alpha")
  5. Sync the project with Gradle files.

Usage

1 - Captur.init() to set up the SDK

  1. Call the Captur.init() method before accessing any other Captur-related methods or properties.
  2. Configure the SDK early in the app lifecycle, typically during app launch or at the start of the ride. For example, in ApplicationClass.kt:
class ApplicationClass : Application() {
    override fun onCreate() {
        super.onCreate()
        try {
            Captur.init(
                this,
                "<YOUR_API_KEY>"
            )
        } catch (e: Exception) {
            Log.d("error", e.message ?: "error with tf lite model")
        }
    }
}

2 - prepareModel() at start of ride to initialise the latest model

  1. The function ensures the correct model is available, based on the assetType and locationName provided.
    1. If the on-device model matches the latest model, the function initialises it.
    2. If a model update is available, the function downloads the new model and overwrites the existing one.

      🚧

      Call prepareModel() early to allow sufficient time for model downloads

  2. This can be a fire-and-forget call. You can check whether a model has been successfully initialised before you proceed to get the configuration and present the camera:
Captur.prepareModel(
    "<LOCATION_NAME>",
    <ASSET_TYPE>
) { success: Boolean, capturError: CapturError? ->
    if (success) {
        // Model initialisation succeeded. Proceed to get configs.
    } else {
        // Handle errors
    }
}

3 - getConfig() at end of ride to retrieve the configuration

Each session requires a configuration from the Captur backend. A configuration implements the policies that you set up in the control centre.

🚧

Check that you are using the correct locationName and assetType, matching the control centre.

The function will return an error of type .modelInitialisationFailure if, for any reason, the previous prepareModel() call failed to initialise, download, or store the model.

Captur.getConfiguration(
    "<LOCATION_NAME>",
    CapturAssetType.E_SCOOTER,
    0.0,
    0.0
) { success: Boolean, capturError: CapturError? ->
    if (success) {
        // Config success
        // Ready to present camera
    } else {
        // Handle errors
    }
}

❗️

The camera view depends on prepareModel() and getConfig()

Only present the camera view after you have called the prepareModel() function and obtained the configuration via the getConfig() function for each verification session.
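The ordering above can be sketched as follows; the SDK calls are stubbed with plain callback functions so the snippet stands alone (in your app, substitute the real Captur.prepareModel and Captur.getConfiguration calls):

```kotlin
// Stubs standing in for the SDK's callback-based calls, so the flow compiles on its own.
fun prepareModelStub(onDone: (Boolean) -> Unit) = onDone(true)      // model ready
fun getConfigurationStub(onDone: (Boolean) -> Unit) = onDone(true)  // config fetched

// Gate the camera behind both steps: only present it once the model is
// initialised AND the session configuration has been retrieved.
fun startVerification(presentCamera: () -> Unit, onFailure: () -> Unit) {
    prepareModelStub { modelReady ->
        if (!modelReady) return@prepareModelStub onFailure()
        getConfigurationStub { configReady ->
            if (configReady) presentCamera() else onFailure()
        }
    }
}

fun main() {
    startVerification(
        presentCamera = { println("camera presented") },
        onFailure = { println("handle error") }
    )
}
```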

4 - Define CapturCameraManager to handle scanning

Most of the interface with the Captur SDK happens via the CapturCameraManager. Whether you are using Jetpack Compose or a traditional Activity/Fragment, ensure you initialise CapturCameraManager where you have an active reference to it.

We recommend configuring the CapturCameraManager within ViewModels, but this depends on how you have architected your app.

4.1. Prepare your view model or host class

class CameraViewModel(val manager: CapturCameraManager) : ViewModel() {
  //Manage the camera via a view model class
}

4.2. Initialise CapturCameraManager and subscribe to events

class CameraViewModel(val manager: CapturCameraManager) : ViewModel() {  
   init {
        manager.subscribeToEvents(this)
    }
}

Calling the view model:

val captureManager = CapturCameraManager(reference)
viewModel = CameraViewModel(captureManager)
  • Pass in a unique reference for each verification task. For example, your rideId. Records in the Captur dashboard are searchable by reference, and this is also used for invoicing, analytics, and debugging.

📘

Attempts vs Sessions

You can have more than one attempt per session. For example - you might ask a rider to retry parking compliance after a bad parking prediction decision event.

To allow multiple attempts, simply initialise CapturCameraManager with the same reference, or alternatively call manager.retake(), depending on what suits your architecture better.

In hybrid apps such as Flutter, the camera is managed natively but your feedback views might be managed in the hybrid code. In cases like this, it is better to re-initialise the manager with the same reference in order to increment the attempt count.
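As a self-contained illustration of the attempt-per-reference idea (ManagerStub is a hypothetical stand-in, not the real CapturCameraManager API beyond the retake() call mentioned above):

```kotlin
// Hypothetical stand-in for CapturCameraManager, illustrating how attempts
// accumulate under one reference (one session, multiple attempts).
class ManagerStub(val reference: String) {
    var attempt = 1
        private set

    // Mirrors manager.retake(): same session/reference, next attempt.
    fun retake() { attempt += 1 }
}

fun main() {
    val manager = ManagerStub("ride-42")   // e.g. your rideId as the reference
    manager.retake()                        // rider retries after a badParking decision
    println("${manager.reference}: attempt ${manager.attempt}") // ride-42: attempt 2
}
```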

❗️

Unique Identifiers for Verification

Avoid passing in userId or bike/scooter IDs as they are not unique to each verification task. Usually, clients prefer passing in their rideId or equivalent as the reference.

5 - Handle Events

In the previous step, we subscribed to events by calling manager.subscribeToEvents(this).

We need to conform to the events interface:

class CameraViewModel(val manager: CapturCameraManager) : ViewModel(), CapturEvents {  
   init {
        manager.subscribeToEvents(this)
    }
}

Implement the interface methods

override fun capturDidGenerateEvent(state: CapturCameraState, metadata: CapturOutput?) {}

override fun capturDidGenerateError(error: CapturError) {}

override fun capturDidGenerateGuidance(metadata: CapturOutput) {}

Let's go through the events one by one.

5.1 capturDidGenerateEvent(state: , metadata: )

The capturDidGenerateEvent event reports the UI states that occur while the SDK presents the camera view. You can switch over the states to handle the different cases.

The CAMERA_RUNNING state indicates that the camera has started making predictions on the live feed. The CAMERA_DECIDED state indicates that the camera has finished making predictions and has arrived at a decision: goodParking, badParking, improvableParking, or insufficientInformation.

You will also receive a final image, which you can present in your UI and upload to your backend. This data is available via the metadata property.

override fun capturDidGenerateEvent(state: CapturCameraState, metadata: CapturOutput?) {
  when (state) {
    CapturCameraState.CAMERA_RUNNING -> {
      //Camera running
    }

    CapturCameraState.CAMERA_DECIDED -> {
      metadata?.let {
        val decision = it.decision
        //Handle your flow based on what the decision is
      }
    }
  }
}

You can define a handleDecision() function to handle the final decision predicted by the SDK:

fun handleDecision(metadata: CapturOutput) {
    when (metadata.decision) {
        GOOD_PARKING -> {
            // Handle good parking flow
        }
        BAD_PARKING -> {
            // Handle bad parking flow
        }
        IMPROVABLE_PARKING -> {
            // Handle improvable parking flow
        }
        INSUFFICIENT_INFORMATION -> {
            // Handle the flow when the SDK predicts insufficient information.
            // Example - vehicle too close, no vehicle in image, image quality too poor, etc.
        }
    }
}

5.2 capturDidGenerateGuidance(metadata: )

A guidance event is emitted while the scan is still active. These events should tell the user to move their phone, so that they capture enough of the required environment for a decision.

For example, an event can be emitted when the user attempts to end the ride while pointing the camera at the handlebars instead of the whole vehicle. While scanning is ongoing, you can show a card view that says "The vehicle is too close. Please take a step back".

The event metadata includes guidanceTitle and guidanceDetail properties, which you can use to display feedback.

override fun capturDidGenerateGuidance(metadata: CapturOutput) {
    val guidanceTitle = metadata.guidanceTitle ?: return
    val guidanceDetail = metadata.guidanceDetail ?: return

    // Use these strings to drive your own guidance UI, e.g. a card view on top of the camera
    showGuidance(guidanceTitle, guidanceDetail)
}

🚧

Decision event and guidance event metadata is visible in the control centre, but not editable or localised.

The recommended implementation is to define your own copy based on the decision or reason_code.

5.3 capturDidRevokeGuidance()

If the SDK detects that guidance is no longer required, the capturDidRevokeGuidance() event will be triggered. You can use this to stop showing any guidance UI.
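Sections 5.1 and 5.2 each show an override; the revoke handler can simply clear whatever guidance is on screen. A minimal self-contained sketch (the SDK callback is stubbed as a plain interface, and GuidanceHolder is a hypothetical stand-in for your view model state):

```kotlin
// Stub of the revoke callback, standing in for the SDK's events interface.
interface GuidanceRevocation {
    fun capturDidRevokeGuidance()
}

class GuidanceHolder : GuidanceRevocation {
    var guidanceTitle: String? = "Step back from the vehicle" // last guidance shown

    override fun capturDidRevokeGuidance() {
        // Guidance is no longer needed: hide the guidance card.
        guidanceTitle = null
    }
}

fun main() {
    val holder = GuidanceHolder()
    holder.capturDidRevokeGuidance()
    println(holder.guidanceTitle) // null
}
```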

5.4 capturDidGenerateError(error: )

The SDK might encounter errors caused by hardware or software issues. Use this event to handle them, mitigate blockers, and keep the user experience seamless. Note: this event fires with the errors below once the camera is presented and you have subscribed to events; it represents a camera runtime error.

override fun capturDidGenerateError(error: CapturError) {
  //Handle camera runtime errors here
}

🚧

Handling modelVerificationFailed Errors

You typically don’t need to handle the modelVerificationFailed error. This error might occur if the system fails to process a single frame, possibly due to temporary CPU overload, but it doesn’t mean the entire flow should stop.

The SDK processes multiple frames per second, and occasionally, some frames may not be verified. For example, if a high number of frames within a short period fail, the modelVerificationFailed error will be triggered multiple times.

A consistently high failure rate may indicate an issue with the model or frame capture. You can set an acceptable failure rate based on your requirements.
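One way to track this is a sliding window over recent frame outcomes; the window size and threshold below are illustrative choices, not SDK values:

```kotlin
// Sketch: tolerate occasional modelVerificationFailed errors, but flag a
// sustained failure rate. Window size and threshold are illustrative.
class FailureRateTracker(
    private val windowSize: Int = 30,       // recent frames considered
    private val maxFailureRate: Double = 0.5
) {
    private val window = ArrayDeque<Boolean>()

    /** Record one frame outcome; returns true while the failure rate is acceptable. */
    fun record(failed: Boolean): Boolean {
        window.addLast(failed)
        if (window.size > windowSize) window.removeFirst()
        val rate = window.count { it }.toDouble() / window.size
        return rate <= maxFailureRate
    }
}

fun main() {
    val tracker = FailureRateTracker(windowSize = 4, maxFailureRate = 0.5)
    println(tracker.record(failed = true))   // 1/1 failed -> rate 1.0 -> false
    repeat(3) { tracker.record(failed = false) }
    println(tracker.record(failed = false))  // window mostly ok -> true
}
```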

6 - Present the camera

The SDK ships with CapturCameraPreview, a fully managed camera system. You can add your own UI layer on top of it. First, create a new Compose view; in this case, we'll call it CameraScreen.

To tie all this together, first define interface and handler classes. There are many ways to go about this; what is shown here is one opinionated approach.

class CameraUiState {
    var shouldShowFeedBackScreen by mutableStateOf(false)
    var output by mutableStateOf<CapturOutput?>(null)
    var touchLightOn by mutableStateOf(false)
    var zoomOn by mutableStateOf(false)
    var backPress by mutableStateOf(false)
}

interface CameraScreenEventHandler {
    fun onFlashLightClicked()
    fun onZoomClicked()
    fun onRetryClicked()
    fun onFinishClicked()
}

Define the CameraScreen composable:

@Composable
fun CameraScreen(
    uiState: CameraUiState,
    eventHandler: CameraScreenEventHandler,
    manager: CapturCameraManager,
    refr: String,
    onFinish: () -> Unit,
) {
    val context = LocalContext.current

    if (uiState.backPress) {
        onFinish()
    } else if (uiState.shouldShowFeedBackScreen) {
        ContextCompat.startActivity(
            context,
            Intent(context, FeedBackActivity::class.java).putExtra(
                "state",
                Gson().toJson(uiState.output)
            ).putExtra("ref", refr),
            null
        )
        onFinish()
    } else {
        Box(modifier = Modifier.fillMaxSize()) {
            CapturCameraPreview(uiState.touchLightOn, uiState.zoomOn, manager)
            CameraOverlay(uiState, eventHandler)

        }
    }
}

The CapturCameraPreview requires an instance of CapturCameraManager to be passed in. It also expects a state-management toggle for the flash, which is important in low-light conditions.

Now define the camera overlay to show other UI elements, such as guidance (if captured via events) and important buttons like an exit button and a flash button.

@Composable
fun CameraOverlay(uiState: CameraUiState, eventHandler: CameraScreenEventHandler) {
    // Client overlay here...
    Box(
        modifier = Modifier.fillMaxSize()
    ) {

        uiState.output?.let {
            InformationText(R.drawable.infofilled, it.guidanceTitle, it.guidanceDetail)
        }

        Row(
            modifier = Modifier
                .align(Alignment.BottomCenter)
                .padding(AppSizes.current.Space3)
                .clip(shape = RoundedCornerShape(AppSizes.current.CornerSize2))
                .background(AppColors.current.White)
                .padding(AppSizes.current.Padding2)
        ) {
            CircularProgressIndicator(
                modifier = Modifier
                    .size(20.dp)
                    .align(Alignment.CenterVertically),
                color = AppColors.current.Purple
            )

            Text(
                text = stringResource(id = R.string.analysing_parking),
                modifier = Modifier.padding(start = AppSizes.current.Padding),
                style = AppTextStyles.current.Regular16,
                color = AppColors.current.Black
            )
        }

        IconButton(
            onClick = {
                eventHandler.onFinishClicked()
            },
            modifier = Modifier
                .padding(30.dp)
                .size(IconSize)
                .clip(shape = CircleShape)
                .align(Alignment.BottomStart)
                .background(AppColors.current.White)
        ) {
            Icon(Icons.Default.Clear, contentDescription = "", tint = AppColors.current.Black)
        }

        IconButton(
            onClick = {
                eventHandler.onFlashLightClicked()
            },
            modifier = Modifier
                .padding(30.dp)
                .size(IconSize)
                .clip(shape = CircleShape)
                .align(Alignment.BottomEnd)
                .background(AppColors.current.White)
        ) {

            Icon(
                painterResource(R.drawable.ic_flash_light),
                contentDescription = "",
                tint = Color.Black
            )
        }
    }
}

Define your CameraActivity:

class CameraActivity : ComponentActivity() {
    lateinit var viewModel: CameraViewModel
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        enableEdgeToEdge()
        setContent {
            val c: String = intent.extras?.getString("ref").toString()
            val reference = rememberSaveable {
                if (c == "null") (UUID.randomUUID().toString()) else c
            }
            val manager = CapturCameraManager(reference)
            viewModel = CameraViewModel(manager)
            YourAppTheme {
                Scaffold(modifier = Modifier.fillMaxSize()) { innerPadding ->
                    Column(modifier = Modifier.padding(innerPadding)) {
                        Surface(
                            modifier = Modifier.fillMaxSize(),
                            color = MaterialTheme.colorScheme.background
                        ) {
                            CameraScreen(
                                uiState = viewModel.uiState,
                                viewModel,
                                viewModel.manager,
                                reference
                            ) {
                                finish()
                            }
                        }
                    }
                }
            }
        }
    }
}

Your final CameraViewModel will now look like this:

class CameraViewModel(val manager: CapturCameraManager) : ViewModel(), CapturEvents,
    CameraScreenEventHandler {

    val uiState = CameraUiState()

    init {
        manager.subscribeToEvents(this)
    }

    override fun onFlashLightClicked() {
        uiState.touchLightOn = !uiState.touchLightOn

    }

    override fun onRetryClicked() {
        uiState.shouldShowFeedBackScreen = false
        uiState.output = null
    }

    override fun onFinishClicked() {
        uiState.backPress = true
        dismissCamera()
    }

    private fun dismissCamera() {
        uiState.backPress = true
    }

    override fun capturDidGenerateEvent(state: CapturCameraState, metadata: CapturOutput?) {
        when (state) {
            CapturCameraState.CAMERA_RUNNING -> {
                uiState.shouldShowFeedBackScreen = false
            }

            CapturCameraState.CAMERA_DECIDED -> {
                metadata?.let {
                    uiState.output = it
                }
                uiState.shouldShowFeedBackScreen = true
            }
        }
    }

    override fun capturDidGenerateError(error: CapturError) {
        println("error ${error.errorMessage}")
    }

    override fun capturDidGenerateGuidance(metadata: CapturOutput) {
        metadata.let {
            uiState.output = it
        }
    }
}

👍

Customising Feedback

The reference implementation shows a feedback screen for all outcomes. You can customize your view model to react to successful parking decisions by displaying an animated "success" screen to congratulate users, or skip this entirely for an experience that feels super-fast.

Understanding the flow

  1. Prepare your on-device model (initialise the existing model, or download/update a new one) based on your locationName and assetType
  2. Get your configuration from Captur before the session starts
  3. Present the camera. The camera will run its verification and send back events
  4. Handle guidance events to display real-time feedback to the user while the camera is running
  5. Handle the decision event once the model has made an attempt decision, or the scan has timed out

Some changes to the flow are supported - see: updates to the scanning flow