Integration of Huawei Direction API in Navigation Glove IoT application Using Kotlin - Part 4 - Huawei Developers

Introduction
In this series of articles, we are building a Navigation Glove IoT application. In this part, we will integrate the Huawei Directions API and learn how to draw a polyline on the map.
If you are new to this series, check out my previous articles first:
Beginner: Integration of Huawei Account kit in Navigation Glove IoT application Using Kotlin - Part 1
Beginner: Integration of Huawei Map kit in Navigation Glove IoT application Using Kotlin - Part 2
Beginner: Integration of Huawei Site kit in Navigation Glove IoT application Using Kotlin - Part 3
What is the Directions API?
Huawei Map Kit provides a set of HTTP/HTTPS APIs that you can use to build map data functions such as route planning, static maps, and raster maps.
The Directions API is a set of HTTPS-based APIs used to plan routes. It returns data in JSON format, which you can parse to draw the route on the map.
It supports the following route types:
Walking: you can plan routes of up to 150 kilometers.
Cycling: you can plan routes of up to 100 kilometers.
Driving: driving routes additionally offer the following functions (see the example after this list):
1. It returns up to 3 routes per request.
2. It supports up to 5 waypoints.
3. It provides real-time traffic conditions.
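For example, a driving request can pass intermediate stops in a waypoints array alongside the origin and destination. The snippet below is a hedged illustration based on the official Directions API request schema (verify the waypoints field against the current documentation; the app in this series sends only an origin and a destination):
JSON:
{
"origin": {"lat": 12.9702763, "lng": 77.5373127},
"destination": {"lat": 12.982126, "lng": 77.533103},
"waypoints": [
{"lat": 12.9752, "lng": 77.5353}
]
}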
Using REST services, we will integrate the Directions API.
Prerequisites
AppGallery Account
Android Studio 3.X
SDK Platform 19 or later
Gradle 4.6 or later
HMS Core (APK) 4.0.0.300 or later
Huawei Phone EMUI 3.0 or later
Non-Huawei Phone Android 4.4 or later
Service integration on AppGallery Connect
1. Register as a developer in AppGallery Connect.
2. Create an app by referring to Creating a Project and Creating an App in the Project.
3. Set the data storage location based on the current location.
4. Enable the Map Kit service on AppGallery Connect.
5. Generate a signing certificate fingerprint.
6. Configure the signing certificate fingerprint.
7. Download your agconnect-services.json file and add it to the app root directory.
Client development
1. Create an Android project in the Android Studio IDE.
2. Add the maven URL inside the repositories of buildscript and allprojects respectively (project level build.gradle file).
Code:
maven { url 'https://developer.huawei.com/repo/' }
3. Add the classpath inside the dependency section of the project level build.gradle file.
Code:
classpath 'com.huawei.agconnect:agcp:1.5.2.300'
4. Add the plugin in the app-level build.gradle file.
Code:
apply plugin: 'com.huawei.agconnect'
5. Add the below libraries to the dependencies section of the app-level build.gradle file.
Code:
implementation 'com.huawei.hms:maps:4.0.0.302'
implementation 'com.google.code.gson:gson:2.8.6'
implementation 'com.squareup.retrofit2:retrofit:2.7.2'
implementation 'com.squareup.retrofit2:converter-gson:2.7.2'
6. Add the following permissions to the AndroidManifest.xml.
XML:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
7. Sync the project.
By now, the user has selected the source and destination locations, and we have added markers on the map.
Direction API Code
We use the Retrofit library to call the Directions API.
First, let's look at the endpoint, and at what the request and response will be.
Method Type: POST
URL:
https://mapapi.cloud.huawei.com/mapApi/v1/routeService/driving?key=API key
Note: Replace "API key" with your project's API key.
Data Format: Request: Content-Type: application/json
Response: Content-Type: application/json
Request Example
POST https://mapapi.cloud.huawei.com/mapApi/v1/routeService/driving?key=API key HTTP/1.1
Content-Type: application/json
Accept: application/json
JSON:
{
"destination":{
"lat":12.982126,
"lng":77.533103
},
"origin":{
"lat":12.9702763,
"lng":77.5373127
}
}
Response
HTTP/1.1 200 OK
Content-type: application/json
You can verify this in Postman, REST Client, or any other tool that can send HTTP requests and show the response.
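For reference, the response body has roughly the following shape (heavily trimmed, with illustrative values; the field names match the response data classes we create below):
JSON:
{
"returnCode": "0",
"returnDesc": "OK",
"routes": [
{
"bounds": {
"southwest": {"lat": 12.9702, "lng": 77.5331},
"northeast": {"lat": 12.9821, "lng": 77.5373}
},
"paths": [
{
"distance": 2500.0,
"distanceText": "2.5 km",
"duration": 1800.0,
"durationText": "30 mins",
"startLocation": {"lat": 12.9702763, "lng": 77.5373127},
"endLocation": {"lat": 12.982126, "lng": 77.533103},
"steps": [
{
"instruction": "Head north",
"polyline": [{"lat": 12.9703, "lng": 77.5372}]
}
]
}
]
}
]
}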
Now let us set up the client code.
Step 1: Create the Const object.
Kotlin:
package com.huawei.navigationglove.api
object Const {
const val BASE_URL = "https://mapapi.cloud.huawei.com/mapApi/v1/"
}
Step 2: Create the ApiClient object.
Kotlin:
package com.huawei.navigationglove.api
import android.content.Context
import android.text.TextUtils
import okhttp3.Interceptor
import okhttp3.logging.HttpLoggingInterceptor
import okhttp3.OkHttpClient
import retrofit2.converter.gson.GsonConverterFactory
import com.jakewharton.retrofit2.adapter.rxjava2.RxJava2CallAdapterFactory
import okhttp3.HttpUrl
import okhttp3.Request
import retrofit2.Retrofit
import java.util.concurrent.TimeUnit
object ApiClient {
private val retrofit: Retrofit = Retrofit.Builder()
.baseUrl(Const.BASE_URL)
.client(setInterceptors())
.addConverterFactory(GsonConverterFactory.create())
.build()
private fun setInterceptors(): OkHttpClient {
val logger = HttpLoggingInterceptor()
logger.level = HttpLoggingInterceptor.Level.BODY
return OkHttpClient.Builder()
.readTimeout(60, TimeUnit.SECONDS)
.connectTimeout(60, TimeUnit.SECONDS)
.addInterceptor { chain ->
val url: HttpUrl = chain.request().url.newBuilder()
.addQueryParameter(
"key",
"ADD_YOUR_API_KEY_HERE"
)
.build()
val request = chain.request().newBuilder()
.header("Content-Type", "application/json")
.url(url)
.build()
chain.proceed(request)
}
.addInterceptor(logger)
.build()
}
fun createApiService(): ApiService {
return ApiClient.retrofit.create(ApiService::class.java)
}
fun <S> createService(serviceClass: Class<S>?): S {
return retrofit.create(serviceClass)
}
}
Step 3: Create the ApiService interface.
Kotlin:
package com.huawei.navigationglove.api
import com.huawei.navigationglove.api.request.DirectionRequest
import com.huawei.navigationglove.api.response.DirectionResponse
import io.reactivex.Single
import retrofit2.Call
import retrofit2.http.*
interface ApiService {
@POST("routeService/{type}")
fun getDirectionsWithType(
@Path(value = "type",encoded = true) type : String,
@Body directionRequest: DirectionRequest
): Call<DirectionResponse>
}
Step 4: Create the TypeOfDirection enum for driving/walking/cycling routes.
Kotlin:
package com.huawei.navigationglove.api
enum class TypeOfDirection(val type: String) {
WALKING("walking"),
BICYCLING("bicycling"),
DRIVING("driving")
}
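The enum's type value is substituted into the @Path placeholder of getDirectionsWithType, so the three values map to routeService/walking, routeService/bicycling, and routeService/driving. Here is a minimal sketch of wiring these pieces together (DirectionRequest, Origin, and Destination are the request classes created in Step 5 below; the coordinates are sample values):
Kotlin:
// Builds (but does not yet execute) a POST to mapApi/v1/routeService/driving;
// the interceptor in ApiClient appends the key query parameter automatically.
val request = DirectionRequest(
    Origin(lng = 77.5373127, lat = 12.9702763),
    Destination(lng = 77.533103, lat = 12.982126)
)
val call = ApiClient.createApiService()
    .getDirectionsWithType(TypeOfDirection.DRIVING.type, request)
// call.enqueue(...) is shown later in getDirection().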
Before creating the request and response classes, let us add a plugin that converts JSON to Kotlin data classes (for example, the JSON To Kotlin Class plugin).
Choose File > Settings > Plugins, then search for and install it.
Step 5: Now we need to create the request Kotlin classes.
Origin.kt
Kotlin:
package com.huawei.navigationglove.api.request
import com.google.gson.annotations.SerializedName
data class Origin(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
Destination.kt
Kotlin:
package com.huawei.navigationglove.api.request
import com.google.gson.annotations.SerializedName
data class Destination(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
DirectionRequest.kt
Kotlin:
package com.huawei.navigationglove.api.request
import com.google.gson.annotations.SerializedName
data class DirectionRequest(@SerializedName("origin")
val origin: Origin,
@SerializedName("destination")
val destination: Destination)
Step 6: Now create the response classes, in the same way as the request classes.
DirectionResponse.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class DirectionResponse(@SerializedName("routes")
val routes: List<RoutesItem>?,
@SerializedName("returnCode")
val returnCode: String = "",
@SerializedName("returnDesc")
val returnDesc: String = "")
Bounds.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class Bounds(@SerializedName("southwest")
val southwest: Southwest,
@SerializedName("northeast")
val northeast: Northeast)
EndLocation.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class EndLocation(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
Northeast.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class Northeast(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
PathsItem.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class PathsItem(@SerializedName("duration")
val duration: Double = 0.0,
@SerializedName("durationText")
val durationText: String = "",
@SerializedName("durationInTrafficText")
val durationInTrafficText: String = "",
@SerializedName("durationInTraffic")
val durationInTraffic: Double = 0.0,
@SerializedName("distance")
val distance: Double = 0.0,
@SerializedName("startLocation")
val startLocation: StartLocation,
@SerializedName("startAddress")
val startAddress: String = "",
@SerializedName("distanceText")
val distanceText: String = "",
@SerializedName("steps")
val steps: List<StepsItem>?,
@SerializedName("endLocation")
val endLocation: EndLocation,
@SerializedName("endAddress")
val endAddress: String = "")
PolylineItem.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class PolylineItem(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
RoutesItem.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class RoutesItem(@SerializedName("trafficLightNum")
val trafficLightNum: Int = 0,
@SerializedName("dstInDiffTimeZone")
val dstInDiffTimeZone: Int = 0,
@SerializedName("crossCountry")
val crossCountry: Int = 0,
@SerializedName("hasRestrictedRoad")
val hasRestrictedRoad: Int = 0,
@SerializedName("hasRoughRoad")
val hasRoughRoad: Int = 0,
@SerializedName("hasTrafficLight")
val hasTrafficLight: Int = 0,
@SerializedName("crossMultiCountries")
val crossMultiCountries: Int = 0,
@SerializedName("dstInRestrictedArea")
val dstInRestrictedArea: Int = 0,
@SerializedName("overviewPolyline")
val overviewPolyline: String = "",
@SerializedName("paths")
val paths: List<PathsItem>?,
@SerializedName("bounds")
val bounds: Bounds,
@SerializedName("hasTolls")
val hasTolls: Int = 0,
@SerializedName("hasFerry")
val hasFerry: Int = 0)
Southwest.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class Southwest(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
StartLocation.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class StartLocation(@SerializedName("lng")
val lng: Double = 0.0,
@SerializedName("lat")
val lat: Double = 0.0)
StepsItem.kt
Kotlin:
package com.huawei.navigationglove.api.response
import com.google.gson.annotations.SerializedName
data class StepsItem(@SerializedName("duration")
val duration: Double = 0.0,
@SerializedName("orientation")
val orientation: Int = 0,
@SerializedName("durationText")
val durationText: String = "",
@SerializedName("distance")
val distance: Double = 0.0,
@SerializedName("startLocation")
val startLocation: StartLocation,
@SerializedName("instruction")
val instruction: String = "",
@SerializedName("action")
val action: String = "",
@SerializedName("distanceText")
val distanceText: String = "",
@SerializedName("roadName")
val roadName: String = "",
@SerializedName("endLocation")
val endLocation: EndLocation,
@SerializedName("polyline")
val polyline: List<PolylineItem>?)
Now that everything is set up, let us call the API.
If you are wondering about originReq and destinationReq, see my previous article: these two request objects are created when the user selects the source and destination locations.
Kotlin:
navigate.setOnClickListener {
val directionRequest = DirectionRequest(originReq!!, destinationReq!!)
getDirection(directionType, directionRequest)
}
private fun getDirection(directionType: String, directionRequest: DirectionRequest) {
ApiClient.createApiService()
.getDirectionsWithType(directionType, directionRequest)
.enqueue(object : Callback<DirectionResponse> {
override fun onFailure(call: Call<DirectionResponse>, t: Throwable) {
Toast.makeText(
this@HomeScreenActivity,
"Failure" + t.localizedMessage + "\n" + t.message,
Toast.LENGTH_SHORT
).show()
}
override fun onResponse(
call: Call<DirectionResponse>,
response: Response<DirectionResponse>
) {
if (response.isSuccessful) {
response.body()?.let {
it.routes?.get(0)?.paths?.get(0)?.let { it1 -> addPolyLines(it1) }
Toast.makeText(this@HomeScreenActivity, "Success", Toast.LENGTH_SHORT)
.show()
}
//startActivity(Intent(this@HomeScreenActivity, MapsActivity::class.java))
}
}
})
}
Adding the polyline
Kotlin:
var polyLine: Polyline? = null
private fun addPolyLines(path: PathsItem) {
if (polyLine != null) {
polyLine!!.remove()
}
val options = PolylineOptions()
options.add(LatLng(path.startLocation.lat, path.startLocation.lng))
path.steps!!.forEach {
it.polyline!!.forEach { it1 ->
options.add(LatLng(it1.lat, it1.lng))
}
}
options.add(LatLng(path.endLocation.lat, path.endLocation.lng))
options.color(Color.BLACK)
options.width(6f)
polyLine = hMap!!.addPolyline(options)
}
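Optionally, after drawing the route you can move the camera so the whole route is visible, using the bounds field from the response. This is a hedged sketch assuming HMS Map Kit's LatLngBounds and CameraUpdateFactory.newLatLngBounds, which mirror the camera APIs already used in this series:
Kotlin:
private fun zoomToRoute(route: RoutesItem) {
    // Build map bounds from the route's southwest/northeast corners.
    val bounds = LatLngBounds(
        LatLng(route.bounds.southwest.lat, route.bounds.southwest.lng),
        LatLng(route.bounds.northeast.lat, route.bounds.northeast.lng)
    )
    // Animate the camera with 100 px of padding around the route.
    hMap?.animateCamera(CameraUpdateFactory.newLatLngBounds(bounds, 100))
}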
Now let us see the full code.
HomeScreenActivity.kt
Kotlin:
package com.huawei.navigationglove.ui
import android.content.Intent
import android.graphics.Color
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.util.Log
import android.view.View
import android.widget.Toast
import androidx.activity.result.contract.ActivityResultContracts
import com.huawei.hms.maps.*
import com.huawei.hms.maps.model.*
import com.huawei.hms.site.api.model.Site
import com.huawei.hms.site.widget.SearchIntent
import com.huawei.navigationglove.R
import com.huawei.navigationglove.api.ApiClient
import com.huawei.navigationglove.api.TypeOfDirection
import com.huawei.navigationglove.api.request.Destination
import com.huawei.navigationglove.api.request.DirectionRequest
import com.huawei.navigationglove.api.request.Origin
import com.huawei.navigationglove.api.response.DirectionResponse
import com.huawei.navigationglove.api.response.PathsItem
import kotlinx.android.synthetic.main.activity_home_screen.*
import retrofit2.Call
import retrofit2.Callback
import retrofit2.Response
import java.net.URLEncoder
import android.widget.RadioButton
class HomeScreenActivity : AppCompatActivity(), OnMapReadyCallback {
private val TAG = HomeScreenActivity::class.java.name
private val API_KEY: String =
"Add your API Key here"
private var originReq: Origin? = null
private var destinationReq: Destination? = null
var hMap: HuaweiMap? = null
private val searchIntent = SearchIntent()
private var mMarker: Marker? = null
private var mCircle: Circle? = null
private var directionType: String = TypeOfDirection.WALKING.type
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_home_screen)
val mSupportMapFragment: SupportMapFragment? =
supportFragmentManager.findFragmentById(R.id.mapfragment_mapfragmentdemo) as SupportMapFragment?
mSupportMapFragment!!.getMapAsync(this)
//Site kit
val apiKey = URLEncoder.encode(
API_KEY,
"utf-8"
)
searchIntent.setApiKey(apiKey)
//Select source location
start.setOnClickListener {
//You can try the below one for older method
//selectSourceLocation()
/*val intent = searchIntent.getIntent(this)
startActivityForResult(intent, SearchIntent.SEARCH_REQUEST_CODE)*/
selectSourceLocation()
}
//Select Destination Location
destination.setOnClickListener {
selectDestinationLocation()
}
navigate.setOnClickListener {
val directionRequest = DirectionRequest(originReq!!, destinationReq!!)
getDirection(directionType, directionRequest)
}
directionTypeGroup.setOnCheckedChangeListener { group, checkedId -> // checkedId is the RadioButton selected
val directionTypeRadioButton = findViewById<View>(checkedId) as RadioButton
when (directionTypeRadioButton.text.toString().toLowerCase()) {
"walking" -> directionType = TypeOfDirection.WALKING.type
"driving" -> directionType = TypeOfDirection.DRIVING.type
"cycling" -> directionType = TypeOfDirection.BICYCLING.type
}
Toast.makeText(applicationContext, directionTypeRadioButton.text, Toast.LENGTH_SHORT)
.show()
}
}
private fun selectSourceLocation() {
val intent = searchIntent.getIntent(this)
sourceLocationLauncher.launch(intent)
}
private var sourceLocationLauncher =
registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
val data: Intent? = result.data
if (data != null) {
if (SearchIntent.isSuccess(result.resultCode)) {
val site: Site = searchIntent.getSiteFromIntent(data)
start.text = site.getName()
originReq = Origin(site.location.lng, site.location.lat)
//originReq = Origin(-4.66529, 54.216608)
//Toast.makeText(application, site.getName(), Toast.LENGTH_LONG).show()
val build = CameraPosition.Builder()
.target(LatLng(site.location.lat, site.location.lng)).zoom(16f).build()
val cameraUpdate = CameraUpdateFactory.newCameraPosition(build)
hMap!!.animateCamera(cameraUpdate)
//Setting max and min zoom
//hMap!!.setMaxZoomPreference(10f)
//hMap!!.setMinZoomPreference(1f)
// Marker can be add by HuaweiMap
mMarker = hMap!!.addMarker(
MarkerOptions().position(LatLng(site.location.lat, site.location.lng))
.icon(BitmapDescriptorFactory.fromResource(R.drawable.ic_baseline_location_on_24))
.clusterable(true)
)
mMarker?.showInfoWindow()
// circle can be added to HuaweiMap
/*mCircle = hMap!!.addCircle(
CircleOptions().center(LatLng(28.7041, 77.1025)).radius(45000.0).fillColor(
Color.GREEN))
mCircle?.fillColor = Color.TRANSPARENT*/
}
} else {
Toast.makeText(application, "Unable to find the data", Toast.LENGTH_LONG).show()
}
}
private fun selectDestinationLocation() {
val intent = searchIntent.getIntent(this)
destinationLocationLauncher.launch(intent)
}
private var destinationLocationLauncher =
registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
val data: Intent? = result.data
if (data != null) {
if (SearchIntent.isSuccess(result.resultCode)) {
val site: Site = searchIntent.getSiteFromIntent(data)
destination.text = site.getName()
destinationReq = Destination(site.location.lng, site.location.lat)
//destinationReq = Destination(-4.66552, 54.2166)
val build = CameraPosition.Builder()
.target(LatLng(site.location.lat, site.location.lng)).zoom(16f).build()
val cameraUpdate = CameraUpdateFactory.newCameraPosition(build)
hMap!!.animateCamera(cameraUpdate)
mMarker = hMap!!.addMarker(
MarkerOptions().position(LatLng(site.location.lat, site.location.lng))
.icon(BitmapDescriptorFactory.fromResource(R.drawable.ic_baseline_destination_location_on_24))
.clusterable(true)
)
mMarker?.showInfoWindow()
}
} else {
Toast.makeText(application, "Unable to find the data", Toast.LENGTH_LONG).show()
}
}
override fun onMapReady(huaweiMap: HuaweiMap) {
Log.d(TAG, "onMapReady: ")
hMap = huaweiMap
hMap!!.mapType = HuaweiMap.MAP_TYPE_NORMAL
hMap!!.uiSettings.isMyLocationButtonEnabled = true
// Add a polyline (mPolyline) to a map.
/* val mPolyline = hMap!!.addPolyline(
PolylineOptions().add(
LatLng(47.893478, 2.334595),
LatLng(48.993478, 3.434595),
LatLng(48.693478, 2.134595),
LatLng(48.793478, 2.334595)
)
)
// Set the color of the polyline (mPolyline) to red.
mPolyline.color = Color.RED
// Set the width of the polyline (mPolyline) to 10 pixels.
mPolyline.width = 10f*/
//Adding Polygon
/*hMap!!.addPolygon(PolygonOptions().addAll(createRectangle(LatLng(12.9716, 77.5946), 0.1, 0.1))
.fillColor(Color.GREEN)
.strokeColor(Color.BLACK))*/
//Adding Circle on the Map
/*hMap!!.addCircle(CircleOptions()
.center(LatLng(12.9716, 77.5946))
.radius(500.0)
.fillColor(Color.GREEN))*/
}
private fun createRectangle(
center: LatLng,
halfWidth: Double,
halfHeight: Double
): List<LatLng> {
return listOf(
LatLng(center.latitude - halfHeight, center.longitude - halfWidth),
LatLng(center.latitude - halfHeight, center.longitude + halfWidth),
LatLng(center.latitude + halfHeight, center.longitude + halfWidth),
LatLng(center.latitude + halfHeight, center.longitude - halfWidth)
)
}
private fun getDirection(directionType: String, directionRequest: DirectionRequest) {
ApiClient.createApiService()
.getDirectionsWithType(directionType, directionRequest)
.enqueue(object : Callback<DirectionResponse> {
override fun onFailure(call: Call<DirectionResponse>, t: Throwable) {
Toast.makeText(
this@HomeScreenActivity,
"Failure" + t.localizedMessage + "\n" + t.message,
Toast.LENGTH_SHORT
).show()
}
override fun onResponse(
call: Call<DirectionResponse>,
response: Response<DirectionResponse>
) {
if (response.isSuccessful) {
response.body()?.let {
it.routes?.get(0)?.paths?.get(0)?.let { it1 -> addPolyLines(it1) }
Toast.makeText(this@HomeScreenActivity, "Success", Toast.LENGTH_SHORT)
.show()
}
//startActivity(Intent(this@HomeScreenActivity, MapsActivity::class.java))
}
}
})
}
var polyLine: Polyline? = null
private fun addPolyLines(path: PathsItem) {
if (polyLine != null) {
polyLine!!.remove()
}
val options = PolylineOptions()
options.add(LatLng(path.startLocation.lat, path.startLocation.lng))
path.steps!!.forEach {
it.polyline!!.forEach { it1 ->
options.add(LatLng(it1.lat, it1.lng))
}
}
options.add(LatLng(path.endLocation.lat, path.endLocation.lng))
options.color(Color.BLACK)
options.width(6f)
polyLine = hMap!!.addPolyline(options)
}
}
activity_home_screen.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:map="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".ui.HomeScreenActivity">
<fragment
android:id="@+id/mapfragment_mapfragmentdemo"
class="com.huawei.hms.maps.SupportMapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"
map:cameraTargetLat="12.9716"
map:cameraTargetLng="77.5946"
map:cameraZoom="10" />
<androidx.cardview.widget.CardView
android:id="@+id/cardview"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="center_horizontal|top"
android:layout_marginLeft="20dp"
android:layout_marginTop="20dp"
android:layout_marginRight="20dp"
android:elevation="100dp"
app:cardBackgroundColor="@android:color/white"
app:cardCornerRadius="8dp">
<LinearLayout
android:layout_width="fill_parent"
android:layout_height="match_parent"
android:orientation="vertical">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="horizontal">
<LinearLayout
android:id="@+id/locationLayout"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_weight="9.5"
android:background="@color/white"
android:orientation="vertical"
android:padding="20dp">
<TextView
android:id="@+id/start"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="1dp"
android:background="@android:color/transparent"
android:hint="Choose a starting point..."
android:maxLines="1"
android:textColor="@color/purple_200"
android:textSize="18sp" />
<View
android:layout_width="match_parent"
android:layout_height="5dp"
android:layout_marginTop="5dp"
android:layout_marginRight="50dp"
android:layout_marginBottom="5dp"
android:background="@drawable/dottet_line" />
<TextView
android:id="@+id/destination"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:background="@android:color/transparent"
android:hint="Choose a destination..."
android:maxLines="1"
android:textSize="18sp"
android:textColor="@color/purple_200"/>
</LinearLayout>
<ImageView
android:id="@+id/navigate"
android:layout_width="36dp"
android:layout_height="36dp"
android:layout_gravity="center"
android:layout_marginRight="10dp"
android:layout_weight="0.5"
android:src="@drawable/ic_baseline_send_24" />
</LinearLayout>
<View
android:layout_width="match_parent"
android:layout_height="5dp"
android:background="@drawable/dottet_line" />
<RadioGroup
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="5dp"
android:gravity="center"
android:id="@+id/directionTypeGroup"
android:orientation="horizontal">
<RadioButton
android:id="@+id/walking"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="10dp"
android:layout_weight="1"
android:checked="true"
android:text="Walking" />
<RadioButton
android:id="@+id/driving"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="10dp"
android:layout_weight="1"
android:text="Driving" />
<RadioButton
android:id="@+id/cycling"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="10dp"
android:layout_weight="1"
android:text="Cycling" />
</RadioGroup>
</LinearLayout>
</androidx.cardview.widget.CardView>
</FrameLayout>
Result
Tips and Tricks
1. Make sure you are already registered as a Huawei developer.
2. Set the minimum SDK version to 21 or later; otherwise, you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to the app folder.
4. Make sure you have added the SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
6. If you want the location feature in the map, make sure you add the location permission to the AndroidManifest.xml file.
7. If the app runs on Android 6.0 or later, make sure you handle runtime permissions.
8. Make sure you have URL-encoded your API key (UTF-8).
Conclusion
In this article, we have learnt how to integrate the Directions API into the Navigation Glove application using Android Studio and Kotlin, how to convert JSON into Kotlin data classes, and how to integrate Retrofit into the application.
Reference
Map Kit - Official document
Map Kit - Code lab
Map Kit - Training Video
Map Kit - Direction API

Related

Building a News Client with Network Kit and MVVM

Most Android applications download their content from the cloud (commonly a REST API) and then parse and display that information with lists and menus, in order to show dynamic content or provide a personalized experience. There are third-party libraries designed to consume a REST API (like Retrofit) or to download media content (like Glide and Picasso). This time, let me introduce you to the new Huawei Network Kit.
What is Network Kit?
Network Kit is Huawei's new system SDK, designed to simplify communication with web services by providing 2 main connection modes:
REST Client
HTTP Client
Network Kit supports QUIC connections automatically: if the web service supports QUIC or migrates to it, your app will keep working without requiring any change. In addition, this kit is pretty similar to the well-known Retrofit, so if you have previous experience with Retrofit, you will be able to integrate Network Kit without complications.
Previously, we made a news client by using the HQUIC kit. In this article, we are going to develop a news client application by using the new Huawei Network Kit.
Prerequisites
A developer account on newsapi.org
Android Studio 4.0 or later with the Kotlin plugin
Setting up the project
Network Kit doesn't require you to set up a project in AGC, but you still need to add the Huawei Maven repository to your project-level build.gradle:
Code:
buildscript {
ext.kotlin_version = "1.4.31"
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
dependencies {
classpath "com.android.tools.build:gradle:4.1.2"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
}
}
allprojects {
repositories {
...
maven {url 'https://developer.huawei.com/repo/'}
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
Go to the official documentation and look for the latest Network Kit version under the version change history. Once you have found it, add it to your app-level build.gradle as follows:
Code:
implementation 'com.huawei.hms:network-embedded:5.0.1.301'
We will use Moshi to parse the response from the web service; let's add the related dependencies and the kapt plugin to process the annotations.
Code:
plugins {
id 'com.android.application'
id 'kotlin-android'
id 'kotlin-kapt'
}
android{
...
}
dependencies {
...
implementation 'com.huawei.hms:network-embedded:5.0.1.301'
implementation 'com.squareup.moshi:moshi:1.11.0'
implementation "com.squareup.moshi:moshi-kotlin:1.11.0"
kapt 'com.squareup.moshi:moshi-kotlin-codegen:1.11.0'
...
}
To display the news in a list, we must add RecyclerView and CardView to our project and enable the DataBinding library to make our job easier.
Code:
android {
...
//Enabling DataBinding and ViewBinding
buildFeatures{
viewBinding true
dataBinding true
}
...
}
dependencies {
...
//MVVM dependencies
implementation 'androidx.lifecycle:lifecycle-extensions:2.2.0'
implementation "androidx.lifecycle:lifecycle-viewmodel-ktx:2.3.0"
//DataBinding dependency
kapt "com.android.databinding:compiler:3.1.4"
//Layout dependencies
implementation "androidx.recyclerview:recyclerview:1.1.0"
implementation "androidx.cardview:cardview:1.0.0"
...
}
We are ready to start the project.
Building the request
First of all, Network Kit must be initialized; let's create an Application class to do this job.
NetworkApplication.kt
Kotlin:
class NetworkApplication: Application() {
companion object {
const val TAG="Network Application"
}
override fun onCreate() {
super.onCreate()
initNetworkKit()
}
private fun initNetworkKit() {
// Initialize the object only once, upon the first call.
NetworkKit.init(this ,object : NetworkKit.Callback() {
override fun onResult(result: Boolean) {
if (result) {
Log.i(TAG, "Networkkit init success")
} else {
Log.i(TAG, "Networkkit init failed")
}
}
})
}
}
To make sure this code is executed upon each startup, we must specify this class inside the application element of our AndroidManifest.xml. Let's add the required permissions too.
XML:
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>
<uses-permission android:name="android.permission.INTERNET"/>
<application
android:name=".NetworkApplication"
android:requestLegacyExternalStorage="true"
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/Theme.NetworkKitDemo"
android:usesCleartextTraffic="true">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
Now, we must create the data models which Moshi will use to parse the response.
NewsResponse.kt
Kotlin:
@JsonClass(generateAdapter = true)
data class NewsResponse(
@Json(name = "status") val status: String?,
@Json(name = "totalResults") val totalResults: Int?,
@Json(name = "articles") val articles: List<Article>
)
@JsonClass(generateAdapter = true)
data class Article(
@Json(name = "source") val source: Source?,
@Json(name = "author") val author: String?,
@Json(name = "title") val title: String?,
@Json(name = "description") val description: String?,
@Json(name = "url") val url: String?,
@Json(name = "urlToImage") val urlToImage: String?,
@Json(name = "publishedAt") val publishedAt: String?,
@Json(name = "content") val content: String?
)
@JsonClass(generateAdapter = true)
data class Source(
@Json(name = "id") val id: String?,
@Json(name = "name") val name: String?
)
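As a quick sanity check of the generated adapters, you can parse a minimal payload directly (an illustrative snippet; the real parsing happens inside NetworkKitHelper below):
Kotlin:
val moshi = Moshi.Builder().build()
val adapter = moshi.adapter(NewsResponse::class.java)
// A minimal, valid payload in the shape returned by newsapi.org.
val parsed = adapter.fromJson("""{"status":"ok","totalResults":0,"articles":[]}""")
println(parsed?.status) // prints: ok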
Network Kit provides 2 operation modes. We will use the REST Client mode to get the top headlines in the user's country, and the HTTP Client mode to download the picture of each article. We will create a singleton object called NetworkKitHelper.
Let's take a look at the REST Client mode:
NetworkKitHelper.kt
Kotlin:
object NetworkKitHelper {
const val TAG: String = "HTTPClient"
//Your API key from newsapi.org
val apiKey = Keys.readApiKey()
fun createNewsClient(): NewsService {
val restClient = RestClient.Builder()
.httpClient(HttpClient.Builder().build())
.baseUrl("https://newsapi.org/v2/")//Specify the API base URL, this is useful if you will consume multiple paths of the same API
.build()
return restClient.create(NewsService::class.java)
}
//Declare a Request API
interface NewsService {
//Use the GET annotation to specify the path
@GET("top-headlines/")
fun getTopHeadlines(/* use the Query annotation to specify a query parameter in the request*/
@Query("apiKey") apiKey: String? = "",
@Query("country") country: String
): Submit<String?>?
}
fun loadTopHeadlines(sampleService: NewsService, listener: NewsClientListener?,country:String=Locale.getDefault().country) {
sampleService.getTopHeadlines(apiKey,country)?.enqueue(object : Callback<String?>() {
@Throws(IOException::class)
override fun onResponse(submit: Submit<String?>?, response: Response<String?>) {
// Obtain the response. This method will be called if the request is successful.
val body = response.body
body?.let {
try {
val moshi = Moshi.Builder().build()
val adapter = moshi.adapter(NewsResponse::class.java)
val news = adapter.fromJson(it)
news?.let { myNews ->
listener?.onNewsDownloaded(myNews.articles)
}
} catch (e: JSONException) {
Log.e("excepion", e.toString())
}
}
}
override fun onFailure(submit: Submit<String?>?, exception: Throwable?) {
// Obtain the response. This method will be called if the request fails.
Log.e("LoadTopHeadlines", "response onFailure = " + exception?.message)
}
})
}
interface NewsClientListener {
fun onNewsDownloaded(news: List<Article>)
}
}
Pay special attention to the loadTopHeadlines function. As you can see, there are no coroutines or threads defined; we are using the enqueue API instead. This way, Network Kit handles the request asynchronously for us.
If the API call is successful, we use Moshi to parse the response into data objects; otherwise, we are notified about the error in the onFailure callback. Once the response has been parsed, NetworkKitHelper reports the news to the specified NewsClientListener.
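Putting it together, a caller (our ViewModel, later on) only needs to provide a listener. A minimal sketch using only the names defined in this article:
Kotlin:
NetworkKitHelper.loadTopHeadlines(
    NetworkKitHelper.createNewsClient(),
    object : NetworkKitHelper.NewsClientListener {
        override fun onNewsDownloaded(news: List<Article>) {
            // Runs once the JSON has been parsed; update your UI/state here.
            Log.i("NewsDemo", "Downloaded ${news.size} articles")
        }
    })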
Let's add the code to download the preview pics:
NetworkKitHelper.kt (Adding)
Kotlin:
object NetworkKitHelper {
private val httpClient: HttpClient = createClient()
private fun createClient(): HttpClient {
return HttpClient.Builder()
.callTimeout(1000)
.connectTimeout(10000)
.build()
}
fun createRequest(url: String): Request {
return httpClient.newRequest()
.url(url)
.method("GET")
.build()
}
fun httpClientEnqueue(request: Request, listener: HttpClientListener? = null) {
httpClient.newSubmit(request).enqueue(object : Callback<ResponseBody?>() {
@Throws(IOException::class)
override fun onResponse(
submit: Submit<ResponseBody?>?,
response: Response<ResponseBody?>
) {
// Process the response if the request is successful.
Log.i(TAG, "response code:" + response.code)
response.body?.let {
listener?.onSuccess(it.bytes())
}
}
override fun onFailure(submit: Submit<ResponseBody?>?, throwable: Throwable?) {
// Process the exception if the request fails.
Log.w(TAG, "response onFailure = ${throwable?.message}")
}
})
}
interface HttpClientListener {
fun onSuccess(body: ByteArray)
}
}
As with the REST Client mode, we can enqueue HTTP requests and define a callback for each one. In this case, we receive a byte array, which will be used to create and display a bitmap.
Here we face a complication: if we try to store the bitmap in the same data class as the Article, Moshi will raise a reflection error at compilation time. To solve this, we define a new class that stores the article and is responsible for loading its bitmap. By doing so, we can show the news as soon as we get them; then, using the observer pattern, the bitmap is added to the view as soon as it's ready.
ArticleModel.kt
Kotlin:
class ArticleModel(val article: Article) : NetworkKitHelper.HttpClientListener {
private val _bitmap= MutableLiveData<Bitmap?>().apply{postValue(null)}
val bitmap: LiveData<Bitmap?> =_bitmap
init {
loadBitmap()
}
fun loadBitmap() {
article.urlToImage?.let{
val request=NetworkKitHelper.createRequest(it)
NetworkKitHelper.httpClientEnqueue(request, this)
}
}
override fun onSuccess(body: ByteArray) {
val bitmap= BitmapFactory.decodeByteArray(body, 0, body.size)
val resizedBitmap = Bitmap.createScaledBitmap(bitmap, 1000, 600, true)
_bitmap.postValue(resizedBitmap)
}
}
As soon as an instance of ArticleModel is created, it enqueues an asynchronous HTTP request for the preview pic. If the call is successful, we receive a ByteArray in the onSuccess callback, create our bitmap from it, and let the observer know the bitmap is ready to be displayed.
Sending the request
Let's create a ViewModel which will be responsible for invoking the API and storing the data. Here we use the observer pattern to let the observer know the articles are ready to be displayed.
MainViewModel.kt
Kotlin:
class MainViewModel : ViewModel(), NetworkKitHelper.NewsClientListener {
private val _articles = MutableLiveData<ArrayList<ArticleModel>>().apply { value = ArrayList() }
val articles: LiveData<ArrayList<ArticleModel>> = _articles
fun loadTopHeadlines(){
articles.value?.let{
if(it.isEmpty()) getTopHeadlines()
else return
}
}
private fun getTopHeadlines() {
NetworkKitHelper.loadTopHeadlines(NetworkKitHelper.createNewsClient(),this)
}
override fun onNewsDownloaded(news: List<Article>) {
val list=ArrayList<ArticleModel>()
for (article: Article in news) {
list.add(ArticleModel(article))
}
_articles.postValue(list)
}
}
To avoid downloading the news again when the user rotates the screen, we are defining the loadTopHeadlines function. It will only make the request if the list of articles is empty.
Displaying the Articles
We will use DataBinding to quickly display our news in a RecyclerView in the MainActivity. Let's take a look at the main layout:
activity_main.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<layout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
>
<data class="MainBinding"/>
<androidx.constraintlayout.widget.ConstraintLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<androidx.recyclerview.widget.RecyclerView
android:id="@+id/recycler"
android:layout_height="match_parent"
android:layout_width="match_parent"
app:layoutManager="androidx.recyclerview.widget.LinearLayoutManager"
/>
</androidx.constraintlayout.widget.ConstraintLayout>
</layout>
Now we must define the card which will be rendered for each article:
article_card.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<layout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:card_view="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools">
<data class="ArticleBinding">
<variable
name="item"
type="com.hms.demo.networkkitdemo.ArticleModel" />
</data>
<androidx.cardview.widget.CardView
android:id="@+id/card_view"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_gravity="center"
android:layout_marginHorizontal="5dp"
android:layout_marginVertical="5dp"
card_view:cardCornerRadius="15dp"
card_view:cardElevation="20dp"
android:foreground="?android:attr/selectableItemBackground"
android:clickable="true"
android:focusable="true">
<androidx.constraintlayout.widget.ConstraintLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:padding="10dp">
<com.google.android.material.imageview.ShapeableImageView
android:id="@+id/pic"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="5dp"
android:contentDescription="@string/desc"
card_view:layout_constraintEnd_toEndOf="parent"
card_view:layout_constraintHorizontal_bias="1.0"
card_view:layout_constraintStart_toStartOf="parent"
card_view:layout_constraintTop_toBottomOf="@+id/articleTitle"
card_view:shapeAppearanceOverlay="@style/roundedImageView"
tools:srcCompat="@tools:sample/avatars" />
<TextView
android:id="@+id/articleTitle"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:text="@{item.article.title}"
android:textAlignment="viewStart"
android:textSize="24sp"
android:textStyle="bold"
card_view:layout_constraintEnd_toEndOf="parent"
card_view:layout_constraintHorizontal_bias="0.498"
card_view:layout_constraintStart_toStartOf="parent"
card_view:layout_constraintTop_toTopOf="parent" />
<TextView
android:id="@+id/desc"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_marginTop="5dp"
android:text="@{item.article.description}"
android:textSize="20sp"
card_view:layout_constraintEnd_toEndOf="parent"
card_view:layout_constraintStart_toStartOf="parent"
card_view:layout_constraintTop_toBottomOf="@+id/pic" />
<TextView
android:id="@+id/source"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="15dp"
android:text="@{item.article.source.name}"
card_view:layout_constraintBottom_toBottomOf="parent"
card_view:layout_constraintStart_toStartOf="parent"
card_view:layout_constraintTop_toBottomOf="@+id/desc"
card_view:layout_constraintVertical_bias="0.09" />
</androidx.constraintlayout.widget.ConstraintLayout>
</androidx.cardview.widget.CardView>
</layout>
The ArticleBinding class is responsible for filling the view with the values in its ArticleModel instance. That's the magic of DataBinding.
As you may know, to display elements in a RecyclerView we need an adapter, so let's define it:
NewsAdapter.kt
Kotlin:
class NewsAdapter: RecyclerView.Adapter<NewsAdapter.NewsViewHolder>() {
var articles:List<ArticleModel> =ArrayList()
class NewsViewHolder(private val binding:ArticleBinding): RecyclerView.ViewHolder(binding.root) {
fun bind(item:ArticleModel){
binding.item=item
item.bitmap.observe(binding.root.context as LifecycleOwner){
it?.let{
binding.pic.setImageBitmap(it)
}
}
}
}
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): NewsViewHolder {
val inflater=LayoutInflater.from(parent.context)
val binding=ArticleBinding.inflate(inflater,parent,false)
return NewsViewHolder(binding)
}
override fun onBindViewHolder(holder: NewsViewHolder, position: Int) {
holder.bind(articles[position])
}
override fun getItemCount(): Int {
return articles.size
}
}
Pay special attention to the bind function of the NewsViewHolder class: from here we tell the ArticleBinding instance which information we want to display in the view. We also use the observer pattern to update the ImageView once the article's preview pic has been downloaded.
Finally, it is time to join everything together in the MainActivity:
MainActivity.kt
Kotlin:
class MainActivity : AppCompatActivity() {
companion object {
const val TAG="Main"
}
private lateinit var binding:MainBinding
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding= MainBinding.inflate(layoutInflater)
binding.lifecycleOwner = this
setContentView(binding.root)
val viewModel:MainViewModel=ViewModelProvider(this).get(MainViewModel::class.java)
val adapter=NewsAdapter()
viewModel.articles.observe(this){
adapter.articles=it
adapter.notifyDataSetChanged()
}
binding.recycler.adapter=adapter
viewModel.loadTopHeadlines()
}
}
Final result
Tips & Tricks
If your app will consume a REST API with Kotlin, it is better to use Moshi instead of Gson, because Moshi understands Kotlin's non-nullable types.
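For instance (a sketch): with a non-nullable Kotlin property, Moshi's generated adapter fails fast when the key is missing or null, while Gson's reflection-based deserialization can silently assign null to the non-null field and crash later at the point of use.
Kotlin:
@JsonClass(generateAdapter = true)
data class Headline(@Json(name = "title") val title: String) // non-nullable on purpose

// Moshi: fromJson("""{"title":null}""") throws a JsonDataException right here.
// Gson: would return Headline(title = null) and defer the crash to the point of use.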
If you will use API keys to authenticate your client with the server, it is better to use the NDK to hide your key and prevent it from being obtained through reverse engineering. Let's use Rahul Sharma's hiding method. (Make sure to download the Android NDK from the SDK Manager.)
1. Switch to the Project view and create a jni directory under the main directory.
2. Under the jni directory, add the following 3 files:
Android.mk
Code:
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_MODULE := keys
LOCAL_SRC_FILES := keys.c
include $(BUILD_SHARED_LIBRARY)
Application.mk
Code:
APP_ABI := all
Keys.c (Put here your API key)
Code:
#include <jni.h>
JNIEXPORT jstring JNICALL
Java_com_hms_demo_networkkitdemo_Keys_getApiKey(JNIEnv *env, jclass instance) {
return (*env)->NewStringUTF(env, "PUT_HERE_YOUR_API_KEY");
}
3. Switch back to the Android view and create a Keys Kotlin object.
Keys.kt
Kotlin:
object Keys {
init {
System.loadLibrary("keys")
}
private external fun getApiKey(): String?
public fun readApiKey(): String? { //use this method for String
return getApiKey()
}
}
4. Tell Gradle you will use the NDK by adding the following code inside the android block.
build.gradle (app-level)
Code:
plugins {
...
}
android {
...
externalNativeBuild {
ndkBuild {
path 'src/main/jni/Android.mk'
}
}
}
dependencies {
...
}
Finally, modify the NetworkKitHelper object to read the API key from the native library.
NetworkKitHelper.kt (Modifying)
Code:
object NetworkKitHelper {
val apiKey = Keys.readApiKey()
}
Conclusion
By using Network Kit, your app will be ready to perform requests over QUIC or HTTP/2 without writing extra code. The REST Client mode and its annotations help you quickly consume a REST API without worrying about threads or coroutines. Finally, the HTTP Client mode is useful for downloading preview images or any other content that is not JSON.
References
Read In Forum
Network Kit official Docs
Hiding Secret/Api key from reverse engineering in Android using NDK
Moshi
Hi, I have one question: if we use Network Kit, do we no longer need third-party libraries like Volley or Retrofit?
Is it faster and easier to use than the Retrofit library?

Book Reader Application Using General Text Recognition by Huawei HiAI in Android

Introduction
In this article, we will learn how to integrate Huawei General Text Recognition using Huawei HiAI, and we will build a book reader application.
About the application:
Users often get bored reading books. This application lets them listen to a book instead of reading it manually: they capture a photo of the book (or pick an image from the gallery) and listen to it like music, for example while travelling or in their free time.
Huawei General Text Recognition is based on OCR technology.
First, let us understand OCR.
What is optical character recognition (OCR)?
Optical Character Recognition (OCR) technology is a business solution for automating data extraction from printed or written text from a scanned document or image file and then converting the text into a machine-readable form to be used for data processing like editing or searching.
Now let us understand about General Text Recognition (GTR).
At the core of GTR is Optical Character Recognition (OCR) technology, which extracts text from screenshots and photos taken by the phone camera. For photos taken by the camera, this API can correct tilts, camera angles, reflections, and messy backgrounds to a certain degree. It can also be used for document and streetscape photography, among a wide range of usage scenarios, and it features strong anti-interference capability. This API works with on-device processing and a service connection.
Features
For photos: Provides text area detection and text recognition for Chinese, English, Japanese, Korean, Russian, Italian, Spanish, Portuguese, German, and French texts in multiple printing fonts. A wide range of scenarios are supported, and a high recognition accuracy can be achieved even under the influence of complex lighting condition, background, or more.
For screenshots: Optimizes text extraction algorithms based on the characteristics of screenshots captured on mobile phones. Currently, this function is available in the Chinese mainland supporting Chinese and English texts.
OCR features
Lightweight: This API greatly reduces the computing time and ROM space the algorithm model takes up, making your app more lightweight.
Customized hierarchical result return: You can choose to return the coordinates of text blocks, text lines, and text characters in the screenshot based on app requirements.
How to integrate General Text Recognition
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library
3. Client application development process.
Configure application on the AGC
Follow the steps
Step 1: Register as a developer in AppGallery Connect. If you are already a developer, ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HUAWEI HiAI is Huawei's AI computing platform: a mobile terminal-oriented artificial intelligence (AI) computing platform that constructs three layers of ecology: service capability openness, application capability openness, and chip capability openness. This three-layer open platform, integrating terminals, chips, and the cloud, brings a more extraordinary experience to users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the JAR dependencies to the app-level build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps
Step 1: Create an Android application in Android Studio (or any IDE you prefer).
Step 2: Add the app-level Gradle dependencies. In the project, choose Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root-level Gradle dependencies:
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permissions in AndroidManifest.xml.
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_INTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build the application.
Initialize all views.
Java:
private void initializeView() {
mPlayAudio = findViewById(R.id.playAudio);
mTxtViewResult = findViewById(R.id.result);
mImageView = findViewById(R.id.imgViewPicture);
}
Request the runtime permissions.
Java:
private void requestPermissions() {
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
int permission1 = ActivityCompat.checkSelfPermission(this,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
int permission2 = ActivityCompat.checkSelfPermission(this,
Manifest.permission.CAMERA);
if (permission1 != PackageManager.PERMISSION_GRANTED || permission2 != PackageManager
.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.CAMERA}, 0x0010);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (grantResults.length <= 0
|| grantResults[0] != PackageManager.PERMISSION_GRANTED) {
Toast.makeText(this, "Permission denied", Toast.LENGTH_SHORT).show();
}
}
Initialize VisionBase.
Java:
private void initVision() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
Log.e(TAG, " onServiceConnect");
}
@Override
public void onServiceDisconnect() {
Log.e(TAG, " onServiceDisconnect");
}
});
}
Initialize text-to-speech.
Java:
private void initializeTextToSpeech() {
textToSpeech = new TextToSpeech(getApplicationContext(), new TextToSpeech.OnInitListener() {
@Override
public void onInit(int status) {
if (status != TextToSpeech.ERROR) {
textToSpeech.setLanguage(Locale.UK);
}
}
});
}
Create a TextDetector instance.
Java:
mTextDetector = new TextDetector(this);
Define the VisionImage.
Java:
VisionImage image = VisionImage.fromBitmap(mBitmap);
Create an instance of the Text class.
Java:
final Text result = new Text();
Create and set the VisionTextConfiguration.
Java:
VisionTextConfiguration config = new VisionTextConfiguration.Builder()
.setAppType(VisionTextConfiguration.APP_NORMAL)
.setProcessMode(VisionTextConfiguration.MODE_IN)
.setDetectType(TextDetectType.TYPE_TEXT_DETECT_FOCUS_SHOOT)
.setLanguage(TextConfiguration.AUTO).build();
//Set vision configuration
mTextDetector.setVisionConfiguration(config);
Call the detect method to get the result.
Java:
int result_code = mTextDetector.detect(image, result, new VisionCallback<Text>() {
@Override
public void onResult(Text text) {
dismissDialog();
Message message = Message.obtain();
message.what = TYPE_SHOW_RESULT;
message.obj = text;
mHandler.sendMessage(message);
}
@Override
public void onError(int i) {
Log.d(TAG, "Callback: onError " + i);
mHandler.sendEmptyMessage(TYPE_TEXT_ERROR);
}
@Override
public void onProcessing(float v) {
Log.d(TAG, "Callback: onProcessing:" + v);
}
});
Create Handler
Java:
private final Handler mHandler = new Handler() {
@Override
public void handleMessage(Message msg) {
super.handleMessage(msg);
int status = msg.what;
Log.d(TAG, "handleMessage status = " + status);
switch (status) {
case TYPE_CHOOSE_PHOTO: {
if (mBitmap == null) {
Log.e(TAG, "bitmap is null");
return;
}
mImageView.setImageBitmap(mBitmap);
mTxtViewResult.setText("");
showDialog();
detectTex();
break;
}
case TYPE_SHOW_RESULT: {
Text result = (Text) msg.obj;
if (dialog != null && dialog.isShowing()) {
dialog.dismiss();
}
if (result == null) {
mTxtViewResult.setText("Failed to detect text lines, result is null.");
break;
}
String textValue = result.getValue();
Log.d(TAG, "text value: " + textValue);
StringBuffer textResult = new StringBuffer();
List<TextLine> textLines = result.getBlocks().get(0).getTextLines();
for (TextLine line : textLines) {
textResult.append(line.getValue() + " ");
}
Log.d(TAG, "OCR Detection succeeded.");
mTxtViewResult.setText(textResult.toString());
textToSpeechString = textResult.toString();
break;
}
case TYPE_TEXT_ERROR: {
mTxtViewResult.setText("Failed to detect text lines, result is null.");
break;
}
default:
break;
}
}
};
The complete code is as follows:
Java:
import android.Manifest;
import android.app.Activity;
import android.app.ProgressDialog;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.database.Cursor;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.net.Uri;
import android.os.Build;
import android.os.Handler;
import android.os.Message;
import android.provider.MediaStore;
import android.speech.tts.TextToSpeech;
import android.support.v4.app.ActivityCompat;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import com.huawei.hiai.vision.common.ConnectionCallback;
import com.huawei.hiai.vision.common.VisionBase;
import com.huawei.hiai.vision.common.VisionCallback;
import com.huawei.hiai.vision.common.VisionImage;
import com.huawei.hiai.vision.text.TextDetector;
import com.huawei.hiai.vision.visionkit.text.Text;
import com.huawei.hiai.vision.visionkit.text.TextDetectType;
import com.huawei.hiai.vision.visionkit.text.TextLine;
import com.huawei.hiai.vision.visionkit.text.config.TextConfiguration;
import com.huawei.hiai.vision.visionkit.text.config.VisionTextConfiguration;
import java.util.List;
import java.util.Locale;
public class MainActivity extends AppCompatActivity {
private static final String TAG = MainActivity.class.getSimpleName();
private static final int REQUEST_CHOOSE_PHOTO_CODE = 2;
private Bitmap mBitmap;
private ImageView mPlayAudio;
private ImageView mImageView;
private TextView mTxtViewResult;
protected ProgressDialog dialog;
private TextDetector mTextDetector;
Text imageText = null;
TextToSpeech textToSpeech;
String textToSpeechString = "";
private static final int TYPE_CHOOSE_PHOTO = 1;
private static final int TYPE_SHOW_RESULT = 2;
private static final int TYPE_TEXT_ERROR = 3;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
initializeView();
requestPermissions();
initVision();
initializeTextToSpeech();
}
private void initializeView() {
mPlayAudio = findViewById(R.id.playAudio);
mTxtViewResult = findViewById(R.id.result);
mImageView = findViewById(R.id.imgViewPicture);
}
private void initVision() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
Log.e(TAG, " onServiceConnect");
}
@Override
public void onServiceDisconnect() {
Log.e(TAG, " onServiceDisconnect");
}
});
}
private void initializeTextToSpeech() {
textToSpeech = new TextToSpeech(getApplicationContext(), new TextToSpeech.OnInitListener() {
@Override
public void onInit(int status) {
if (status != TextToSpeech.ERROR) {
textToSpeech.setLanguage(Locale.UK);
}
}
});
}
public void onChildClick(View view) {
switch (view.getId()) {
case R.id.btnSelect: {
Log.d(TAG, "Select an image");
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_CHOOSE_PHOTO_CODE);
break;
}
case R.id.playAudio: {
if (textToSpeechString != null && !textToSpeechString.isEmpty())
textToSpeech.speak(textToSpeechString, TextToSpeech.QUEUE_FLUSH, null);
break;
}
}
}
private void detectText() {
/* create a TextDetector instance firstly */
mTextDetector = new TextDetector(this);
/*Define VisionImage and transfer the Bitmap image to be detected*/
VisionImage image = VisionImage.fromBitmap(mBitmap);
/*Define the Text class.*/
final Text result = new Text();
/*Use VisionTextConfiguration to select the type of the image to be called. */
VisionTextConfiguration config = new VisionTextConfiguration.Builder()
.setAppType(VisionTextConfiguration.APP_NORMAL)
.setProcessMode(VisionTextConfiguration.MODE_IN)
.setDetectType(TextDetectType.TYPE_TEXT_DETECT_FOCUS_SHOOT)
.setLanguage(TextConfiguration.AUTO).build();
//Set vision configuration
mTextDetector.setVisionConfiguration(config);
/*Call the detect method of TextDetector to obtain the result*/
int result_code = mTextDetector.detect(image, result, new VisionCallback<Text>() {
@Override
public void onResult(Text text) {
dismissDialog();
Message message = Message.obtain();
message.what = TYPE_SHOW_RESULT;
message.obj = text;
mHandler.sendMessage(message);
}
@Override
public void onError(int i) {
Log.d(TAG, "Callback: onError " + i);
mHandler.sendEmptyMessage(TYPE_TEXT_ERROR);
}
@Override
public void onProcessing(float v) {
Log.d(TAG, "Callback: onProcessing:" + v);
}
});
}
private void showDialog() {
if (dialog == null) {
dialog = new ProgressDialog(MainActivity.this);
dialog.setTitle("Detecting text...");
dialog.setMessage("Please wait...");
dialog.setIndeterminate(true);
dialog.setCancelable(false);
}
dialog.show();
}
private final Handler mHandler = new Handler() {
@Override
public void handleMessage(Message msg) {
super.handleMessage(msg);
int status = msg.what;
Log.d(TAG, "handleMessage status = " + status);
switch (status) {
case TYPE_CHOOSE_PHOTO: {
if (mBitmap == null) {
Log.e(TAG, "bitmap is null");
return;
}
mImageView.setImageBitmap(mBitmap);
mTxtViewResult.setText("");
showDialog();
detectText();
break;
}
case TYPE_SHOW_RESULT: {
Text result = (Text) msg.obj;
if (dialog != null && dialog.isShowing()) {
dialog.dismiss();
}
if (result == null) {
mTxtViewResult.setText("Failed to detect text lines, result is null.");
break;
}
String textValue = result.getValue();
Log.d(TAG, "text value: " + textValue);
StringBuffer textResult = new StringBuffer();
if (result.getBlocks() == null || result.getBlocks().isEmpty()) {
mTxtViewResult.setText("No text blocks detected.");
break;
}
List<TextLine> textLines = result.getBlocks().get(0).getTextLines();
for (TextLine line : textLines) {
textResult.append(line.getValue() + " ");
}
Log.d(TAG, "OCR Detection succeeded.");
mTxtViewResult.setText(textResult.toString());
textToSpeechString = textResult.toString();
break;
}
case TYPE_TEXT_ERROR: {
mTxtViewResult.setText("Failed to detect text lines.");
break;
}
default:
break;
}
}
};
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_CHOOSE_PHOTO_CODE && resultCode == Activity.RESULT_OK) {
if (data == null) {
return;
}
Uri selectedImage = data.getData();
getBitmap(selectedImage);
}
}
private void requestPermissions() {
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
int permission1 = ActivityCompat.checkSelfPermission(this,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
int permission2 = ActivityCompat.checkSelfPermission(this,
Manifest.permission.CAMERA);
if (permission1 != PackageManager.PERMISSION_GRANTED || permission2 != PackageManager
.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.CAMERA}, 0x0010);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
private void getBitmap(Uri imageUri) {
String[] pathColumn = {MediaStore.Images.Media.DATA};
Cursor cursor = getContentResolver().query(imageUri, pathColumn, null, null, null);
if (cursor == null) return;
cursor.moveToFirst();
int columnIndex = cursor.getColumnIndex(pathColumn[0]);
/* get image path */
String picturePath = cursor.getString(columnIndex);
cursor.close();
mBitmap = BitmapFactory.decodeFile(picturePath);
if (mBitmap == null) {
return;
}
//You can set image here
//mImageView.setImageBitmap(mBitmap);
// You can pass it handler as well
mHandler.sendEmptyMessage(TYPE_CHOOSE_PHOTO);
mTxtViewResult.setText("");
mPlayAudio.setEnabled(true);
}
private void dismissDialog() {
if (dialog != null && dialog.isShowing()) {
dialog.dismiss();
}
}
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (grantResults.length <= 0
|| grantResults[0] != PackageManager.PERMISSION_GRANTED) {
Toast.makeText(this, "Permission denied", Toast.LENGTH_SHORT).show();
}
}
@Override
protected void onDestroy() {
super.onDestroy();
/* release ocr instance and free the npu resources*/
if (mTextDetector != null) {
mTextDetector.release();
}
dismissDialog();
if (mBitmap != null) {
mBitmap.recycle();
}
}
}
activity_main.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:fitsSystemWindows="true"
android:orientation="vertical"
android:background="@android:color/darker_gray">
<android.support.v7.widget.Toolbar
android:layout_width="match_parent"
android:layout_height="50dp"
android:background="#ff0000"
android:elevation="10dp">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="horizontal">
<TextView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:text="Book Reader"
android:layout_gravity="center"
android:gravity="center|start"
android:layout_weight="1"
android:textColor="@android:color/white"
android:textStyle="bold"
android:textSize="20sp"/>
<ImageView
android:layout_width="40dp"
android:layout_height="40dp"
android:src="@drawable/ic_baseline_play_circle_outline_24"
android:layout_gravity="center|end"
android:layout_marginEnd="10dp"
android:id="@+id/playAudio"
android:padding="5dp"/>
</LinearLayout>
</android.support.v7.widget.Toolbar>
<ScrollView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:fitsSystemWindows="true">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
android:background="@android:color/darker_gray"
>
<android.support.v7.widget.CardView
android:layout_width="match_parent"
android:layout_height="wrap_content"
app:cardCornerRadius="5dp"
app:cardElevation="10dp"
android:layout_marginStart="10dp"
android:layout_marginEnd="10dp"
android:layout_marginTop="20dp"
android:layout_gravity="center">
<ImageView
android:id="@+id/imgViewPicture"
android:layout_width="300dp"
android:layout_height="300dp"
android:layout_margin="8dp"
android:layout_gravity="center_horizontal"
android:scaleType="fitXY" />
</android.support.v7.widget.CardView>
<android.support.v7.widget.CardView
android:layout_width="match_parent"
android:layout_height="wrap_content"
app:cardCornerRadius="5dp"
app:cardElevation="10dp"
android:layout_marginStart="10dp"
android:layout_marginEnd="10dp"
android:layout_marginTop="10dp"
android:layout_gravity="center"
android:layout_marginBottom="20dp">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical"
>
<TextView
android:layout_margin="5dp"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textColor="@android:color/black"
android:text="Text on the image"
android:textStyle="normal"
/>
<TextView
android:id="@+id/result"
android:layout_margin="5dp"
android:layout_marginBottom="20dp"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textSize="18sp"
android:textColor="#ff0000"/>
</LinearLayout>
</android.support.v7.widget.CardView>
<Button
android:id="@+id/btnSelect"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:onClick="onChildClick"
android:layout_marginStart="10dp"
android:layout_marginEnd="10dp"
android:layout_marginBottom="10dp"
android:text="[USER=936943]@string[/USER]/select_picture"
android:background="@drawable/round_button_bg"
android:textColor="@android:color/white"
android:textAllCaps="false"/>
</LinearLayout>
</ScrollView>
</LinearLayout>
Result
Tips and Tricks
Maximum width and height: 1440 px and 15210 px (if the image is larger, error code 200 is returned).
Recommended photo specifications for optimal recognition accuracy:
Resolution > 720p
Aspect ratio < 2:1
If you are capturing an image from the camera or gallery, make sure your app has the camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder, as shown in the snippet after these tips.
Check that the dependencies are added properly.
The latest HMS Core APK is required.
Min SDK is 21; otherwise you will get a manifest merge issue.
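Since the HiAI Engine libraries are local .aar files rather than Maven artifacts, Gradle has to be told to pick them up from the libs folder. A minimal sketch of the app-level build.gradle entries, assuming the files were copied into app/libs (the same pattern appears in the next article's setup):
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
repositories {
    flatDir {
        dirs 'libs'
    }
}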
Conclusion
In this article, we have learnt the following concepts.
What is OCR?
General text recognition (GTR)
Features of GTR
Features of OCR
How to integrate General Text Recognition using Huawei HiAI
How to apply for Huawei HiAI
How to build the application
Reference
General Text Recognition
Apply for Huawei HiAI
Happy coding
Pygmy Collection Application Part 7 (Document Skew correction Huawei HiAI)
Introduction​
If you are new to this application, please follow my previous articles.
Pygmy collection application Part 1 (Account kit)
Intermediate: Pygmy Collection Application Part 2 (Ads Kit)
Intermediate: Pygmy Collection Application Part 3 (Crash service)
Intermediate: Pygmy Collection Application Part 4 (Analytics Kit Custom Events)
Intermediate: Pygmy Collection Application Part 5 (Safety Detect)
Intermediate: Pygmy Collection Application Part 6 (Room database)
In this article, we will learn how to integrate Huawei document skew correction using Huawei HiAI into the Pygmy collection finance application.
In the Pygmy collection application, agents update KYC documents for customers, and the captured document must be clean and properly aligned, so we will integrate document skew correction to adjust the image angle.
Users commonly struggle while uploading documents or filling in forms because of poorly captured images. This application lets them take a picture with the camera or pick one from the gallery, and it automatically detects the document in the image.
Document skew correction improves the document photography process by automatically identifying the document in an image. It returns the position of the document in the original image.
Document skew correction also adjusts the shooting angle of the document based on the position information of the document in the original image. This function performs very well in scenarios where old photos, paper letters, and drawings are photographed for electronic storage.
Features
Document detection: Recognizes documents in images and returns the location information of the documents in the original images.
Document correction: Corrects the document shooting angle based on the document location information in the original images, where areas to be corrected can be customized.
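Before walking through the project setup and the full activity, here is a minimal Kotlin sketch of the two-step flow: detect the document corners first, then pass them back for correction. The bitmap parameter, the function name, and the callback are assumptions for illustration; the full Java activity later in this article shows the production flow, where the user can adjust the detected corners before correcting.
Code:
import android.graphics.Bitmap
import com.huawei.hms.mlsdk.common.MLFrame
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerFactory
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerSetting
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionCoordinateInput

// Minimal sketch: detect the document corners, then feed them back for correction.
fun correctDocumentSkew(bitmap: Bitmap, onCorrected: (Bitmap) -> Unit) {
    val setting = MLDocumentSkewCorrectionAnalyzerSetting.Factory().create()
    val analyzer = MLDocumentSkewCorrectionAnalyzerFactory.getInstance()
        .getDocumentSkewCorrectionAnalyzer(setting)
    val frame = MLFrame.fromBitmap(bitmap)
    analyzer.asyncDocumentSkewDetect(frame).addOnSuccessListener { detectResult ->
        if (detectResult.resultCode == 0) {
            // Use the four detected corners as-is; a real app can let the user adjust them first.
            val corners = listOf(
                detectResult.leftTopPosition,
                detectResult.rightTopPosition,
                detectResult.rightBottomPosition,
                detectResult.leftBottomPosition
            )
            val input = MLDocumentSkewCorrectionCoordinateInput(corners)
            analyzer.asyncDocumentSkewCorrect(frame, input).addOnSuccessListener { correction ->
                if (correction.resultCode == 0) {
                    onCorrected(correction.corrected) // the deskewed bitmap
                }
            }
        }
    }
}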
How to integrate Document Skew Correction
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library.
3. Client application development process.
Configure application on the AGC
Follow the steps.
Step 1: We need to register as a developer account in AppGallery Connect. If you are already a developer ignore this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HUAWEI HiAI is Huawei's mobile terminal-oriented artificial intelligence (AI) computing platform. It builds an open ecosystem at three layers: service capability openness, application capability openness, and chip capability openness. This three-layer open platform, which integrates terminals, chips, and the cloud, brings a richer experience to users and developers.
How to apply for HiAI Engine?
Follow the steps.
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip downloaded SDK and add into your android project under libs folder.
Step 7: Add the .aar/.jar file dependencies to the app-level build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps.
Step 1: Create an Android application in the Android studio (Any IDE which is your favorite).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permission in AndroidManifest.xml
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_INTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Select image dialog
Java:
private void selectImage() {
final CharSequence[] items = {"Take Photo", "Choose from Library",
"Cancel"};
AlertDialog.Builder builder = new AlertDialog.Builder(KycUpdateActivity.this);
builder.setTitle("Add Photo!");
builder.setItems(items, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int item) {
boolean result = PygmyUtils.checkPermission(KycUpdateActivity.this);
if (items[item].equals("Take Photo")) {
/*userChoosenTask = "Take Photo";
if (result)
cameraIntent();*/
operate_type = TAKE_PHOTO;
requestPermission(Manifest.permission.CAMERA);
} else if (items[item].equals("Choose from Library")) {
/* userChoosenTask = "Choose from Library";
if (result)
galleryIntent();*/
operate_type = SELECT_ALBUM;
requestPermission(Manifest.permission.READ_EXTERNAL_STORAGE);
} else if (items[item].equals("Cancel")) {
dialog.dismiss();
}
}
});
builder.show();
}
Open Document Skew correction activity
Java:
private void startDocumentSkewCorrectionActivity() {
Intent intent = new Intent(KycUpdateActivity.this, DocumentSkewCorrectionActivity.class);
intent.putExtra("operate_type", operate_type);
startActivityForResult(intent, DOCUMENT_SKEW_CORRECTION_REQUEST);
}
DocumentSkewCorrectionActivity.java
Java:
import androidx.annotation.Nullable;
import androidx.appcompat.app.AppCompatActivity;
import android.app.Activity;
import android.app.AlertDialog;
import android.app.ProgressDialog;
import android.content.ContentValues;
import android.content.DialogInterface;
import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.graphics.Point;
import android.media.ExifInterface;
import android.net.Uri;
import android.os.Bundle;
import android.os.Handler;
import android.provider.MediaStore;
import android.util.Log;
import android.view.View;
import android.widget.ImageButton;
import android.widget.ImageView;
import android.widget.RelativeLayout;
import android.widget.TextView;
import android.widget.Toast;
import com.huawei.hmf.tasks.OnFailureListener;
import com.huawei.hmf.tasks.OnSuccessListener;
import com.huawei.hmf.tasks.Task;
import com.huawei.hms.mlsdk.common.MLFrame;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzer;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerFactory;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionAnalyzerSetting;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionCoordinateInput;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewCorrectionResult;
import com.huawei.hms.mlsdk.dsc.MLDocumentSkewDetectResult;
import com.shea.pygmycollection.R;
import com.shea.pygmycollection.customview.DocumentCorrectImageView;
import com.shea.pygmycollection.utils.FileUtils;
import com.shea.pygmycollection.utils.UserDataUtils;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
public class DocumentSkewCorrectionActivity extends AppCompatActivity implements View.OnClickListener {
private static final String TAG = "SuperResolutionActivity";
private static final int REQUEST_SELECT_IMAGE = 1000;
private static final int REQUEST_TAKE_PHOTO = 1;
private ImageView desImageView;
private ImageButton adjustImgButton;
private Bitmap srcBitmap;
private Bitmap getCompressesBitmap;
private Uri imageUri;
private MLDocumentSkewCorrectionAnalyzer analyzer;
private Bitmap corrected;
private ImageView back;
private Task<MLDocumentSkewCorrectionResult> correctionTask;
private DocumentCorrectImageView documetScanView;
private Point[] _points;
private RelativeLayout layout_image;
private MLFrame frame;
TextView selectTv;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_document_skew_correction);
//setStatusBarColor(this, R.color.black);
analyzer = createAnalyzer();
adjustImgButton = findViewById(R.id.adjust);
layout_image = findViewById(R.id.layout_image);
desImageView = findViewById(R.id.des_image);
documetScanView = findViewById(R.id.iv_documetscan);
back = findViewById(R.id.back);
selectTv = findViewById(R.id.selectTv);
adjustImgButton.setOnClickListener(this);
findViewById(R.id.back).setOnClickListener(this);
selectTv.setOnClickListener(this);
findViewById(R.id.rl_chooseImg).setOnClickListener(this);
back.setOnClickListener(this);
int operate_type = getIntent().getIntExtra("operate_type", 101);
if (operate_type == 101) {
takePhoto();
} else if (operate_type == 102) {
selectLocalImage();
}
}
private String[] chooseTitles;
@Override
public void onClick(View v) {
if (v.getId() == R.id.adjust) {
List<Point> points = new ArrayList<>();
Point[] cropPoints = documetScanView.getCropPoints();
if (cropPoints != null) {
points.add(cropPoints[0]);
points.add(cropPoints[1]);
points.add(cropPoints[2]);
points.add(cropPoints[3]);
}
MLDocumentSkewCorrectionCoordinateInput coordinateData = new MLDocumentSkewCorrectionCoordinateInput(points);
getCorrectionResult(coordinateData, frame);
} else if (v.getId() == R.id.rl_chooseImg) {
chooseTitles = new String[]{getResources().getString(R.string.take_photo), getResources().getString(R.string.select_from_album)};
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setItems(chooseTitles, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialogInterface, int position) {
if (position == 0) {
takePhoto();
} else {
selectLocalImage();
}
}
});
builder.create().show();
} else if (v.getId() == R.id.selectTv) {
if (corrected == null) {
Toast.makeText(this, "Document Skew correction is not yet success", Toast.LENGTH_SHORT).show();
return;
} else {
ProgressDialog pd = new ProgressDialog(this);
pd.setMessage("Please wait...");
pd.show();
//UserDataUtils.saveBitMap(this, corrected);
Intent intent = new Intent();
intent.putExtra("status", "success");
new Handler().postDelayed(new Runnable() {
@Override
public void run() {
if (pd != null && pd.isShowing()) {
pd.dismiss();
}
setResult(Activity.RESULT_OK, intent);
finish();
}
}, 3000);
}
} else if (v.getId() == R.id.back) {
finish();
}
}
private MLDocumentSkewCorrectionAnalyzer createAnalyzer() {
MLDocumentSkewCorrectionAnalyzerSetting setting = new MLDocumentSkewCorrectionAnalyzerSetting
.Factory()
.create();
return MLDocumentSkewCorrectionAnalyzerFactory.getInstance().getDocumentSkewCorrectionAnalyzer(setting);
}
private void takePhoto() {
layout_image.setVisibility(View.GONE);
Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
if (takePictureIntent.resolveActivity(this.getPackageManager()) != null) {
ContentValues values = new ContentValues();
values.put(MediaStore.Images.Media.TITLE, "New Picture");
values.put(MediaStore.Images.Media.DESCRIPTION, "From Camera");
this.imageUri = this.getContentResolver().insert(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, values);
takePictureIntent.putExtra(MediaStore.EXTRA_OUTPUT, this.imageUri);
this.startActivityForResult(takePictureIntent, DocumentSkewCorrectionActivity.this.REQUEST_TAKE_PHOTO);
}
}
private void selectLocalImage() {
layout_image.setVisibility(View.GONE);
Intent intent = new Intent(Intent.ACTION_PICK, null);
intent.setDataAndType(MediaStore.Images.Media.EXTERNAL_CONTENT_URI, "image/*");
startActivityForResult(intent, REQUEST_SELECT_IMAGE);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, @Nullable Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_OK) {
imageUri = data.getData();
try {
if (imageUri != null) {
srcBitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri);
String realPathFromURI = getRealPathFromURI(imageUri);
int i = readPictureDegree(realPathFromURI);
Bitmap spBitmap = rotateImageView(i, srcBitmap);
Matrix matrix = new Matrix();
matrix.setScale(0.5f, 0.5f);
getCompressesBitmap = Bitmap.createBitmap(spBitmap, 0, 0, spBitmap.getWidth(),
spBitmap.getHeight(), matrix, true);
reloadAndDetectImage();
}
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
} else if (requestCode == REQUEST_TAKE_PHOTO && resultCode == Activity.RESULT_OK) {
try {
if (imageUri != null) {
srcBitmap = MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri);
String realPathFromURI = getRealPathFromURI(imageUri);
int i = readPictureDegree(realPathFromURI);
Bitmap spBitmap = rotateImageView(i, srcBitmap);
Matrix matrix = new Matrix();
matrix.setScale(0.5f, 0.5f);
getCompressesBitmap = Bitmap.createBitmap(spBitmap, 0, 0, spBitmap.getWidth(),
spBitmap.getHeight(), matrix, true);
reloadAndDetectImage();
}
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
} else if (requestCode == REQUEST_SELECT_IMAGE && resultCode == Activity.RESULT_CANCELED) {
finish();
}
}
private void reloadAndDetectImage() {
if (imageUri == null) {
return;
}
frame = MLFrame.fromBitmap(getCompressesBitmap);
Task<MLDocumentSkewDetectResult> task = analyzer.asyncDocumentSkewDetect(frame);
task.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewDetectResult>() {
public void onSuccess(MLDocumentSkewDetectResult result) {
if (result.getResultCode() != 0) {
corrected = null;
Toast.makeText(DocumentSkewCorrectionActivity.this, "The picture does not meet the requirements.", Toast.LENGTH_SHORT).show();
} else {
// Recognition success.
Point leftTop = result.getLeftTopPosition();
Point rightTop = result.getRightTopPosition();
Point leftBottom = result.getLeftBottomPosition();
Point rightBottom = result.getRightBottomPosition();
_points = new Point[4];
_points[0] = leftTop;
_points[1] = rightTop;
_points[2] = rightBottom;
_points[3] = leftBottom;
layout_image.setVisibility(View.GONE);
documetScanView.setImageBitmap(getCompressesBitmap);
documetScanView.setPoints(_points);
}
}
}).addOnFailureListener(new OnFailureListener() {
public void onFailure(Exception e) {
Toast.makeText(getApplicationContext(), e.getMessage(), Toast.LENGTH_SHORT).show();
}
});
}
private void getCorrectionResult(MLDocumentSkewCorrectionCoordinateInput coordinateData, MLFrame frame) {
try {
correctionTask = analyzer.asyncDocumentSkewCorrect(frame, coordinateData);
} catch (Exception e) {
Log.e(TAG, "The image does not meet the detection requirements.");
}
try {
correctionTask.addOnSuccessListener(new OnSuccessListener<MLDocumentSkewCorrectionResult>() {
@Override
public void onSuccess(MLDocumentSkewCorrectionResult refineResult) {
// The check is successful.
if (refineResult != null && refineResult.getResultCode() == 0) {
corrected = refineResult.getCorrected();
layout_image.setVisibility(View.VISIBLE);
desImageView.setImageBitmap(corrected);
UserDataUtils.saveBitMap(DocumentSkewCorrectionActivity.this, corrected);
} else {
Toast.makeText(DocumentSkewCorrectionActivity.this, "The check fails.", Toast.LENGTH_SHORT).show();
}
}
}).addOnFailureListener(new OnFailureListener() {
@Override
public void onFailure(Exception e) {
Toast.makeText(DocumentSkewCorrectionActivity.this, "The check fails.", Toast.LENGTH_SHORT).show();
}
});
} catch (Exception e) {
Log.e(TAG, "Please set an image.");
}
}
@Override
protected void onDestroy() {
super.onDestroy();
if (srcBitmap != null) {
srcBitmap.recycle();
}
if (getCompressesBitmap != null) {
getCompressesBitmap.recycle();
}
if (corrected != null) {
corrected.recycle();
}
if (analyzer != null) {
try {
analyzer.stop();
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
}
}
public static int readPictureDegree(String path) {
int degree = 0;
try {
ExifInterface exifInterface = new ExifInterface(path);
int orientation = exifInterface.getAttributeInt(
ExifInterface.TAG_ORIENTATION,
ExifInterface.ORIENTATION_NORMAL);
switch (orientation) {
case ExifInterface.ORIENTATION_ROTATE_90:
degree = 90;
break;
case ExifInterface.ORIENTATION_ROTATE_180:
degree = 180;
break;
case ExifInterface.ORIENTATION_ROTATE_270:
degree = 270;
break;
}
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
return degree;
}
private String getRealPathFromURI(Uri contentURI) {
String result;
result = FileUtils.getFilePathByUri(this, contentURI);
return result;
}
public static Bitmap rotateImageView(int angle, Bitmap bitmap) {
Matrix matrix = new Matrix();
matrix.postRotate(angle);
Bitmap resizedBitmap = Bitmap.createBitmap(bitmap, 0, 0,
bitmap.getWidth(), bitmap.getHeight(), matrix, true);
return resizedBitmap;
}
}
activity_document_skew_correction.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="#000000"
tools:ignore="MissingDefaultResource">
<LinearLayout
android:id="@+id/linear_views"
android:layout_width="match_parent"
android:layout_height="80dp"
android:layout_alignParentBottom="true"
android:layout_marginBottom="10dp"
android:orientation="horizontal">
<RelativeLayout
android:id="@+id/rl_chooseImg"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_weight="1">
<ImageView
android:layout_width="25dp"
android:layout_height="25dp"
android:layout_centerInParent="true"
android:src="@drawable/add_picture"
app:tint="@color/colorPrimary" />
</RelativeLayout>
<RelativeLayout
android:id="@+id/rl_adjust"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_weight="1">
<ImageButton
android:id="@+id/adjust"
android:layout_width="70dp"
android:layout_height="70dp"
android:layout_centerInParent="true"
android:background="@drawable/ic_baseline_adjust_24" />
</RelativeLayout>
<RelativeLayout
android:id="@+id/rl_help"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_weight="1">
<ImageView
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_centerInParent="true"
android:src="@drawable/back"
android:visibility="invisible" />
</RelativeLayout>
</LinearLayout>
<RelativeLayout
android:id="@+id/rl_navigation"
android:layout_width="match_parent"
android:layout_height="50dp"
android:background="@color/colorPrimary">
<ImageButton
android:id="@+id/back"
android:layout_width="30dp"
android:layout_height="30dp"
android:layout_centerVertical="true"
android:layout_marginLeft="20dp"
android:layout_marginTop="@dimen/icon_back_margin"
android:background="@drawable/back" />
<TextView
android:id="@+id/selectTv"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentEnd="true"
android:layout_centerVertical="true"
android:layout_marginEnd="10dp"
android:fontFamily="@font/montserrat_bold"
android:padding="@dimen/hiad_10_dp"
android:text="Select Doc"
android:textColor="@color/white" />
</RelativeLayout>
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerHorizontal="true"
android:layout_marginTop="99dp"
android:layout_marginBottom="100dp">
<com.shea.pygmycollection.customview.DocumentCorrectImageView
android:id="@+id/iv_documetscan"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerInParent="true"
android:padding="20dp"
app:LineColor="@color/colorPrimary" />
</RelativeLayout>
<RelativeLayout
android:id="@+id/layout_image"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerHorizontal="true"
android:layout_marginTop="99dp"
android:layout_marginBottom="100dp"
android:background="#000000"
android:visibility="gone">
<ImageView
android:id="@+id/des_image"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_centerInParent="true"
android:padding="20dp" />
</RelativeLayout>
</RelativeLayout>
Result​
Tips and Tricks​
Recommended image width and height: 1080 px and 2560 px.
Multi-thread invoking is currently not supported.
The document detection and correction API can only be called by 64-bit apps.
If you are capturing an image from the camera or gallery, make sure your app has the camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar and huawei-hiai-pdk-1.0.0.aar files to the libs folder.
Check that the dependencies are added properly.
The latest HMS Core APK is required.
Min SDK is 21; otherwise you will get a manifest merge issue.
Conclusion​
In this article, we have built an application that detects the document in an image, corrects its skew, and displays the result. We have learnt the following concepts.
1. What is Document skew correction?
2. Feature of Document skew correction.
3. How to integrate Document Skew correction using Huawei HiAI?
4. How to apply for Huawei HiAI?
5. How to build the application?
Reference​
Document skew correction
Apply for Huawei HiAI​

Enter the attendance timings using Room Database and Huawei Analytics Kit in Attendance Tracker Android app (Kotlin) – Part 5
Introduction
In this article, we will learn how to record the In Time and Out Time of users using the Room database, and also how to integrate the Analytics Kit for event tracking.
Room is an ORM (Object Relational Mapping) library. In other words, Room maps our database objects to Kotlin/Java objects. Room provides an abstraction layer over SQLite to allow fluent database access while harnessing the full power of SQLite. With Room you can create SQLite databases and perform create, read, update, and delete operations.
I will provide a series of articles on this Attendance Tracker app; in upcoming articles I will integrate other Huawei kits.
If you are new to this application, follow my previous articles
Beginner: Integration of Huawei Account Kit of Obtaining Icon Resources feature in Attendance Tracker Android app (Kotlin) - Part 1
Beginner: Find Log in via SMS and Forgot password? Features using Huawei Account Kit, and Ads Kit in Attendance Tracker Android app (Kotlin) – Part 2
Beginner: Find the CRUD operations in Room Database in Attendance Tracker Android app (Kotlin) – Part 3
Beginner: Integration of Huawei Crash Service in Attendance Tracker Android app (Kotlin) – Part 4
Analytics Kit
HUAWEI Analytics Kit provides analysis models to understand user behaviour and gain in-depth insights into users, products, and content. It helps you gain insights into user behaviour on different platforms based on the user behaviour events and user attributes reported through apps.
AppGallery Connect
Find Analytics on the AppGallery Connect dashboard.
Choose My Projects > Huawei Analytics > Overview > Project overview.
Project overview displays the core indicators of the current project, such as the number of new users, user activity, user acquisition, user revisit, new user retention, active user retention, user characteristics, and popular pages, providing a quick overview of the users and how they are using your app.
Requirements
1. Any operating system (MacOS, Linux and Windows).
2. Must have a Huawei phone with HMS 4.0.0.300 or later.
3. Must have a laptop or desktop with Android Studio, Jdk 1.8, SDK platform 26 and Gradle 4.6 and above installed.
4. Minimum API Level 24 is required.
5. Required EMUI 9.0.0 and later version devices.
How to integrate HMS Dependencies
1. First register as Huawei developer and complete identity verification in Huawei developers website, refer to register a Huawei ID.
2. Create a project in android studio, refer Creating an Android Studio Project.
3. Generate a SHA-256 certificate fingerprint.
4. To generate the SHA-256 certificate fingerprint, click Gradle in the upper right corner of the Android project, choose Project Name > Tasks > android, and then click signingReport, as follows.
Note: Project Name depends on the user created name.
5. Create an App in AppGallery Connect.
6. Download the agconnect-services.json file from App information, copy and paste in android Project under app directory, as follows.
7. Enter SHA-256 certificate fingerprint and click Save button, as follows.
Note: Above steps from Step 1 to 7 is common for all Huawei Kits.
8. Click Manage APIs tab and enable HUAWEI Analytics.
9. Add the below maven URL in build.gradle(Project) file under the repositories of buildscript, dependencies and allprojects, refer Add Configuration.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.6.0.300'
10. Add the below plugin and dependencies in build.gradle(Module) file.
Code:
apply plugin: 'com.huawei.agconnect'
apply plugin: 'kotlin-kapt'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.6.0.300'
// Room Database
implementation "androidx.room:room-runtime:2.4.2"
kapt "androidx.room:room-compiler:2.4.2"
implementation "androidx.room:room-ktx:2.4.2"
androidTestImplementation "androidx.room:room-testing:2.4.2"
// Lifecycle components
implementation "androidx.lifecycle:lifecycle-extensions:2.2.0"
implementation "androidx.lifecycle:lifecycle-viewmodel-ktx:2.4.1"
// Recyclerview
implementation 'androidx.recyclerview:recyclerview:1.2.1'
// Huawei Analytics
implementation 'com.huawei.hms:hianalytics:6.4.0.300'
11. Now Sync the gradle.
12. Add the required permission to the AndroidManifest.xml file.
XML:
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
Let us move to development
I have created a project in Android Studio with an empty activity; let us start coding.
Initialize Huawei Analytics and enable it.
Code:
// Initialize the Analytics
var mInstance: HiAnalyticsInstance? = null
// Initialize the Analytics function
initAna()
private fun initAna() {
// Enable Analytics Kit Log
HiAnalyticsTools.enableLog()
// Generate the Analytics Instance
mInstance = HiAnalytics.getInstance(this)
// Enable collection capability
mInstance?.setAnalyticsEnabled(true)
}
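Enabling collection alone does not report anything interesting; events are reported with onEvent on the instance. Here is a minimal sketch, assuming we report a custom event whenever a punch record is saved; the event name "punch_saved" and its parameter keys are hypothetical names chosen for illustration:
Code:
import android.os.Bundle

// Report a custom event once a punch record is saved.
// "punch_saved" and its parameters are illustrative, not predefined event names.
private fun reportPunchSaved(empId: String, inTime: String, outTime: String) {
    val bundle = Bundle().apply {
        putString("empid", empId)
        putString("intime", inTime)
        putString("outtime", outTime)
    }
    mInstance?.onEvent("punch_saved", bundle)
}
The reported events can then be inspected on the Analytics pages of AppGallery Connect.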
Create a PunchRecord.kt data class annotated with @Entity to create a table for this class.
Code:
@Entity(tableName = "punchrecords")
data class PunchRecord(
@PrimaryKey(autoGenerate = true)
val id: Long,
val empid: String,
val intime: String,
val outtime: String
)
Create a PunchRecordDao.kt interface annotated with @Dao, responsible for defining the methods that access the database.
Code:
@Dao
interface PunchRecordDao {
@Query("SELECT * from punchrecords")
fun punchgetall(): LiveData<List<PunchRecord>>
@Insert(onConflict = OnConflictStrategy.ABORT)
suspend fun insert(item: PunchRecord)
@Query("SELECT * FROM punchrecords WHERE punchrecords.id == :id")
fun get(id: Long): LiveData<PunchRecord>
@Update
suspend fun update(vararg items: PunchRecord)
@Delete
suspend fun delete(vararg items: PunchRecord)
}
Create a PunchAppRoomDatabase.kt abstract class that extends RoomDatabase, annotated with @Database, to list the entities contained in the database and the DAOs that access them.
Code:
@Database(entities = [PunchRecord::class], version = 1)
abstract class PunchAppRoomDatabase : RoomDatabase() {
abstract fun punchrecordDao(): PunchRecordDao
companion object {
@Volatile
private var INSTANCE: PunchAppRoomDatabase? = null
fun getDatabase(context: Context): PunchAppRoomDatabase {
val tempInstance = INSTANCE
if (tempInstance != null) {
return tempInstance
}
synchronized(this) {
val instance = Room.databaseBuilder(context.applicationContext,PunchAppRoomDatabase::class.java,
"datarecords_database").fallbackToDestructiveMigration()
.build()
INSTANCE = instance
return instance
}
}
}
}
Create a PunchRepository.kt class that wraps the DAO functions.
Code:
class PunchRepository(private val punchDao: PunchRecordDao) {
val myallItems: LiveData<List<PunchRecord>> = punchDao.punchgetall()
fun getrepo(id: Long):LiveData<PunchRecord> {
return punchDao.get(id)
}
suspend fun updatepunch(item: PunchRecord) {
punchDao.update(item)
}
suspend fun insertpunch(item: PunchRecord) {
punchDao.insert(item)
}
suspend fun deletepunch(item: PunchRecord) {
punchDao.delete(item)
}
}
Create a PunchViewModel.kt class that extends AndroidViewModel and provides the PunchRepository functions.
Code:
class PunchViewModel(application: Application): AndroidViewModel(application) {
private val repository: PunchRepository
val allPunchItems: LiveData<List<PunchRecord>>
init {
Log.d(ContentValues.TAG, "Inside ViewModel init")
val dao = PunchAppRoomDatabase.getDatabase(application).punchrecordDao()
repository = PunchRepository(dao)
allPunchItems = repository.myallItems
}
fun insert(item: PunchRecord) = viewModelScope.launch {
repository.insertpunch(item)
}
fun update(item: PunchRecord) = viewModelScope.launch {
repository.updatepunch(item)
}
fun delete(item: PunchRecord) = viewModelScope.launch {
repository.deletepunch(item)
}
fun get(id: Long) = repository.getrepo(id)
}
In the EmpLoginActivity.kt activity, find the business logic for the button click.
Code:
class EmpLoginActivity : AppCompatActivity() {
private var buttonAddPunch: Button? = null
private lateinit var myViewModel: PunchViewModel
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_emp_login)
val recyclerView = recyclerview_list
val adapter = PunchAdapter(this)
recyclerView.adapter = adapter
recyclerView.layoutManager = LinearLayoutManager(this)
buttonAddPunch = findViewById(R.id.btn_punch)
buttonAddPunch!!.setOnClickListener(View.OnClickListener {
val intent = Intent(this, TimeActivity::class.java)
startActivity(intent)
})
myViewModel = ViewModelProvider(this)[PunchViewModel::class.java]
myViewModel.allPunchItems.observe(this, Observer { items ->
items?.let { adapter.setItems(it) }
})
}
}
Create a PunchAdapter.kt adapter class to hold the list.
Code:
class PunchAdapter internal constructor (context: Context) : RecyclerView.Adapter<PunchAdapter.PunchRecordViewHolder>() {
private val inflater: LayoutInflater = LayoutInflater.from(context)
private var itemsList = emptyList<PunchRecord>().toMutableList()
private val onClickListener: View.OnClickListener
init {
onClickListener = View.OnClickListener { v ->
val item = v.tag as PunchRecord
Log.d(TAG, "Setting onClickListener for item ${item.id}")
val intent = Intent(v.context, TimeActivity::class.java).apply {
putExtra(DATA_RECORD_ID, item.id)
}
v.context.startActivity(intent)
}
}
inner class PunchRecordViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) {
val itemId: TextView = itemView.findViewById(R.id.datarecord_viewholder_id)
val itemID: TextView = itemView.findViewById(R.id.punch_id)
val itemInTime: TextView = itemView.findViewById(R.id.punch_in)
val itemOutTime: TextView = itemView.findViewById(R.id.punch_out)
}
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): PunchRecordViewHolder {
val itemView = inflater.inflate(R.layout.punch_list, parent, false)
return PunchRecordViewHolder(itemView)
}
override fun onBindViewHolder(holder: PunchRecordViewHolder, position: Int) {
val current = itemsList[position]
// Needed: will be referenced in the View.OnClickListener above
holder.itemView.tag = current
with(holder) {
// Set UI values
itemId.text = current.id.toString()
// itemRecord.text = current.record
itemID.text = current.empid
itemInTime.text = current.intime
itemOutTime.text = current.outtime
// Set handlers
itemView.setOnClickListener(onClickListener)
}
}
internal fun setItems(items: List<PunchRecord>) {
this.itemsList = items.toMutableList()
notifyDataSetChanged()
}
override fun getItemCount() = itemsList.size
companion object {
const val DATA_RECORD_ID : String = "datarecord_id"
}
}
In the TimeActivity.kt activity, find the business logic for adding In Time and Out Time.
Code:
class TimeActivity : AppCompatActivity(), DatePickerDialog.OnDateSetListener, TimePickerDialog.OnTimeSetListener {
private lateinit var dataViewModel: PunchViewModel
private var recordId: Long = 0L
private var isEdit: Boolean = false
var day = 0
var month = 0
var year = 0
var hour = 0
var minute = 0
var savedDay = 0
var savedMonth = 0
var savedYear = 0
var savedHour = 0
var savedMinute = 0
var isInTime : Boolean = false
var isOutTime : Boolean = false
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_time)
dataViewModel = ViewModelProvider(this)[PunchViewModel::class.java]
if (intent.hasExtra(PunchAdapter.DATA_RECORD_ID)) {
recordId = intent.getLongExtra(PunchAdapter.DATA_RECORD_ID, 0L)
dataViewModel.get(recordId).observe(this, Observer {
val viewId = findViewById<TextView>(R.id.datarecord_id)
val viewEmpid = findViewById<EditText>(R.id.edtpunch_id)
val viewInTime = findViewById<EditText>(R.id.edtpunch_intime)
val viewOutTime = findViewById<EditText>(R.id.edtpunch_outtime)
if (it != null) {
// populate with data
viewId.text = it.id.toString()
viewEmpid.setText(it.empid)
viewInTime.setText(it.intime)
viewOutTime.setText(it.outtime)
}
})
isEdit = true
}
edtpunch_intime.setOnClickListener{
isInTime = true
isOutTime = false
getDateTimeCalendar()
DatePickerDialog(this,this,year,month,day ).show()
}
edtpunch_outtime.setOnClickListener{
getDateTimeCalendar()
isInTime = false
isOutTime = true
DatePickerDialog(this,this,year, month, day ).show()
}
btnpunch_save.setOnClickListener { view ->
val id = 0L
val mEmpid = edtpunch_id.text.toString()
val mInTime = edtpunch_intime.text.toString()
val mOutTime = edtpunch_outtime.text.toString()
if (mEmpid.isBlank() ) {
Toast.makeText(this, "Employee ID is blank", Toast.LENGTH_SHORT).show()
}
else if (mInTime.isBlank()) {
Toast.makeText(this, "In Time is blank", Toast.LENGTH_SHORT).show()
}
else if(mOutTime.isBlank()) {
Toast.makeText(this, "Out Time is blank", Toast.LENGTH_SHORT).show()
}
else {
val item = PunchRecord(id = id, empid = mEmpid, intime = mInTime, outtime = mOutTime)
dataViewModel.insert(item)
finish()
}
}
btnpunch_update.setOnClickListener { view ->
val id = datarecord_id.text.toString().toLong()
val nEmpid = edtpunch_id.text.toString()
val nInTime = edtpunch_intime.text.toString()
val nOutTime = edtpunch_outtime.text.toString()
if (nEmpid.isBlank() && nInTime.isBlank() && nOutTime.isBlank()) {
Toast.makeText(this, "Empty data is not allowed", Toast.LENGTH_SHORT).show()
} else {
val item = PunchRecord(id = id, empid = nEmpid, intime = nInTime, outtime = nOutTime)
dataViewModel.update(item)
finish()
}
}
btnpunch_delete.setOnClickListener {
val id = datarecord_id.text.toString().toLong()
val nEmpid = edtpunch_id.text.toString()
val nInTime = edtpunch_intime.text.toString()
val nOutTime = edtpunch_outtime.text.toString()
val item = PunchRecord( id =id, empid = nEmpid, intime = nInTime, outtime = nOutTime)
dataViewModel.delete(item)
finish()
}
}
private fun getDateTimeCalendar(){
val cal: Calendar = Calendar.getInstance()
day = cal.get(Calendar.DAY_OF_MONTH)
month = cal.get(Calendar.MONTH)
year = cal.get(Calendar.YEAR)
hour = cal.get(Calendar.HOUR_OF_DAY) // 24-hour value, matching the 24-hour TimePickerDialog
minute = cal.get(Calendar.MINUTE)
}
override fun onDateSet(view: DatePicker?, year: Int, month: Int, dayOfMonth: Int) {
savedDay = dayOfMonth
savedMonth = month
savedYear = year
getDateTimeCalendar()
TimePickerDialog(this, this, hour, minute, true).show()
}
override fun onTimeSet(view: TimePicker?, hourOfDay: Int, minute: Int) {
savedHour = hourOfDay
savedMinute = minute
if (isInTime) {
edtpunch_intime.setText("$savedDay-${savedMonth + 1}-$savedYear, $savedHour:$savedMinute") // month is 0-based
}
if (isOutTime) {
edtpunch_outtime.setText("$savedDay-${savedMonth + 1}-$savedYear, $savedHour:$savedMinute") // month is 0-based
}
}
}
In the activity_emp_login.xml we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
tools:context=".RoomDatabase.EmpLoginActivity">
<Button
android:id="@+id/btn_punch"
android:layout_width="200dp"
android:layout_height="50dp"
android:textSize="18dp"
android:layout_marginTop="20dp"
android:layout_marginBottom="20dp"
android:layout_gravity="center_horizontal"
android:textAllCaps="false"
android:text="Click to Punch"/>
<androidx.recyclerview.widget.RecyclerView
android:id="@+id/recyclerview_list"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</LinearLayout>
In the activity_time.xml we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
tools:context=".RoomDatabase.TimeActivity">
<TextView
android:id="@+id/datarecord_id"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="15dp"
android:layout_gravity="left"
android:hint="id "
android:padding="5dp"
android:layout_marginLeft="10dp"
android:textSize="18sp"
android:textColor="@color/black" />
<EditText
android:id="@+id/edtpunch_id"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="15dp"
android:layout_gravity="left"
android:hint="EmpID: "
android:padding="5dp"
android:layout_marginLeft="10dp"
android:textSize="18sp"
android:textColor="@color/black" />
<EditText
android:id="@+id/edtpunch_intime"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="15dp"
android:layout_gravity="left"
android:hint="In Time: "
android:textSize="18sp"
android:padding="5dp"
android:layout_marginLeft="10dp"
android:textColor="@color/black" />
<EditText
android:id="@+id/edtpunch_outtime"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="15dp"
android:layout_gravity="left"
android:hint="Out Time: "
android:textSize="18sp"
android:padding="5dp"
android:layout_marginLeft="10dp"
android:textColor="@color/black" />
<Button
android:id="@+id/btnpunch_save"
android:layout_width="150dp"
android:layout_height="50dp"
android:layout_gravity="center_horizontal"
android:textSize="18dp"
android:textAllCaps="false"
android:layout_marginTop="5dp"
android:text="Save"/>
<Button
android:id="@+id/btnpunch_update"
android:layout_width="150dp"
android:layout_height="50dp"
android:layout_gravity="center_horizontal"
android:textSize="18dp"
android:textAllCaps="false"
android:layout_marginTop="5dp"
android:text="Update"/>
<Button
android:id="@+id/btnpunch_delete"
android:layout_width="150dp"
android:layout_height="50dp"
android:layout_gravity="center_horizontal"
android:textSize="18dp"
android:textAllCaps="false"
android:layout_marginTop="5dp"
android:text="Delete"/>
</LinearLayout>
In the punch_list.xml we can create the UI screen for customized items.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.cardview.widget.CardView xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="wrap_content">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical"
android:padding="10dp">
<TextView
android:id="@+id/datarecord_viewholder_id"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginBottom="5dp"
android:layout_marginTop="10dp"
android:textSize="18sp"
android:text="id: "
android:textColor="@color/black"
android:textStyle="bold" />
<TextView
android:id="@+id/punch_id"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:textSize="18sp"
android:layout_marginTop="10dp"
android:textStyle="bold"
android:textColor="@color/black"
android:text="ID: " />
<TextView
android:id="@+id/punch_in"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:textSize="18sp"
android:layout_marginTop="10dp"
android:textStyle="bold"
android:textColor="@color/black"
android:text="IN Time: " />
<TextView
android:id="@+id/punch_out"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:textSize="18sp"
android:layout_marginTop="10dp"
android:textStyle="bold"
android:textColor="@color/black"
android:text="OUT Time: " />
</LinearLayout>
</androidx.cardview.widget.CardView>
Demo
Tips and Tricks
1. Make sure you are already registered as Huawei developer.
2. Set the minSDK version to 24 or later, otherwise you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to app folder.
4. Make sure you have added SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
Conclusion
In this article, we have learnt how to record the In Time and Out Time of users using the Room database, and how to integrate the Analytics Kit for event tracking.
I hope you have found this article helpful. If so, please leave likes and comments.
Reference
Analytics Kit - Documentation
Analytics Kit – Training Video
Room Database
Find hand points using Hand Gesture Recognition feature by Huawei ML Kit in Android (Kotlin)
Introduction
In this article, we can learn how to find hand key points using the Hand Gesture Recognition feature of Huawei ML Kit. This service provides two capabilities: hand keypoint detection and hand gesture recognition. The hand keypoint detection capability can detect 21 hand keypoints (including fingertips, knuckles, and wrists) and return their positions. The hand gesture recognition capability can detect and return the positions of all rectangular areas of the hand from images and videos, along with the type and confidence of a gesture. It can recognize 14 gestures, including thumbs-up/down, the OK sign, fist, finger heart, and number gestures from 1 to 9. Both capabilities support detection from static images and real-time camera streams.
Use Cases
Hand keypoint detection is widely used in daily life. For example, after integrating this capability, users can convert the detected hand keypoints into a 2D model, and synchronize the model to the character model, to produce a vivid 2D animation. In addition, when shooting a short video, special effects can be generated based on dynamic hand trajectories. This allows users to play finger games, thereby making the video shooting process more creative and interactive. Hand gesture recognition enables your app to call various commands by recognizing users' gestures. Users can control their smart home appliances without touching them. In this way, this capability makes the human-machine interaction more efficient.
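Before the setup steps and the full activities, here is a minimal Kotlin sketch of the static-image path, assuming a bitmap variable that already holds the photo. The analyzer calls come from the hand keypoint SDK added below; treat this as an outline rather than the full implementation shown later:
Code:
import android.graphics.Bitmap
import com.huawei.hms.mlsdk.common.MLFrame
import com.huawei.hms.mlsdk.handkeypoint.MLHandKeypointAnalyzerFactory
import com.huawei.hms.mlsdk.handkeypoint.MLHandKeypointAnalyzerSetting

// Minimal sketch: detect hand keypoints in a static bitmap and log their positions.
fun analyseHandKeypoints(bitmap: Bitmap) {
    val setting = MLHandKeypointAnalyzerSetting.Factory()
        .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL) // keypoints and hand rectangles
        .setMaxHandResults(2)
        .create()
    val analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
    val frame = MLFrame.fromBitmap(bitmap)
    analyzer.asyncAnalyseFrame(frame)
        .addOnSuccessListener { hands ->
            for (hand in hands) {
                for (point in hand.handKeypoints) {
                    println("type=${point.type} x=${point.pointX} y=${point.pointY}")
                }
            }
            analyzer.stop() // release the analyzer once detection is done
        }
        .addOnFailureListener { e -> println("Hand keypoint detection failed: ${e.message}") }
}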
Requirements
1. Any operating system (macOS, Linux, or Windows).
2. A Huawei phone with HMS Core (APK) 4.0.0.300 or later.
3. A laptop or desktop with Android Studio, JDK 1.8, SDK Platform 26, and Gradle 4.6 or later installed.
4. Minimum API level 21 is required.
5. EMUI 9.0.0 or later is required.
How to integrate HMS Dependencies
1. First, register as a Huawei developer and complete identity verification on the Huawei Developers website. Refer to Register a Huawei ID.
2. Create a project in Android Studio. Refer to Creating an Android Studio Project.
3. Generate an SHA-256 certificate fingerprint.
4. To generate the SHA-256 certificate fingerprint, click Gradle in the upper-right corner of the Android Studio window, choose Project Name > Tasks > android, and then click signingReport, as follows (a command-line alternative is shown after this list).
Note: Project Name refers to the name of the project you created.
5. Create an App in AppGallery Connect.
6. Download the agconnect-services.json file from App information, and copy it into the app directory of your Android project, as follows.
7. Enter the SHA-256 certificate fingerprint and click Save, as follows.
Note: Steps 1 to 7 are common to all Huawei kits.
8. Click the Manage APIs tab and enable ML Kit.
9. Add the below Maven URL inside the repositories of buildscript and allprojects, and the below classpath inside the dependencies of buildscript, in the project-level build.gradle file. Refer to Add Configuration.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
10. Add the below plugin and dependencies in the app-level build.gradle file.
Code:
apply plugin: 'com.huawei.agconnect'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.6.5.300'
// ML Kit Hand Gesture
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.1.0.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.1.0.300'
11. Now sync the Gradle files.
12. Add the required permissions to the AndroidManifest.xml file.
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
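As an alternative to the Gradle panel in step 4, the same SHA-256 fingerprint can be obtained from the command line. This is a sketch assuming the default debug keystore location and credentials; adjust the keystore path and alias for a release key.
Code:
# Run the signing report from the project root.
./gradlew signingReport

# Or read the debug keystore directly (default Android debug credentials).
keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android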
Let us move to development
I have created a project in Android Studio with an empty activity. Let us start coding.
In MainActivity.kt, we can find the business logic for the buttons.
Kotlin:
class MainActivity : AppCompatActivity() {
private var staticButton: Button? = null
private var liveButton: Button? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
staticButton = findViewById(R.id.btn_static)
liveButton = findViewById(R.id.btn_live)
staticButton!!.setOnClickListener {
val intent = Intent(this@MainActivity, StaticHandKeyPointAnalyse::class.java)
startActivity(intent)
}
liveButton!!.setOnClickListener {
val intent = Intent(this@MainActivity, LiveHandKeyPointAnalyse::class.java)
startActivity(intent)
}
}
}
In LiveHandKeyPointAnalyse.kt, we can find the business logic for live analysis.
Kotlin:
class LiveHandKeyPointAnalyse : AppCompatActivity(), View.OnClickListener {
private val TAG: String = LiveHandKeyPointAnalyse::class.java.getSimpleName()
private var mPreview: LensEnginePreview? = null
private var mOverlay: GraphicOverlay? = null
private var mFacingSwitch: Button? = null
private var mAnalyzer: MLHandKeypointAnalyzer? = null
private var mLensEngine: LensEngine? = null
private var mLensType = 0
private var isFront = false
private var isPermissionRequested = false
private val CAMERA_PERMISSION_CODE = 0
private val ALL_PERMISSION = arrayOf(Manifest.permission.CAMERA)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_live_hand_key_point_analyse)
if (savedInstanceState != null) {
mLensType = savedInstanceState.getInt("lensType")
}
initView()
createHandAnalyzer()
if (Camera.getNumberOfCameras() == 1) {
mFacingSwitch!!.visibility = View.GONE
}
// Checking Camera Permissions
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
} else {
checkPermission()
}
}
private fun initView() {
mPreview = findViewById(R.id.hand_preview)
mOverlay = findViewById(R.id.hand_overlay)
mFacingSwitch = findViewById(R.id.handswitch)
mFacingSwitch!!.setOnClickListener(this)
}
private fun createHandAnalyzer() {
// Create an analyzer. You can create an analyzer using the provided customized hand keypoint detection parameter: MLHandKeypointAnalyzerSetting
val setting = MLHandKeypointAnalyzerSetting.Factory()
.setMaxHandResults(2)
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
.create()
mAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
mAnalyzer!!.setTransactor(HandAnalyzerTransactor(this, mOverlay!!) )
}
// Check the permissions required by the SDK.
private fun checkPermission() {
if (Build.VERSION.SDK_INT >= 23 && !isPermissionRequested) {
isPermissionRequested = true
val permissionsList = ArrayList<String>()
for (perm in getAllPermission()!!) {
if (PackageManager.PERMISSION_GRANTED != checkSelfPermission(perm.toString())) {
permissionsList.add(perm.toString())
}
}
if (!permissionsList.isEmpty()) {
requestPermissions(permissionsList.toTypedArray(), CAMERA_PERMISSION_CODE)
}
}
}
private fun getAllPermission(): MutableList<Array<String>> {
return Collections.unmodifiableList(listOf(ALL_PERMISSION))
}
private fun createLensEngine() {
val context = this.applicationContext
// Create LensEngine.
mLensEngine = LensEngine.Creator(context, mAnalyzer)
.setLensType(mLensType)
.applyDisplayDimension(640, 480)
.applyFps(25.0f)
.enableAutomaticFocus(true)
.create()
}
private fun startLensEngine() {
if (mLensEngine != null) {
try {
mPreview!!.start(mLensEngine, mOverlay)
} catch (e: IOException) {
Log.e(TAG, "Failed to start lens engine.", e)
mLensEngine!!.release()
mLensEngine = null
}
}
}
// Permission application callback.
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String?>, grantResults: IntArray) {
var hasAllGranted = true
if (requestCode == CAMERA_PERMISSION_CODE) {
if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
} else if (grantResults[0] == PackageManager.PERMISSION_DENIED) {
hasAllGranted = false
if (!ActivityCompat.shouldShowRequestPermissionRationale(this, permissions[0]!!)) {
showWarningDialog()
} else {
Toast.makeText(this, R.string.toast, Toast.LENGTH_SHORT).show()
finish()
}
}
return
}
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
}
override fun onSaveInstanceState(outState: Bundle) {
// Save the currently selected lens type, not the initial constant.
outState.putInt("lensType", mLensType)
super.onSaveInstanceState(outState)
}
private class HandAnalyzerTransactor internal constructor(mainActivity: LiveHandKeyPointAnalyse?,
private val mGraphicOverlay: GraphicOverlay) : MLTransactor<MLHandKeypoints?> {
// Process the results returned by the analyzer.
override fun transactResult(result: MLAnalyzer.Result<MLHandKeypoints?>) {
mGraphicOverlay.clear()
val handKeypointsSparseArray = result.analyseList
val list: MutableList<MLHandKeypoints?> = ArrayList()
for (i in 0 until handKeypointsSparseArray.size()) {
list.add(handKeypointsSparseArray.valueAt(i))
}
val graphic = HandKeypointGraphic(mGraphicOverlay, list)
mGraphicOverlay.add(graphic)
}
override fun destroy() {
mGraphicOverlay.clear()
}
}
override fun onClick(v: View?) {
when (v!!.id) {
R.id.handswitch -> switchCamera()
else -> {}
}
}
private fun switchCamera() {
isFront = !isFront
mLensType = if (isFront) {
LensEngine.FRONT_LENS
} else {
LensEngine.BACK_LENS
}
if (mLensEngine != null) {
mLensEngine!!.close()
}
createLensEngine()
startLensEngine()
}
private fun showWarningDialog() {
val dialog = AlertDialog.Builder(this)
dialog.setMessage(R.string.Information_permission)
.setPositiveButton(R.string.go_authorization,
DialogInterface.OnClickListener { dialog, which ->
val intent = Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS)
val uri = Uri.fromParts("package", applicationContext.packageName, null)
intent.data = uri
startActivity(intent)
})
.setNegativeButton("Cancel", DialogInterface.OnClickListener { dialog, which -> finish() })
.setOnCancelListener(dialogInterface)
dialog.setCancelable(false)
dialog.show()
}
var dialogInterface = DialogInterface.OnCancelListener { }
override fun onResume() {
super.onResume()
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
startLensEngine()
} else {
checkPermission()
}
}
override fun onPause() {
super.onPause()
mPreview!!.stop()
}
override fun onDestroy() {
super.onDestroy()
if (mLensEngine != null) {
mLensEngine!!.release()
}
if (mAnalyzer != null) {
mAnalyzer!!.stop()
}
}
}
Create the LensEnginePreview.kt class to hold the business logic for the lens engine preview.
Kotlin:
class LensEnginePreview(private val mContext: Context, attrs: AttributeSet?) : ViewGroup(mContext, attrs) {
private val mSurfaceView: SurfaceView
private var mStartRequested = false
private var mSurfaceAvailable = false
private var mLensEngine: LensEngine? = null
private var mOverlay: GraphicOverlay? = null
@Throws(IOException::class)
fun start(lensEngine: LensEngine?) {
if (lensEngine == null) {
stop()
}
mLensEngine = lensEngine
if (mLensEngine != null) {
mStartRequested = true
startIfReady()
}
}
@Throws(IOException::class)
fun start(lensEngine: LensEngine?, overlay: GraphicOverlay?) {
mOverlay = overlay
this.start(lensEngine)
}
fun stop() {
if (mLensEngine != null) {
mLensEngine!!.close()
}
}
@Throws(IOException::class)
private fun startIfReady() {
if (mStartRequested && mSurfaceAvailable) {
mLensEngine!!.run(mSurfaceView.holder)
if (mOverlay != null) {
val size = mLensEngine!!.displayDimension
val min = Math.min(size.width, size.height)
val max = Math.max(size.width, size.height)
if (isPortraitMode) {
// Swap width and height sizes when in portrait, since it will be rotated by 90 degrees.
mOverlay!!.setCameraInfo(min, max, mLensEngine!!.lensType)
} else {
mOverlay!!.setCameraInfo(max, min, mLensEngine!!.lensType)
}
mOverlay!!.clear()
}
mStartRequested = false
}
}
private inner class SurfaceCallback : SurfaceHolder.Callback {
override fun surfaceCreated(surface: SurfaceHolder) {
mSurfaceAvailable = true
try {
startIfReady()
} catch (e: IOException) {
Log.e(TAG, "Could not start camera source.", e)
}
}
override fun surfaceDestroyed(surface: SurfaceHolder) {
mSurfaceAvailable = false
}
override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {}
}
override fun onLayout(changed: Boolean, left: Int, top: Int, right: Int, bottom: Int) {
var previewWidth = 480
var previewHeight = 360
if (mLensEngine != null) {
val size = mLensEngine!!.displayDimension
if (size != null) {
previewWidth = size.width
previewHeight = size.height
}
}
// Swap width and height sizes when in portrait, since it will be rotated 90 degrees
if (isPortraitMode) {
val tmp = previewWidth
previewWidth = previewHeight
previewHeight = tmp
}
val viewWidth = right - left
val viewHeight = bottom - top
val childWidth: Int
val childHeight: Int
var childXOffset = 0
var childYOffset = 0
val widthRatio = viewWidth.toFloat() / previewWidth.toFloat()
val heightRatio = viewHeight.toFloat() / previewHeight.toFloat()
// To fill the view with the camera preview, while also preserving the correct aspect ratio,
// it is usually necessary to slightly oversize the child and to crop off portions along one
// of the dimensions. We scale up based on the dimension requiring the most correction, and
// compute a crop offset for the other dimension.
if (widthRatio > heightRatio) {
childWidth = viewWidth
childHeight = (previewHeight.toFloat() * widthRatio).toInt()
childYOffset = (childHeight - viewHeight) / 2
} else {
childWidth = (previewWidth.toFloat() * heightRatio).toInt()
childHeight = viewHeight
childXOffset = (childWidth - viewWidth) / 2
}
for (i in 0 until this.childCount) {
// One dimension will be cropped. We shift child over or up by this offset and adjust
// the size to maintain the proper aspect ratio.
getChildAt(i).layout(-1 * childXOffset, -1 * childYOffset,
childWidth - childXOffset, childHeight - childYOffset)
}
try {
startIfReady()
} catch (e: IOException) {
Log.e(TAG, "Could not start camera source.", e)
}
}
private val isPortraitMode: Boolean
get() {
val orientation = mContext.resources.configuration.orientation
if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
return false
}
if (orientation == Configuration.ORIENTATION_PORTRAIT) {
return true
}
Log.d(TAG, "isPortraitMode returning false by default")
return false
}
companion object {
private val TAG = LensEnginePreview::class.java.simpleName
}
init {
mSurfaceView = SurfaceView(mContext)
mSurfaceView.holder.addCallback(SurfaceCallback())
this.addView(mSurfaceView)
}
}
Create the HandKeypointGraphic.kt class to hold the business logic for drawing hand key points.
Kotlin:
class HandKeypointGraphic(overlay: GraphicOverlay?, private val handKeypoints: MutableList<MLHandKeypoints?>) : GraphicOverlay.Graphic(overlay!!) {
private val rectPaint: Paint
private val idPaintnew: Paint
companion object {
private const val BOX_STROKE_WIDTH = 5.0f
}
private fun translateRect(rect: Rect): Rect {
var left: Float = translateX(rect.left)
var right: Float = translateX(rect.right)
var bottom: Float = translateY(rect.bottom)
var top: Float = translateY(rect.top)
if (left > right) {
val size = left
left = right
right = size
}
if (bottom < top) {
val size = bottom
bottom = top
top = size
}
return Rect(left.toInt(), top.toInt(), right.toInt(), bottom.toInt())
}
init {
val selectedColor = Color.WHITE
idPaintnew = Paint()
idPaintnew.color = Color.GREEN
idPaintnew.textSize = 32f
rectPaint = Paint()
rectPaint.color = selectedColor
rectPaint.style = Paint.Style.STROKE
rectPaint.strokeWidth = BOX_STROKE_WIDTH
}
override fun draw(canvas: Canvas?) {
for (i in handKeypoints.indices) {
val mHandKeypoints = handKeypoints[i]
if (mHandKeypoints!!.getHandKeypoints() == null) {
continue
}
val rect = translateRect(handKeypoints[i]!!.getRect())
canvas!!.drawRect(rect, rectPaint)
for (handKeypoint in mHandKeypoints.getHandKeypoints()) {
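// Skip keypoints reported at the origin (0, 0); these indicate no valid detection for that point.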
if (!(Math.abs(handKeypoint.getPointX() - 0f) == 0f && Math.abs(handKeypoint.getPointY() - 0f) == 0f)) {
canvas!!.drawCircle(translateX(handKeypoint.getPointX().toInt()),
translateY(handKeypoint.getPointY().toInt()), 24f, idPaintnew)
}
}
}
}
}
Create the GraphicOverlay.kt class to hold the business logic for the graphic overlay.
Kotlin:
class GraphicOverlay(context: Context?, attrs: AttributeSet?) : View(context, attrs) {
private val mLock = Any()
private var mPreviewWidth = 0
private var mWidthScaleFactor = 1.0f
private var mPreviewHeight = 0
private var mHeightScaleFactor = 1.0f
private var mFacing = LensEngine.BACK_LENS
private val mGraphics: MutableSet<Graphic> = HashSet()
// Base class for a custom graphics object to be rendered within the graphic overlay. Subclass
// this and implement the [Graphic.draw] method to define the graphics element. Add instances to the overlay using [GraphicOverlay.add].
abstract class Graphic(private val mOverlay: GraphicOverlay) {
// Draw the graphic on the supplied canvas. Drawing should use the following methods to
// convert to view coordinates for the graphics that are drawn:
// 1. [Graphic.scaleX] and [Graphic.scaleY] adjust the size of the supplied value from the preview scale to the view scale.
// 2. [Graphic.translateX] and [Graphic.translateY] adjust the coordinate from the preview's coordinate system to the view coordinate system.
// @param canvas drawing canvas
abstract fun draw(canvas: Canvas?)
// Adjusts a horizontal value of the supplied value from the preview scale to the view scale.
fun scaleX(horizontal: Float): Float {
return horizontal * mOverlay.mWidthScaleFactor
}
// Adjusts a vertical value of the supplied value from the preview scale to the view scale.
fun scaleY(vertical: Float): Float {
return vertical * mOverlay.mHeightScaleFactor
}
// Adjusts the x coordinate from the preview's coordinate system to the view coordinate system.
fun translateX(x: Int): Float {
return if (mOverlay.mFacing == LensEngine.FRONT_LENS) {
mOverlay.width - scaleX(x.toFloat())
} else {
scaleX(x.toFloat())
}
}
// Adjusts the y coordinate from the preview's coordinate system to the view coordinate system.
fun translateY(y: Int): Float {
return scaleY(y.toFloat())
}
}
// Removes all graphics from the overlay.
fun clear() {
synchronized(mLock) { mGraphics.clear() }
postInvalidate()
}
// Adds a graphic to the overlay.
fun add(graphic: Graphic) {
synchronized(mLock) { mGraphics.add(graphic) }
postInvalidate()
}
// Sets the camera attributes for size and facing direction, which informs how to transform image coordinates later.
fun setCameraInfo(previewWidth: Int, previewHeight: Int, facing: Int) {
synchronized(mLock) {
mPreviewWidth = previewWidth
mPreviewHeight = previewHeight
mFacing = facing
}
postInvalidate()
}
// Draws the overlay with its associated graphic objects.
override fun onDraw(canvas: Canvas) {
super.onDraw(canvas)
synchronized(mLock) {
if (mPreviewWidth != 0 && mPreviewHeight != 0) {
mWidthScaleFactor = canvas.width.toFloat() / mPreviewWidth.toFloat()
mHeightScaleFactor = canvas.height.toFloat() / mPreviewHeight.toFloat()
}
for (graphic in mGraphics) {
graphic.draw(canvas)
}
}
}
}
In StaticHandKeyPointAnalyse.kt, we can find the business logic for static hand key point analysis.
Kotlin:
class StaticHandKeyPointAnalyse : AppCompatActivity() {
var analyzer: MLHandKeypointAnalyzer? = null
var bitmap: Bitmap? = null
var mutableBitmap: Bitmap? = null
var mlFrame: MLFrame? = null
var imageSelected: ImageView? = null
var picUri: Uri? = null
var pickButton: Button? = null
var analyzeButton:Button? = null
var permissions = arrayOf(Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.CAMERA)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_static_hand_key_point_analyse)
pickButton = findViewById(R.id.pick_img)
analyzeButton = findViewById(R.id.analyse_img)
imageSelected = findViewById(R.id.selected_img)
initialiseSettings()
pickButton!!.setOnClickListener(View.OnClickListener {
pickRequiredImage()
})
analyzeButton!!.setOnClickListener(View.OnClickListener {
asynchronouslyStaticHandkey()
})
checkRequiredPermission()
}
private fun checkRequiredPermission() {
if (PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)
|| PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE)
|| PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)) {
ActivityCompat.requestPermissions(this, permissions, 111)
}
}
private fun initialiseSettings() {
// MLHandKeypointAnalyzerSetting.TYPE_ALL indicates that all results are returned.
// MLHandKeypointAnalyzerSetting.TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
// MLHandKeypointAnalyzerSetting.TYPE_RECT_ONLY indicates that only palm information is returned.
val setting = MLHandKeypointAnalyzerSetting.Factory()
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
// Set the maximum number of hand regions that can be detected in an image. By default, a maximum of 10 hand regions can be detected.
.setMaxHandResults(1)
.create()
analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
}
private fun asynchronouslyStaticHandkey() {
val task = analyzer!!.asyncAnalyseFrame(mlFrame)
task.addOnSuccessListener { results ->
// Guard against an empty result list before accessing the first element.
if (results.isEmpty()) {
checkAnalyserForStop()
return@addOnSuccessListener
}
val canvas = Canvas(mutableBitmap!!)
val paint = Paint()
paint.color = Color.GREEN
paint.style = Paint.Style.FILL
val mlHandKeypoints = results[0]
for (mlHandKeypoint in mlHandKeypoints.getHandKeypoints()) {
canvas.drawCircle(mlHandKeypoint.pointX, mlHandKeypoint.pointY, 48f, paint)
}
imageSelected!!.setImageBitmap(mutableBitmap)
checkAnalyserForStop()
}.addOnFailureListener { // Detection failure.
checkAnalyserForStop()
}
}
private fun checkAnalyserForStop() {
if (analyzer != null) {
analyzer!!.stop()
}
}
private fun pickRequiredImage() {
val intent = Intent()
intent.type = "image/*"
intent.action = Intent.ACTION_PICK
startActivityForResult(Intent.createChooser(intent, "Select Picture"), 20)
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == 20 && resultCode == RESULT_OK && null != data) {
picUri = data.data
imageSelected!!.setImageURI(picUri)
imageSelected!!.invalidate()
val drawable = imageSelected!!.drawable as BitmapDrawable
bitmap = drawable.bitmap
mutableBitmap = bitmap!!.copy(Bitmap.Config.ARGB_8888, true)
mlFrame = MLFrame.fromBitmap(bitmap)
}
}
}
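Note that startActivityForResult and onActivityResult are deprecated in recent AndroidX releases. A minimal sketch of the same image-pick flow using the Activity Result API is shown below; it assumes the androidx.activity dependency is available, and handleSelectedImage is a hypothetical helper standing in for the bitmap and MLFrame setup above.
Kotlin:
import android.net.Uri
import androidx.activity.result.contract.ActivityResultContracts

// Sketch: register the contract as an Activity property (must be created before onStart).
private val pickImage = registerForActivityResult(ActivityResultContracts.GetContent()) { uri: Uri? ->
    // handleSelectedImage is a hypothetical helper that would set the ImageView,
    // copy the bitmap, and build the MLFrame as in onActivityResult above.
    uri?.let { handleSelectedImage(it) }
}

private fun pickRequiredImage() {
    // Launch the system picker filtered to images.
    pickImage.launch("image/*")
}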
In activity_main.xml, we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<Button
android:id="@+id/btn_static"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Static Detection"
android:textAllCaps="false"
android:textSize="18sp"
app:layout_constraintBottom_toTopOf="@+id/btn_live"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintRight_toRightOf="parent"
android:textColor="@color/black"
style="@style/Widget.MaterialComponents.Button.MyTextButton"
app:layout_constraintTop_toTopOf="parent" />
<Button
android:id="@+id/btn_live"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Live Detection"
android:textAllCaps="false"
android:textSize="18sp"
android:layout_marginBottom="150dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintLeft_toLeftOf="parent"
android:textColor="@color/black"
style="@style/Widget.MaterialComponents.Button.MyTextButton"
app:layout_constraintRight_toRightOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
In activity_live_hand_key_point_analyse.xml, we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".LiveHandKeyPointAnalyse">
<com.example.mlhandgesturesample.LensEnginePreview
android:id="@+id/hand_preview"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:ignore="MissingClass">
<com.example.mlhandgesturesample.GraphicOverlay
android:id="@+id/hand_overlay"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</com.example.mlhandgesturesample.LensEnginePreview>
<Button
android:id="@+id/handswitch"
android:layout_width="35dp"
android:layout_height="35dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
android:layout_marginBottom="35dp"
android:background="@drawable/front_back_switch"
android:textOff=""
android:textOn=""
tools:ignore="MissingConstraints" />
</androidx.constraintlayout.widget.ConstraintLayout>
In activity_static_hand_key_point_analyse.xml, we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".StaticHandKeyPointAnalyse">
<com.google.android.material.button.MaterialButton
android:id="@+id/pick_img"
android:text="Pick Image"
android:textSize="18sp"
android:textColor="@android:color/black"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textAllCaps="false"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@id/selected_img"
app:layout_constraintLeft_toLeftOf="@id/selected_img"
app:layout_constraintRight_toRightOf="@id/selected_img"
style="@style/Widget.MaterialComponents.Button.MyTextButton"/>
<ImageView
android:visibility="visible"
android:id="@+id/selected_img"
android:layout_width="350dp"
android:layout_height="350dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintRight_toRightOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<com.google.android.material.button.MaterialButton
android:id="@+id/analyse_img"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textColor="@android:color/black"
android:text="Analyse"
android:textSize="18sp"
android:textAllCaps="false"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintTop_toBottomOf="@id/selected_img"
app:layout_constraintLeft_toLeftOf="@id/selected_img"
app:layout_constraintRight_toRightOf="@id/selected_img"
style="@style/Widget.MaterialComponents.Button.MyTextButton"/>
</androidx.constraintlayout.widget.ConstraintLayout>
Demo
Tips and Tricks
1. Make sure you are already registered as a Huawei developer.
2. Set the minSdkVersion to 21 or later; otherwise, you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to the app folder.
4. Make sure you have added the SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
Conclusion
In this article, we have learned how to find hand key points using the Hand Gesture Recognition feature of Huawei ML Kit, covering both of its capabilities: hand keypoint detection (21 hand keypoints, including fingertips, knuckles, and wrists) and hand gesture recognition (14 gestures, detected from static images and real-time camera streams along with their positions and confidence).
I hope you have read this article. If you found it helpful, please like and comment.
Reference
ML Kit – Hand Gesture Recognition
ML Kit – Training Video
