Find hand points using Hand Gesture Recognition feature by Huawei ML Kit in Android (Kotlin) - Huawei Developers

Introduction
In this article, we will learn how to find hand key points using the Hand Gesture Recognition feature of Huawei ML Kit. This service provides two capabilities: hand keypoint detection and hand gesture recognition. The hand keypoint detection capability can detect 21 hand keypoints (including fingertips, knuckles, and wrists) and return their positions. The hand gesture recognition capability can detect and return the rectangular areas of hands in images and videos, along with the type and confidence of each gesture. It can recognize 14 gestures, including thumbs-up/down, the OK sign, fist, finger heart, and the number gestures from 1 to 9. Both capabilities support detection from static images and real-time camera streams.
Use Cases
Hand keypoint detection is widely used in daily life. For example, after integrating this capability, users can convert the detected hand keypoints into a 2D model and synchronize that model to a character model to produce a vivid 2D animation. In addition, when shooting a short video, special effects can be generated based on dynamic hand trajectories. This allows users to play finger games, making the video shooting process more creative and interactive. Hand gesture recognition enables your app to invoke various commands by recognizing users' gestures, so users can control their smart home appliances without touching them. In this way, the capability makes human-machine interaction more efficient.
Requirements
1. Any operating system (macOS, Linux, or Windows).
2. A Huawei phone with HMS Core 4.0.0.300 or later.
3. A laptop or desktop with Android Studio, JDK 1.8, SDK Platform 26, and Gradle 4.6 or later installed.
4. Minimum API level 21.
5. A device running EMUI 9.0.0 or later.
How to integrate HMS Dependencies
1. First, register as a Huawei developer and complete identity verification on the Huawei Developers website; refer to Register a Huawei ID.
2. Create a project in Android Studio; refer to Creating an Android Studio Project.
3. Generate a SHA-256 certificate fingerprint.
4. To generate the SHA-256 certificate fingerprint, click Gradle in the upper-right corner of the Android project, choose Project Name > Tasks > android, and then click signingReport, as follows.
Note: Project Name is the name you gave your project.
5. Create an app in AppGallery Connect.
6. Download the agconnect-services.json file from App information, then copy and paste it into the app directory of your Android project, as follows.
7. Enter the SHA-256 certificate fingerprint and click Save, as follows.
Note: Steps 1 to 7 above are common to all Huawei kits.
8. Click the Manage APIs tab and enable ML Kit.
9. Add the below Maven URL in the build.gradle (Project) file, under the repositories of both buildscript and allprojects, and the classpath under buildscript > dependencies; refer to Add Configuration.
Gradle:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
10. Add the below plugin and dependencies in build.gradle(Module) file.
Gradle:
apply plugin: 'com.huawei.agconnect'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.6.5.300'
// ML Kit Hand Gesture
// Import the base SDK
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.1.0.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.1.0.300'
11. Now sync the Gradle files.
12. Add the required permissions to the AndroidManifest.xml file.
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
Let us move to development
I have created a project in Android Studio with an empty activity; let us start coding.
In MainActivity.kt we can find the business logic for the buttons.
Kotlin:
class MainActivity : AppCompatActivity() {
private var staticButton: Button? = null
private var liveButton: Button? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
staticButton = findViewById(R.id.btn_static)
liveButton = findViewById(R.id.btn_live)
staticButton!!.setOnClickListener {
val intent = Intent(this@MainActivity, StaticHandKeyPointAnalyse::class.java)
startActivity(intent)
}
liveButton!!.setOnClickListener {
val intent = Intent(this@MainActivity, LiveHandKeyPointAnalyse::class.java)
startActivity(intent)
}
}
}
In LiveHandKeyPointAnalyse.kt we can find the business logic for live analysis.
Kotlin:
class LiveHandKeyPointAnalyse : AppCompatActivity(), View.OnClickListener {
private val TAG: String = LiveHandKeyPointAnalyse::class.java.getSimpleName()
private var mPreview: LensEnginePreview? = null
private var mOverlay: GraphicOverlay? = null
private var mFacingSwitch: Button? = null
private var mAnalyzer: MLHandKeypointAnalyzer? = null
private var mLensEngine: LensEngine? = null
private val lensType = LensEngine.BACK_LENS
private var mLensType = 0
private var isFront = false
private var isPermissionRequested = false
private val CAMERA_PERMISSION_CODE = 0
private val ALL_PERMISSION = arrayOf(Manifest.permission.CAMERA)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_live_hand_key_point_analyse)
if (savedInstanceState != null) {
mLensType = savedInstanceState.getInt("lensType")
}
initView()
createHandAnalyzer()
if (Camera.getNumberOfCameras() == 1) {
mFacingSwitch!!.visibility = View.GONE
}
// Checking Camera Permissions
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
} else {
checkPermission()
}
}
private fun initView() {
mPreview = findViewById(R.id.hand_preview)
mOverlay = findViewById(R.id.hand_overlay)
mFacingSwitch = findViewById(R.id.handswitch)
mFacingSwitch!!.setOnClickListener(this)
}
private fun createHandAnalyzer() {
// Create an analyzer. You can create it using the customized hand keypoint detection parameter: MLHandKeypointAnalyzerSetting
val setting = MLHandKeypointAnalyzerSetting.Factory()
.setMaxHandResults(2)
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
.create()
mAnalyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
mAnalyzer!!.setTransactor(HandAnalyzerTransactor(this, mOverlay!!) )
}
// Check the permissions required by the SDK.
private fun checkPermission() {
if (Build.VERSION.SDK_INT >= 23 && !isPermissionRequested) {
isPermissionRequested = true
val permissionsList = ArrayList<String>()
for (perm in getAllPermission()) {
if (PackageManager.PERMISSION_GRANTED != checkSelfPermission(perm)) {
permissionsList.add(perm)
}
}
if (permissionsList.isNotEmpty()) {
requestPermissions(permissionsList.toTypedArray(), CAMERA_PERMISSION_CODE)
}
}
}
private fun getAllPermission(): List<String> {
return Collections.unmodifiableList(ALL_PERMISSION.toList())
}
private fun createLensEngine() {
val context = this.applicationContext
// Create LensEngine.
mLensEngine = LensEngine.Creator(context, mAnalyzer)
.setLensType(mLensType)
.applyDisplayDimension(640, 480)
.applyFps(25.0f)
.enableAutomaticFocus(true)
.create()
}
private fun startLensEngine() {
if (mLensEngine != null) {
try {
mPreview!!.start(mLensEngine, mOverlay)
} catch (e: IOException) {
Log.e(TAG, "Failed to start lens engine.", e)
mLensEngine!!.release()
mLensEngine = null
}
}
}
// Permission application callback.
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String?>, grantResults: IntArray) {
var hasAllGranted = true
if (requestCode == CAMERA_PERMISSION_CODE) {
if (grantResults[0] == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
} else if (grantResults[0] == PackageManager.PERMISSION_DENIED) {
hasAllGranted = false
if (!ActivityCompat.shouldShowRequestPermissionRationale(this, permissions[0]!!)) {
showWaringDialog()
} else {
Toast.makeText(this, R.string.toast, Toast.LENGTH_SHORT).show()
finish()
}
}
return
}
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
}
override fun onSaveInstanceState(outState: Bundle) {
outState.putInt("lensType", mLensType)
super.onSaveInstanceState(outState)
}
private class HandAnalyzerTransactor internal constructor(mainActivity: LiveHandKeyPointAnalyse?,
private val mGraphicOverlay: GraphicOverlay) : MLTransactor<MLHandKeypoints?> {
// Process the results returned by the analyzer.
override fun transactResult(result: MLAnalyzer.Result<MLHandKeypoints?>) {
mGraphicOverlay.clear()
val handKeypointsSparseArray = result.analyseList
val list: MutableList<MLHandKeypoints?> = ArrayList()
for (i in 0 until handKeypointsSparseArray.size()) {
list.add(handKeypointsSparseArray.valueAt(i))
}
val graphic = HandKeypointGraphic(mGraphicOverlay, list)
mGraphicOverlay.add(graphic)
}
override fun destroy() {
mGraphicOverlay.clear()
}
}
override fun onClick(v: View?) {
when (v!!.id) {
R.id.handswitch -> switchCamera()
else -> {}
}
}
private fun switchCamera() {
isFront = !isFront
mLensType = if (isFront) {
LensEngine.FRONT_LENS
} else {
LensEngine.BACK_LENS
}
if (mLensEngine != null) {
mLensEngine!!.close()
}
createLensEngine()
startLensEngine()
}
private fun showWaringDialog() {
val dialog = AlertDialog.Builder(this)
dialog.setMessage(R.string.Information_permission)
.setPositiveButton(R.string.go_authorization,
DialogInterface.OnClickListener { dialog, which ->
val intent = Intent(Settings.ACTION_APPLICATION_DETAILS_SETTINGS)
val uri = Uri.fromParts("package", applicationContext.packageName, null)
intent.data = uri
startActivity(intent)
})
.setNegativeButton("Cancel", DialogInterface.OnClickListener { dialog, which -> finish() })
.setOnCancelListener(dialogInterface)
dialog.setCancelable(false)
dialog.show()
}
var dialogInterface = DialogInterface.OnCancelListener { }
override fun onResume() {
super.onResume()
if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
createLensEngine()
startLensEngine()
} else {
checkPermission()
}
}
override fun onPause() {
super.onPause()
mPreview!!.stop()
}
override fun onDestroy() {
super.onDestroy()
if (mLensEngine != null) {
mLensEngine!!.release()
}
if (mAnalyzer != null) {
mAnalyzer!!.stop()
}
}
}
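The permission check above only requests the permissions that have not yet been granted. Stripped of the Android framework calls, the filtering step is plain list logic; the following is a hypothetical standalone sketch (the name missingPermissions is mine, not part of the SDK):

```kotlin
// Keep only the permissions that are not already granted, mirroring the
// permissionsList loop in checkPermission() above.
fun missingPermissions(all: List<String>, granted: Set<String>): List<String> =
    all.filter { it !in granted }

fun main() {
    val needed = listOf("android.permission.CAMERA", "android.permission.INTERNET")
    // Only CAMERA is still missing, so only CAMERA would be requested.
    println(missingPermissions(needed, setOf("android.permission.INTERNET")))
}
```

If the resulting list is empty, no permission dialog needs to be shown at all, which is why checkPermission() guards the requestPermissions() call.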
Create a LensEnginePreview.kt class, which contains the business logic for the camera lens engine view.
Kotlin:
class LensEnginePreview(private val mContext: Context, attrs: AttributeSet?) : ViewGroup(mContext, attrs) {
private val mSurfaceView: SurfaceView
private var mStartRequested = false
private var mSurfaceAvailable = false
private var mLensEngine: LensEngine? = null
private var mOverlay: GraphicOverlay? = null
@Throws(IOException::class)
fun start(lensEngine: LensEngine?) {
if (lensEngine == null) {
stop()
}
mLensEngine = lensEngine
if (mLensEngine != null) {
mStartRequested = true
startIfReady()
}
}
@Throws(IOException::class)
fun start(lensEngine: LensEngine?, overlay: GraphicOverlay?) {
mOverlay = overlay
this.start(lensEngine)
}
fun stop() {
if (mLensEngine != null) {
mLensEngine!!.close()
}
}
@Throws(IOException::class)
private fun startIfReady() {
if (mStartRequested && mSurfaceAvailable) {
mLensEngine!!.run(mSurfaceView.holder)
if (mOverlay != null) {
val size = mLensEngine!!.displayDimension
val min = Math.min(size.width, size.height)
val max = Math.max(size.width, size.height)
if (isPortraitMode) {
// Swap width and height sizes when in portrait, since it will be rotated by 90 degrees.
mOverlay!!.setCameraInfo(min, max, mLensEngine!!.lensType)
} else {
mOverlay!!.setCameraInfo(max, min, mLensEngine!!.lensType)
}
mOverlay!!.clear()
}
mStartRequested = false
}
}
private inner class SurfaceCallback : SurfaceHolder.Callback {
override fun surfaceCreated(surface: SurfaceHolder) {
mSurfaceAvailable = true
try {
startIfReady()
} catch (e: IOException) {
Log.e(TAG, "Could not start camera source.", e)
}
}
override fun surfaceDestroyed(surface: SurfaceHolder) {
mSurfaceAvailable = false
}
override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) {}
}
override fun onLayout(changed: Boolean, left: Int, top: Int, right: Int, bottom: Int) {
var previewWidth = 480
var previewHeight = 360
if (mLensEngine != null) {
val size = mLensEngine!!.displayDimension
if (size != null) {
previewWidth = size.width
previewHeight = size.height
}
}
// Swap width and height sizes when in portrait, since it will be rotated 90 degrees
if (isPortraitMode) {
val tmp = previewWidth
previewWidth = previewHeight
previewHeight = tmp
}
val viewWidth = right - left
val viewHeight = bottom - top
val childWidth: Int
val childHeight: Int
var childXOffset = 0
var childYOffset = 0
val widthRatio = viewWidth.toFloat() / previewWidth.toFloat()
val heightRatio = viewHeight.toFloat() / previewHeight.toFloat()
// To fill the view with the camera preview, while also preserving the correct aspect ratio,
// it is usually necessary to slightly oversize the child and to crop off portions along one
// of the dimensions. We scale up based on the dimension requiring the most correction, and
// compute a crop offset for the other dimension.
if (widthRatio > heightRatio) {
childWidth = viewWidth
childHeight = (previewHeight.toFloat() * widthRatio).toInt()
childYOffset = (childHeight - viewHeight) / 2
} else {
childWidth = (previewWidth.toFloat() * heightRatio).toInt()
childHeight = viewHeight
childXOffset = (childWidth - viewWidth) / 2
}
for (i in 0 until this.childCount) {
// One dimension will be cropped. We shift child over or up by this offset and adjust
// the size to maintain the proper aspect ratio.
getChildAt(i).layout(-1 * childXOffset, -1 * childYOffset,
childWidth - childXOffset,childHeight - childYOffset )
}
try {
startIfReady()
} catch (e: IOException) {
Log.e(TAG, "Could not start camera source.", e)
}
}
private val isPortraitMode: Boolean
get() {
val orientation = mContext.resources.configuration.orientation
if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
return false
}
if (orientation == Configuration.ORIENTATION_PORTRAIT) {
return true
}
Log.d(TAG, "isPortraitMode returning false by default")
return false
}
companion object {
private val TAG = LensEnginePreview::class.java.simpleName
}
init {
mSurfaceView = SurfaceView(mContext)
mSurfaceView.holder.addCallback(SurfaceCallback())
this.addView(mSurfaceView)
}
}
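The oversize-and-crop math in onLayout() can be exercised in isolation. The sketch below is a hypothetical standalone version of that computation (ChildLayout and computeChildLayout are illustrative names, not SDK types): the preview is scaled up along the dimension needing the most correction and centre-cropped along the other.

```kotlin
data class ChildLayout(val width: Int, val height: Int, val xOffset: Int, val yOffset: Int)

// Scale the preview so it fills the view while preserving aspect ratio,
// then compute the crop offset for the dimension that overflows.
fun computeChildLayout(viewWidth: Int, viewHeight: Int,
                       previewWidth: Int, previewHeight: Int): ChildLayout {
    val widthRatio = viewWidth.toFloat() / previewWidth
    val heightRatio = viewHeight.toFloat() / previewHeight
    return if (widthRatio > heightRatio) {
        val childHeight = (previewHeight * widthRatio).toInt()
        ChildLayout(viewWidth, childHeight, 0, (childHeight - viewHeight) / 2)
    } else {
        val childWidth = (previewWidth * heightRatio).toInt()
        ChildLayout(childWidth, viewHeight, (childWidth - viewWidth) / 2, 0)
    }
}

fun main() {
    // A 480x640 preview (640x480 swapped for portrait) inside a 1080x1920 view:
    // heightRatio (3.0) beats widthRatio (2.25), so the child is widened to 1440
    // and 180 px is cropped off each side.
    println(computeChildLayout(1080, 1920, 480, 640))
}
```

This matches the negative offsets passed to getChildAt(i).layout(...) above: the child is deliberately laid out larger than the view so the overflow is clipped rather than letterboxed.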
Create a HandKeypointGraphic.kt class, which contains the business logic for drawing the hand key points.
Kotlin:
class HandKeypointGraphic(overlay: GraphicOverlay?, private val handKeypoints: MutableList<MLHandKeypoints?>) : GraphicOverlay.Graphic(overlay!!) {
private val rectPaint: Paint
private val idPaintnew: Paint
companion object {
private const val BOX_STROKE_WIDTH = 5.0f
}
private fun translateRect(rect: Rect): Rect {
var left: Float = translateX(rect.left)
var right: Float = translateX(rect.right)
var bottom: Float = translateY(rect.bottom)
var top: Float = translateY(rect.top)
if (left > right) {
val size = left
left = right
right = size
}
if (bottom < top) {
val size = bottom
bottom = top
top = size
}
return Rect(left.toInt(), top.toInt(), right.toInt(), bottom.toInt())
}
init {
val selectedColor = Color.WHITE
idPaintnew = Paint()
idPaintnew.color = Color.GREEN
idPaintnew.textSize = 32f
rectPaint = Paint()
rectPaint.color = selectedColor
rectPaint.style = Paint.Style.STROKE
rectPaint.strokeWidth = BOX_STROKE_WIDTH
}
override fun draw(canvas: Canvas?) {
for (i in handKeypoints.indices) {
val mHandKeypoints = handKeypoints[i]
if (mHandKeypoints!!.getHandKeypoints() == null) {
continue
}
val rect = translateRect(handKeypoints[i]!!.getRect())
canvas!!.drawRect(rect, rectPaint)
for (handKeypoint in mHandKeypoints.getHandKeypoints()) {
if (!(Math.abs(handKeypoint.getPointX() - 0f) == 0f && Math.abs(handKeypoint.getPointY() - 0f) == 0f)) {
canvas!!.drawCircle(translateX(handKeypoint.getPointX().toInt()),
translateY(handKeypoint.getPointY().toInt()), 24f, idPaintnew)
}
}
}
}
}
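Because the front camera mirrors x coordinates, translateRect() can receive its edges in the wrong order and re-sorts them before building the Rect. A minimal, framework-free sketch of that normalization (SimpleRect and normalize are illustrative stand-ins, not SDK types):

```kotlin
data class SimpleRect(val left: Int, val top: Int, val right: Int, val bottom: Int)

// Re-order the translated edges so left <= right and top <= bottom,
// the same job the two swap blocks in translateRect() perform.
fun normalize(left: Float, top: Float, right: Float, bottom: Float): SimpleRect =
    SimpleRect(
        minOf(left, right).toInt(), minOf(top, bottom).toInt(),
        maxOf(left, right).toInt(), maxOf(top, bottom).toInt()
    )

fun main() {
    // After mirroring, left (300) ended up greater than right (100); normalization restores order.
    println(normalize(300f, 50f, 100f, 200f))
}
```

Without this step, Canvas.drawRect would silently draw nothing for a rect whose left edge exceeds its right edge.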
Create a GraphicOverlay.kt class, which contains the business logic for the graphic overlay.
Kotlin:
class GraphicOverlay(context: Context?, attrs: AttributeSet?) : View(context, attrs) {
private val mLock = Any()
private var mPreviewWidth = 0
private var mWidthScaleFactor = 1.0f
private var mPreviewHeight = 0
private var mHeightScaleFactor = 1.0f
private var mFacing = LensEngine.BACK_LENS
private val mGraphics: MutableSet<Graphic> = HashSet()
// Base class for a custom graphics object to be rendered within the graphic overlay. Subclass
// this and implement the [Graphic.draw] method to define the graphics element. Add instances to the overlay using [GraphicOverlay.add].
abstract class Graphic(private val mOverlay: GraphicOverlay) {
// Draw the graphic on the supplied canvas. Drawing should use the following methods to
// convert to view coordinates for the graphics that are drawn:
// 1. [Graphic.scaleX] and [Graphic.scaleY] adjust the size of the supplied value from the preview scale to the view scale.
// 2. [Graphic.translateX] and [Graphic.translateY] adjust the coordinate from the preview's coordinate system to the view coordinate system.
// @param canvas drawing canvas
abstract fun draw(canvas: Canvas?)
// Adjusts a horizontal value of the supplied value from the preview scale to the view scale.
fun scaleX(horizontal: Float): Float {
return horizontal * mOverlay.mWidthScaleFactor
}
// Adjusts a vertical value of the supplied value from the preview scale to the view scale.
fun scaleY(vertical: Float): Float {
return vertical * mOverlay.mHeightScaleFactor
}
// Adjusts the x coordinate from the preview's coordinate system to the view coordinate system.
fun translateX(x: Int): Float {
return if (mOverlay.mFacing == LensEngine.FRONT_LENS) {
mOverlay.width - scaleX(x.toFloat())
} else {
scaleX(x.toFloat())
}
}
// Adjusts the y coordinate from the preview's coordinate system to the view coordinate system.
fun translateY(y: Int): Float {
return scaleY(y.toFloat())
}
}
// Removes all graphics from the overlay.
fun clear() {
synchronized(mLock) { mGraphics.clear() }
postInvalidate()
}
// Adds a graphic to the overlay.
fun add(graphic: Graphic) {
synchronized(mLock) { mGraphics.add(graphic) }
postInvalidate()
}
// Sets the camera attributes for size and facing direction, which informs how to transform image coordinates later.
fun setCameraInfo(previewWidth: Int, previewHeight: Int, facing: Int) {
synchronized(mLock) {
mPreviewWidth = previewWidth
mPreviewHeight = previewHeight
mFacing = facing
}
postInvalidate()
}
// Draws the overlay with its associated graphic objects.
override fun onDraw(canvas: Canvas) {
super.onDraw(canvas)
synchronized(mLock) {
if (mPreviewWidth != 0 && mPreviewHeight != 0) {
mWidthScaleFactor = canvas.width.toFloat() / mPreviewWidth.toFloat()
mHeightScaleFactor = canvas.height.toFloat() / mPreviewHeight.toFloat()
}
for (graphic in mGraphics) {
graphic.draw(canvas)
}
}
}
}
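The scale-and-mirror mapping in Graphic.translateX() is easy to verify in isolation. Below is a hypothetical standalone version (the FRONT_LENS/BACK_LENS constants and the function name are stand-ins for the LensEngine values, chosen for illustration):

```kotlin
const val FRONT_LENS = 1 // stand-in for LensEngine.FRONT_LENS
const val BACK_LENS = 0  // stand-in for LensEngine.BACK_LENS

// Scale a preview x coordinate to view coordinates; mirror it for the front
// camera so the overlay lines up with what the user sees on screen.
fun previewToViewX(x: Float, widthScaleFactor: Float, overlayWidth: Float, facing: Int): Float =
    if (facing == FRONT_LENS) overlayWidth - x * widthScaleFactor
    else x * widthScaleFactor

fun main() {
    // A 640 px wide preview drawn on a 1280 px wide overlay -> scale factor 2.0.
    println(previewToViewX(100f, 2.0f, 1280f, BACK_LENS))  // plain scaling
    println(previewToViewX(100f, 2.0f, 1280f, FRONT_LENS)) // mirrored for the selfie camera
}
```

The y axis needs no mirroring, which is why translateY() is plain scaling in the class above.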
In StaticHandKeyPointAnalyse.kt we can find the business logic for static hand key point analysis.
Kotlin:
class StaticHandKeyPointAnalyse : AppCompatActivity() {
var analyzer: MLHandKeypointAnalyzer? = null
var bitmap: Bitmap? = null
var mutableBitmap: Bitmap? = null
var mlFrame: MLFrame? = null
var imageSelected: ImageView? = null
var picUri: Uri? = null
var pickButton: Button? = null
var analyzeButton:Button? = null
var permissions = arrayOf(Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.CAMERA)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_static_hand_key_point_analyse)
pickButton = findViewById(R.id.pick_img)
analyzeButton = findViewById(R.id.analyse_img)
imageSelected = findViewById(R.id.selected_img)
initialiseSettings()
pickButton!!.setOnClickListener(View.OnClickListener {
pickRequiredImage()
})
analyzeButton!!.setOnClickListener(View.OnClickListener {
asynchronouslyStaticHandkey()
})
checkRequiredPermission()
}
private fun checkRequiredPermission() {
if (PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.WRITE_EXTERNAL_STORAGE)
|| PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE)
|| PackageManager.PERMISSION_GRANTED != ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)) {
ActivityCompat.requestPermissions(this, permissions, 111)
}
}
private fun initialiseSettings() {
val setting = MLHandKeypointAnalyzerSetting.Factory()
// TYPE_ALL indicates that all results are returned.
// TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
// TYPE_RECT_ONLY indicates that only palm information is returned.
.setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
// Set the maximum number of hand regions that can be detected in an image.
// By default, a maximum of 10 hand regions can be detected.
.setMaxHandResults(1)
.create()
analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting)
}
private fun asynchronouslyStaticHandkey() {
val task = analyzer!!.asyncAnalyseFrame(mlFrame)
task.addOnSuccessListener { results ->
val canvas = Canvas(mutableBitmap!!)
val paint = Paint()
paint.color = Color.GREEN
paint.style = Paint.Style.FILL
val mlHandKeypoints = results[0]
for (mlHandKeypoint in mlHandKeypoints.getHandKeypoints()) {
canvas.drawCircle(mlHandKeypoint.pointX, mlHandKeypoint.pointY, 48f, paint)
}
imageSelected!!.setImageBitmap(mutableBitmap)
checkAnalyserForStop()
}.addOnFailureListener { // Detection failure.
checkAnalyserForStop()
}
}
private fun checkAnalyserForStop() {
if (analyzer != null) {
analyzer!!.stop()
}
}
private fun pickRequiredImage() {
val intent = Intent()
intent.type = "image/*"
intent.action = Intent.ACTION_PICK
startActivityForResult(Intent.createChooser(intent, "Select Picture"), 20)
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == 20 && resultCode == RESULT_OK && null != data) {
picUri = data.data
imageSelected!!.setImageURI(picUri)
imageSelected!!.invalidate()
val drawable = imageSelected!!.drawable as BitmapDrawable
bitmap = drawable.bitmap
mutableBitmap = bitmap!!.copy(Bitmap.Config.ARGB_8888, true)
mlFrame = MLFrame.fromBitmap(bitmap)
}
}
}
In the activity_main.xml we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<Button
android:id="@+id/btn_static"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Static Detection"
android:textAllCaps="false"
android:textSize="18sp"
app:layout_constraintBottom_toTopOf="@+id/btn_live"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintRight_toRightOf="parent"
android:textColor="@color/black"
style="@style/Widget.MaterialComponents.Button.MyTextButton"
app:layout_constraintTop_toTopOf="parent" />
<Button
android:id="@+id/btn_live"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Live Detection"
android:textAllCaps="false"
android:textSize="18sp"
android:layout_marginBottom="150dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintLeft_toLeftOf="parent"
android:textColor="@color/black"
style="@style/Widget.MaterialComponents.Button.MyTextButton"
app:layout_constraintRight_toRightOf="parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
In the activity_live_hand_key_point_analyse.xml we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".LiveHandKeyPointAnalyse">
<com.example.mlhandgesturesample.LensEnginePreview
android:id="@+id/hand_preview"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:ignore="MissingClass">
<com.example.mlhandgesturesample.GraphicOverlay
android:id="@+id/hand_overlay"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</com.example.mlhandgesturesample.LensEnginePreview>
<Button
android:id="@+id/handswitch"
android:layout_width="35dp"
android:layout_height="35dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintEnd_toEndOf="parent"
android:layout_marginBottom="35dp"
android:background="@drawable/front_back_switch"
android:textOff=""
android:textOn=""
tools:ignore="MissingConstraints" />
</androidx.constraintlayout.widget.ConstraintLayout>
In the activity_static_hand_key_point_analyse.xml we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".StaticHandKeyPointAnalyse">
<com.google.android.material.button.MaterialButton
android:id="@+id/pick_img"
android:text="Pick Image"
android:textSize="18sp"
android:textColor="@android:color/black"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textAllCaps="false"
app:layout_constraintTop_toTopOf="parent"
app:layout_constraintBottom_toTopOf="@id/selected_img"
app:layout_constraintLeft_toLeftOf="@id/selected_img"
app:layout_constraintRight_toRightOf="@id/selected_img"
style="@style/Widget.MaterialComponents.Button.MyTextButton"/>
<ImageView
android:visibility="visible"
android:id="@+id/selected_img"
android:layout_width="350dp"
android:layout_height="350dp"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintLeft_toLeftOf="parent"
app:layout_constraintRight_toRightOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<com.google.android.material.button.MaterialButton
android:id="@+id/analyse_img"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textColor="@android:color/black"
android:text="Analyse"
android:textSize="18sp"
android:textAllCaps="false"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintTop_toBottomOf="@id/selected_img"
app:layout_constraintLeft_toLeftOf="@id/selected_img"
app:layout_constraintRight_toRightOf="@id/selected_img"
style="@style/Widget.MaterialComponents.Button.MyTextButton"/>
</androidx.constraintlayout.widget.ConstraintLayout>
Demo
Tips and Tricks
1. Make sure you are already registered as a Huawei developer.
2. Set minSdkVersion to 21 or later; otherwise you will get an AndroidManifest merge issue.
3. Make sure you have added the agconnect-services.json file to the app folder.
4. Make sure you have added the SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
Conclusion
In this article, we have learned how to find hand key points using the Hand Gesture Recognition feature of Huawei ML Kit. The service's two capabilities, hand keypoint detection (21 keypoints, including fingertips, knuckles, and wrists) and hand gesture recognition (14 gestures, from thumbs-up/down to the number gestures 1 to 9), both work on static images as well as real-time camera streams.
I hope you found this article helpful. If you did, please leave likes and comments.
Reference
ML Kit – Hand Gesture Recognition
ML Kit – Training Video

Related

Map makes you feel easy in a strange city (Part 2)
This article is originally from the HUAWEI Developer Forum.
Forum link: https://forums.developer.huawei.com/forumPortal/en/home
Before we start learning about today's topic, I strongly recommend you go through my previous article, HMS Site Map (Part 1). It will help you have a clear picture.
Let’s Begin
In the previous article, we successfully got the details of the place we searched for using Site Kit. In this article, we are going to see how to show a map using Map Kit after fetching the latitude and longitude from those details. We will also see how to use the Site APIs and Map APIs with POSTMAN in the Part 3 article.
One Step at a time
First, we need to add the Map Kit dependency in the app-level Gradle file and sync the project.
Code:
implementation 'com.huawei.hms:maps:4.0.1.300'
After adding the dependency, we need to declare permissions in the AndroidManifest.xml file.
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="com.huawei.appmarket.service.commondata.permission.GET_COMMON_DATA"/>
Let’s Code
Main Activity class
Code:
private void showDetails(String item) {
String[] lines = item.split("\\n+");
autoCompleteTextView.setText(lines[0]);
mLat = lines[2]; // This is latitude
mLon = lines[3]; // This is longitude
title = lines[0]; // This is title or place name
String details = "<font color='red'>PLACE NAME : </font>" + lines[0] + "<br>"
+ "<font color='#CD5C5C'>COUNTRY : </font>" + lines[1] + "<br>"
+ "<font color='#8E44AD'>ADDRESS : </font>" + lines[4] + "<br>"
+ "<font color='#008000'>PHONE : </font>" + lines[5];
txtDetails.setText(Html.fromHtml(details, Html.FROM_HTML_MODE_COMPACT));
}
private void showMap(){
Intent intent = new Intent(MainActivity.this, MapActivity.class);
intent.putExtra("lat",mLat); // Here we are passing latitude, longitude
intent.putExtra("lon",mLon); // and title from the MainActivity class to
intent.putExtra("title",title); // the MapActivity class.
startActivity(intent);
}v
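To see how the parsing in showDetails behaves, here is a minimal standalone sketch. It assumes the selected item arrives as a newline-separated string in the order name, country, latitude, longitude, address, phone; the sample values below are made up for illustration.

```java
public class SplitDemo {
    public static void main(String[] args) {
        // Hypothetical item string; in the app it comes from the Site Kit result list.
        String item = "Eiffel Tower\nFrance\n48.8584\n2.2945\nChamp de Mars, Paris\n+33 892 70 12 39";
        // "\\n+" is the regex \n+, so runs of consecutive newlines count as one separator.
        String[] lines = item.split("\\n+");
        System.out.println(lines[0]); // place name
        System.out.println(lines[2]); // latitude
        System.out.println(lines[3]); // longitude
    }
}
```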
Main Code
1) First we need to decide whether we will show the map in a view or a fragment, because there are two ways to display a map.
a) Fragment way
In the fragment approach, we add a MapFragment to the layout file of an activity.
Code:
<fragment xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:map="http://schemas.android.com/apk/res-auto"
android:id="@+id/mapfragment_mapfragmentdemo"
class="com.huawei.hms.maps.MapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"
map:cameraTargetLat="48.893478"
map:cameraTargetLng="2.334595"
map:cameraZoom="10" />
b) MapView way
Here we add MapView in the layout file of an activity.
Code:
<com.huawei.hms.maps.MapView
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:map="http://schemas.android.com/apk/res-auto"
android:id="@+id/mapView"
android:layout_width="match_parent"
android:layout_height="match_parent"
map:mapType="normal"
map:uiCompass="true"
map:uiZoomControls="true"
map:cameraTargetLat="51"
map:cameraTargetLng="10"
map:cameraZoom="8.5"/>
2) Here we are going with MapView.
3) For both the fragment and the view, we need to implement the OnMapReadyCallback interface in our MapActivity to use the map. After implementing this interface, we must override the onMapReady method, which is called when the map instance is ready.
Code:
public void onMapReady(HuaweiMap map) {
Log.d(TAG, "onMapReady: ");
hMap = map;
}
4) The only difference we will see between MapFragment and MapView is how the map is instantiated.
a) MapFragment
Code:
private MapFragment mMapFragment;
mMapFragment = (MapFragment) getFragmentManager()
.findFragmentById(R.id.mapfragment_mapfragmentdemo);
mMapFragment.getMapAsync(this);
b) MapView
Code:
private MapView mMapView;
mMapView = findViewById(R.id.mapView); // must match the MapView id in the layout above
Bundle mapViewBundle = null;
if (savedInstanceState != null) {
mapViewBundle = savedInstanceState.getBundle("MapViewBundleKey");
}
mMapView.onCreate(mapViewBundle);
mMapView.getMapAsync(this);
5) Permissions we need to check
Code:
// Declare this as a field at the top of the activity class …
private static final String[] RUNTIME_PERMISSIONS = {
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE,
Manifest.permission.ACCESS_COARSE_LOCATION,
Manifest.permission.ACCESS_FINE_LOCATION,
Manifest.permission.INTERNET
};
// Place this in the onCreate() method …
if (!hasPermissions(this, RUNTIME_PERMISSIONS)) {
ActivityCompat.requestPermissions(this, RUNTIME_PERMISSIONS, REQUEST_CODE);
}
// Use this method to check Permission …
private static boolean hasPermissions(Context context, String... permissions) {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && permissions != null) {
for (String permission : permissions) {
if (ActivityCompat.checkSelfPermission(context, permission)
!= PackageManager.PERMISSION_GRANTED) {
return false;
}
}
}
return true;
}
MapActivity Class
Code:
public class MapActivity extends AppCompatActivity implements OnMapReadyCallback {
private static final String TAG = "MapActivity";
private MapView mMapView;
private HuaweiMap hmap;
private Marker mMarker;
private static final String[] RUNTIME_PERMISSIONS = {
Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE,
Manifest.permission.ACCESS_COARSE_LOCATION,
Manifest.permission.ACCESS_FINE_LOCATION,
Manifest.permission.INTERNET
};
private static final String MAPVIEW_BUNDLE_KEY = "MapViewBundleKey";
private static final int REQUEST_CODE = 100;
private String mLatitude, mLongitude,mTitle;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_map);
mLatitude = getIntent().getExtras().getString("lat");
mLongitude = getIntent().getExtras().getString("lon");
mTitle = getIntent().getExtras().getString("title");
if (!hasPermissions(this, RUNTIME_PERMISSIONS)) {
ActivityCompat.requestPermissions(this, RUNTIME_PERMISSIONS, REQUEST_CODE);
}
mMapView = findViewById(R.id.mapView);
Bundle mapViewBundle = null;
if (savedInstanceState != null) {
mapViewBundle = savedInstanceState.getBundle(MAPVIEW_BUNDLE_KEY);
}
mMapView.onCreate(mapViewBundle);
mMapView.getMapAsync(this);
}
@Override
protected void onStart() {
super.onStart();
mMapView.onStart();
}
@Override
protected void onStop() {
super.onStop();
mMapView.onStop();
}
@Override
protected void onDestroy() {
super.onDestroy();
mMapView.onDestroy();
}
@Override
protected void onPause() {
mMapView.onPause();
super.onPause();
}
@Override
protected void onResume() {
super.onResume();
mMapView.onResume();
}
@Override
public void onLowMemory() {
super.onLowMemory();
mMapView.onLowMemory();
}
@Override
public void onMapReady(HuaweiMap huaweiMap) {
Log.d(TAG, "onMapReady: ");
hmap = huaweiMap;
hmap.setMyLocationEnabled(true);
hmap.setMapType(HuaweiMap.MAP_TYPE_NORMAL);
hmap.setMaxZoomPreference(15);
hmap.setMinZoomPreference(5);
CameraPosition build = new CameraPosition.Builder()
.target(new LatLng(Double.parseDouble(mLatitude), Double.parseDouble(mLongitude)))
.build();
CameraUpdate cameraUpdate = CameraUpdateFactory
.newCameraPosition(build);
hmap.animateCamera(cameraUpdate);
MarkerOptions options = new MarkerOptions()
.position(new LatLng(Double.parseDouble(mLatitude),
Double.parseDouble(mLongitude)))
.title(mTitle);
mMarker = hmap.addMarker(options);
mMarker.showInfoWindow();
hmap.setOnMarkerClickListener(new HuaweiMap.OnMarkerClickListener() {
@Override
public boolean onMarkerClick(Marker marker) {
Toast.makeText(getApplicationContext(), "onMarkerClick:" +
marker.getTitle(), Toast.LENGTH_SHORT).show();
return false;
}
});
}
private static boolean hasPermissions(Context context, String... permissions) {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M && permissions != null) {
for (String permission : permissions) {
if (ActivityCompat.checkSelfPermission(context, permission)
!= PackageManager.PERMISSION_GRANTED) {
return false;
}
}
}
return true;
}
}
Core Functionality of Map
1) Types of Map
There are five map types:
· HuaweiMap.MAP_TYPE_NORMAL
· HuaweiMap.MAP_TYPE_NONE
· HuaweiMap.MAP_TYPE_SATELLITE
· HuaweiMap.MAP_TYPE_HYBRID
· HuaweiMap.MAP_TYPE_TERRAIN
However, we can currently use only MAP_TYPE_NORMAL and MAP_TYPE_NONE. The normal type is a standard map that shows roads, artificial structures, and natural features such as rivers. The none type is an empty map without any data.
The remaining map types are still in the development phase.
2) Camera Movement
Huawei maps are moved by simulating camera movement. You can control the visible region of a map by changing the camera's position. To change the camera's position, create different types of CameraUpdate objects using the CameraUpdateFactory class, and use these objects to move the camera.
Code:
CameraPosition build = new CameraPosition.Builder().target(new
LatLng(Double.parseDouble(mLatitude),
Double.parseDouble(mLongitude))).build();
CameraUpdate cameraUpdate = CameraUpdateFactory
.newCameraPosition(build);
hmap.animateCamera(cameraUpdate);
In the above code, we move the map camera in animation mode. When moving the map camera in animation mode, you can set the animation duration and a callback to be invoked when the animation stops. By default, the animation duration is 250 ms.
3) My Location in Map
We can show our location on the map by simply enabling the my-location layer. We can also display the my-location button on the map.
Code:
hmap.setMyLocationEnabled(true);
hmap.getUiSettings().setMyLocationButtonEnabled(true);
4) Show Marker in Map
We can add markers to a map to identify locations such as stores and buildings, and provide additional information with information windows.
Code:
MarkerOptions options = new MarkerOptions()
.position(new LatLng(Double.parseDouble(mLatitude),
Double.parseDouble(mLongitude)))
.title(mTitle); // Adding the title here …
mMarker = hmap.addMarker(options);
mMarker.showInfoWindow();
We can customize our marker according to our need using BitmapDescriptor object.
Code:
Bitmap bitmap = ResourceBitmapDescriptor.drawableToBitmap(this,
ContextCompat.getDrawable(this, R.drawable.badge_ph));
BitmapDescriptor bitmapDescriptor = BitmapDescriptorFactory.fromBitmap(bitmap);
mMarker.setIcon(bitmapDescriptor);
We can add a title to the marker as shown in the above code. We can also make the marker clickable as shown below.
Code:
hmap.setOnMarkerClickListener(new HuaweiMap.OnMarkerClickListener() {
@Override
public boolean onMarkerClick(Marker marker) {
Toast.makeText(getApplicationContext(), "onMarkerClick:" +
marker.getTitle(), Toast.LENGTH_SHORT).show();
return false;
}
});
5) Shapes on the map
a) Polyline
b) Polygon
c) Circle
We can use a polyline when we need to show a route from one place to another. We can also combine the Directions API with a polyline to show walking, bicycling, and driving routes, and to calculate the route distance.
If we need to show a radius, for example every location within 500 meters of a point, we can draw a circle shape on the map.
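To reason about what such a radius covers, the distance check behind a circle can be sketched in plain Java with the haversine formula. This is independent of the Map Kit APIs, and the coordinates below are made-up sample points.

```java
public class GeoUtils {
    static final double EARTH_RADIUS_M = 6_371_000; // mean Earth radius in meters

    // Great-circle distance in meters between two lat/lng pairs (haversine formula).
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    static boolean isWithinRadius(double lat1, double lon1,
                                  double lat2, double lon2, double radiusMeters) {
        return distanceMeters(lat1, lon1, lat2, lon2) <= radiusMeters;
    }

    public static void main(String[] args) {
        // Two sample points a few hundred meters apart in Paris.
        System.out.println(isWithinRadius(48.8584, 2.2945, 48.8606, 2.2976, 500));
    }
}
```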
The Result
If you have any questions about this process, you can look for answers on the HUAWEI Developer Forum.

Object Detection & Tracking with HMS ML Kit (Video Mode)

In this article, I will first tell you about object detection with HMS ML Kit, and then we are going to build an Android application that uses HMS ML Kit to detect and track objects in a camera stream. If you haven’t read my previous article on detecting objects in static images yet, here it is. You can also find introductory information about artificial intelligence, machine learning, and Huawei ML Kit’s capabilities in that article.
The object detection and tracking service can detect and track multiple objects in an image. The detected objects can be located and classified in real time. It is also an ideal choice for filtering out unwanted objects in an image. By the way, Huawei ML Kit provides on-device object detection capabilities, hence this feature is completely free.
Let’s not waste our precious time and start building our sample project step by step!
1. If you haven’t registered as a Huawei Developer yet, here is the link.
2. Create a Project on AppGalleryConnect. You can follow the steps shown here.
3. In HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs. Make sure ML Kit is activated.
4. Integrate ML Kit SDK into your project. Your app level build.gradle will look like this:
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
android {
compileSdkVersion 29
buildToolsVersion "29.0.3"
defaultConfig {
applicationId "com.demo.objectdetection"
minSdkVersion 21
targetSdkVersion 29
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
kotlinOptions { jvmTarget = "1.8" }
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
implementation 'androidx.appcompat:appcompat:1.1.0'
implementation 'androidx.core:core-ktx:1.3.0'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.1'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
//HMS ML Kit
implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
}
and your project-level build.gradle is like this:
Code:
buildscript {
ext.kotlin_version = '1.3.72'
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
classpath 'com.android.tools.build:gradle:3.6.3'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
5. Create the layout first. There will be two surfaceViews. The first surfaceView is to display our camera stream, the second surfaceView is to draw our canvas. We will draw rectangles around detected objects and write their respective types on our canvas and show this canvas on our second surfaceView. Here is the sample:
Code:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<SurfaceView
android:id="@+id/surface_view_camera"
android:layout_width="match_parent"
android:layout_height="match_parent" />
<SurfaceView
android:id="@+id/surface_view_overlay"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
6. By the way, make sure you set your activity style as “Theme.AppCompat.Light.NoActionBar” or similar in res/styles.xml to hide the action bar.
7.1. We have two important classes that help us detect objects in HMS ML Kit: MLObjectAnalyzer and LensEngine. MLObjectAnalyzer detects object information (MLObject) in an image, and we can customize it using MLObjectAnalyzerSetting. Here is our createAnalyzer method:
Code:
private fun createAnalyzer(): MLObjectAnalyzer {
val analyzerSetting = MLObjectAnalyzerSetting.Factory()
.setAnalyzerType(MLObjectAnalyzerSetting.TYPE_VIDEO)
.allowMultiResults()
.allowClassification()
.create()
return MLAnalyzerFactory.getInstance().getLocalObjectAnalyzer(analyzerSetting)
}
7.2. The other important class we are using today is LensEngine. LensEngine is responsible for camera initialization, frame obtaining, and logic control. Here is our createLensEngine method:
Code:
private fun createLensEngine(orientation: Int): LensEngine {
val lensEngineCreator = LensEngine.Creator(applicationContext, mAnalyzer)
.setLensType(LensEngine.BACK_LENS)
.applyFps(10F)
.enableAutomaticFocus(true)
return when(orientation) {
Configuration.ORIENTATION_PORTRAIT ->
lensEngineCreator.applyDisplayDimension(getDisplayMetrics().heightPixels, getDisplayMetrics().widthPixels).create()
else ->
lensEngineCreator.applyDisplayDimension(getDisplayMetrics().widthPixels, getDisplayMetrics().heightPixels).create()
}
}
8. Well, LensEngine handles the camera frames and MLObjectAnalyzer detects MLObjects in those frames. Now we need to create our ObjectAnalyzerTransactor class, which implements the MLAnalyzer.MLTransactor interface. The detected MLObjects are delivered to the transactResult method of this class. I will share our ObjectAnalyzerTransactor class here with an additional draw method for drawing rectangles and some text around the detected objects.
Code:
package com.demo.objectdetection
import android.graphics.Color
import android.graphics.Paint
import android.graphics.PorterDuff
import android.util.Log
import android.util.SparseArray
import android.view.SurfaceHolder
import androidx.core.util.forEach
import androidx.core.util.isNotEmpty
import androidx.core.util.valueIterator
import com.huawei.hms.mlsdk.common.MLAnalyzer
import com.huawei.hms.mlsdk.objects.MLObject
class ObjectAnalyzerTransactor : MLAnalyzer.MLTransactor<MLObject> {
companion object {
private const val TAG = "ML_ObAnalyzerTransactor"
}
private var mSurfaceHolderOverlay: SurfaceHolder? = null
fun setSurfaceHolderOverlay(surfaceHolder: SurfaceHolder) {
mSurfaceHolderOverlay = surfaceHolder
}
override fun transactResult(results: MLAnalyzer.Result<MLObject>?) {
val items = results?.analyseList
items?.forEach { key, value ->
Log.d(TAG, "transactResult -> " +
"Border: ${value.border} " + //Rectangle around this object
"Type Possibility: ${value.typePossibility} " + //Possibility between 0-1
"Tracing Identity: ${value.tracingIdentity} " + //Tracing number of this object
"Type Identity: ${value.typeIdentity}") //Furniture, Plant, Food etc.
}
items?.also {
draw(it)
}
}
private fun draw(items: SparseArray<MLObject>) {
val canvas = mSurfaceHolderOverlay?.lockCanvas()
if (canvas != null) {
//Clear canvas first
canvas.drawColor(0, PorterDuff.Mode.CLEAR)
for (item in items.valueIterator()) {
val type = getItemType(item)
//Draw a rectangle around detected object.
val rectangle = item.border
Paint().also {
it.color = Color.YELLOW
it.style = Paint.Style.STROKE
it.strokeWidth = 8F
canvas.drawRect(rectangle, it)
}
//Draw text on the upper left corner of the detected object, writing its type.
Paint().also {
it.color = Color.BLACK
it.style = Paint.Style.FILL
it.textSize = 24F
canvas.drawText(type, (rectangle.left).toFloat(), (rectangle.top).toFloat(), it)
}
}
}
mSurfaceHolderOverlay?.unlockCanvasAndPost(canvas)
}
private fun getItemType(item: MLObject) = when(item.typeIdentity) {
MLObject.TYPE_OTHER -> "Other"
MLObject.TYPE_FACE -> "Face"
MLObject.TYPE_FOOD -> "Food"
MLObject.TYPE_FURNITURE -> "Furniture"
MLObject.TYPE_PLACE -> "Place"
MLObject.TYPE_PLANT -> "Plant"
MLObject.TYPE_GOODS -> "Goods"
else -> "No match"
}
override fun destroy() {
Log.d(TAG, "destroy")
}
}
9. Our lensEngine needs a surfaceHolder to run on. Therefore, we will start it when our surfaceHolder is ready. Here is our callback:
Code:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
mLensEngine.close()
init()
mLensEngine.run(holder)
}
override fun surfaceDestroyed(holder: SurfaceHolder?) {
mLensEngine.release()
}
override fun surfaceCreated(holder: SurfaceHolder?) {
mLensEngine.run(holder)
}
}
10. We require the CAMERA and WRITE_EXTERNAL_STORAGE permissions. Make sure you add them to your AndroidManifest.xml file and request them from the user at runtime. For the sake of simplicity, we do it as shown below:
Code:
class MainActivity : AppCompatActivity() {
companion object {
private const val PERMISSION_REQUEST_CODE = 8
private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
}
override fun onCreate(savedInstanceState: Bundle?) {
if (hasPermissions(requiredPermissions))
init()
else
ActivityCompat.requestPermissions(this, requiredPermissions, PERMISSION_REQUEST_CODE)
}
private fun hasPermissions(permissions: Array<String>) = permissions.all {
ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (requestCode == PERMISSION_REQUEST_CODE && hasPermissions(requiredPermissions))
init()
}
}
11. Let’s bring all the pieces together. Here is our MainActivity.
Code:
package com.demo.objectdetection
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.content.res.Configuration
import android.graphics.PixelFormat
import android.os.Bundle
import android.util.DisplayMetrics
import android.view.SurfaceHolder
import android.view.WindowManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.LensEngine
import com.huawei.hms.mlsdk.objects.MLObjectAnalyzer
import com.huawei.hms.mlsdk.objects.MLObjectAnalyzerSetting
import kotlinx.android.synthetic.main.activity_main.*
class MainActivity : AppCompatActivity() {
companion object {
private const val TAG = "ML_MainActivity"
private const val PERMISSION_REQUEST_CODE = 8
private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
}
private lateinit var mAnalyzer: MLObjectAnalyzer
private lateinit var mLensEngine: LensEngine
private lateinit var mSurfaceHolderCamera: SurfaceHolder
private lateinit var mSurfaceHolderOverlay: SurfaceHolder
private lateinit var mObjectAnalyzerTransactor: ObjectAnalyzerTransactor
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
if (hasPermissions(requiredPermissions))
init()
else
ActivityCompat.requestPermissions(this, requiredPermissions, PERMISSION_REQUEST_CODE)
}
private fun init() {
mAnalyzer = createAnalyzer()
mLensEngine = createLensEngine(resources.configuration.orientation)
mSurfaceHolderCamera = surface_view_camera.holder
mSurfaceHolderOverlay = surface_view_overlay.holder
mSurfaceHolderOverlay.setFormat(PixelFormat.TRANSPARENT)
mSurfaceHolderCamera.addCallback(surfaceHolderCallback)
mObjectAnalyzerTransactor = ObjectAnalyzerTransactor()
mObjectAnalyzerTransactor.setSurfaceHolderOverlay(mSurfaceHolderOverlay)
mAnalyzer.setTransactor(mObjectAnalyzerTransactor)
}
private fun createAnalyzer(): MLObjectAnalyzer {
val analyzerSetting = MLObjectAnalyzerSetting.Factory()
.setAnalyzerType(MLObjectAnalyzerSetting.TYPE_VIDEO)
.allowMultiResults()
.allowClassification()
.create()
return MLAnalyzerFactory.getInstance().getLocalObjectAnalyzer(analyzerSetting)
}
private fun createLensEngine(orientation: Int): LensEngine {
val lensEngineCreator = LensEngine.Creator(applicationContext, mAnalyzer)
.setLensType(LensEngine.BACK_LENS)
.applyFps(10F)
.enableAutomaticFocus(true)
return when(orientation) {
Configuration.ORIENTATION_PORTRAIT ->
lensEngineCreator.applyDisplayDimension(getDisplayMetrics().heightPixels, getDisplayMetrics().widthPixels).create()
else ->
lensEngineCreator.applyDisplayDimension(getDisplayMetrics().widthPixels, getDisplayMetrics().heightPixels).create()
}
}
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
mLensEngine.close()
init()
mLensEngine.run(holder)
}
override fun surfaceDestroyed(holder: SurfaceHolder?) {
mLensEngine.release()
}
override fun surfaceCreated(holder: SurfaceHolder?) {
mLensEngine.run(holder)
}
}
override fun onDestroy() {
super.onDestroy()
//Release resources
mAnalyzer.stop()
mLensEngine.release()
}
private fun getDisplayMetrics() = DisplayMetrics().let {
(getSystemService(Context.WINDOW_SERVICE) as WindowManager).defaultDisplay.getMetrics(it)
it
}
private fun hasPermissions(permissions: Array<String>) = permissions.all {
ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (requestCode == PERMISSION_REQUEST_CODE && hasPermissions(requiredPermissions))
init()
}
}
12. In summary, we used LensEngine to handle the camera frames for us and displayed them on our first surfaceView. Then MLObjectAnalyzer analyzed these frames, and the detected objects arrived in the transactResult method of our ObjectAnalyzerTransactor class. In this method we iterated through all the detected objects and drew them on our second surfaceView, which we used as an overlay. Here is the output:

Create and Monitor Geofences with HuaweiMap in Xamarin.Android Application

A geofence is a virtual perimeter set around a real geographic area. By combining the user’s position with the geofence perimeter, it is possible to know whether the user is inside the geofence, or is entering or exiting the area.
In this article, we will discuss how to use the geofence to notify the user when the device enters/exits an area using the HMS Location Kit in a Xamarin.Android application. We will also add and customize HuaweiMap, which includes drawing circles, adding pointers, and using nearby searches in search places. We are going to learn how to use the below features together:
Geofence
Reverse Geocode
HuaweiMap
Nearby Search
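The enter/exit notion a geofence service implements for you can be illustrated with a small state tracker. This is a sketch in plain Java, independent of the HMS APIs; it uses a flat-earth distance approximation that is adequate for radii of a few kilometers, and all coordinates are made up.

```java
public class GeofenceTracker {
    public enum Transition { ENTER, EXIT, NONE }

    private final double fenceLat, fenceLon, radiusM;
    private Boolean wasInside = null; // null until the first position update

    public GeofenceTracker(double fenceLat, double fenceLon, double radiusM) {
        this.fenceLat = fenceLat;
        this.fenceLon = fenceLon;
        this.radiusM = radiusM;
    }

    // Planar approximation of the distance in meters; fine for small radii.
    private static double approxMeters(double lat1, double lon1, double lat2, double lon2) {
        double mPerDegLat = 111_320; // meters per degree of latitude
        double dLat = (lat2 - lat1) * mPerDegLat;
        double dLon = (lon2 - lon1) * mPerDegLat * Math.cos(Math.toRadians((lat1 + lat2) / 2));
        return Math.hypot(dLat, dLon);
    }

    // Feed in each new position; a transition is reported only when the state flips.
    public Transition update(double lat, double lon) {
        boolean inside = approxMeters(fenceLat, fenceLon, lat, lon) <= radiusM;
        Transition t = Transition.NONE;
        if (wasInside == null) {
            if (inside) t = Transition.ENTER; // first fix is already inside the fence
        } else if (inside != wasInside) {
            t = inside ? Transition.ENTER : Transition.EXIT;
        }
        wasInside = inside;
        return t;
    }

    public static void main(String[] args) {
        GeofenceTracker fence = new GeofenceTracker(0, 0, 1000); // 1 km fence at (0, 0)
        System.out.println(fence.update(0, 0.005)); // ~557 m away -> ENTER
        System.out.println(fence.update(0, 0.006)); // still inside -> NONE
        System.out.println(fence.update(0, 0.020)); // ~2.2 km away -> EXIT
    }
}
```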
First of all, you need to be a registered Huawei Mobile Developer and create an application in the Huawei App Console in order to use the HMS Map, Location, and Site Kits. You can follow the steps below to complete the configuration required for development.
Configuring App Information in AppGallery Connect --> shorturl.at/rL347
Creating Xamarin Android Binding Libraries --> shorturl.at/rBP46
Integrating the HMS Map Kit Libraries for Xamarin --> shorturl.at/vAHPX
Integrating the HMS Location Kit Libraries for Xamarin --> shorturl.at/dCX07
Integrating the HMS Site Kit Libraries for Xamarin --> shorturl.at/bmDX6
Integrating the HMS Core SDK --> shorturl.at/qBISV
Setting Package in Xamarin --> shorturl.at/brCU1
When we create our Xamarin.Android application following the above steps, we need to make sure that the package name is the same as the one we entered in the console. Also, don’t forget to enable the kits in the console.
Manifest & Permissions
We have to update the application’s manifest file by declaring the permissions we need, as shown below.
Code:
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
Also, add a meta-data element inside the application tag to embed your app id; it is required for the app to authenticate on Huawei’s cloud server. You can find this id in the agconnect-services.json file.
Code:
<meta-data android:name="com.huawei.hms.client.appid" android:value="appid=YOUR_APP_ID" />
Request location permission
Code:
private void RequestPermissions()
{
if (ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessCoarseLocation) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.AccessFineLocation) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.WriteExternalStorage) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.ReadExternalStorage) != (int)Permission.Granted ||
ContextCompat.CheckSelfPermission(this, Manifest.Permission.Internet) != (int)Permission.Granted)
{
ActivityCompat.RequestPermissions(this,
new System.String[]
{
Manifest.Permission.AccessCoarseLocation,
Manifest.Permission.AccessFineLocation,
Manifest.Permission.WriteExternalStorage,
Manifest.Permission.ReadExternalStorage,
Manifest.Permission.Internet
},
100);
}
else
GetCurrentPosition();
}
Add a Map
Add a <fragment> element to your activity’s layout file, activity_main.xml. This element defines a MapFragment to act as a container for the map and to provide access to the HuaweiMap object.
Code:
<fragment
android:id="@+id/mapfragment"
class="com.huawei.hms.maps.MapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"/>
Implement the IOnMapReadyCallback interface to MainActivity and override OnMapReady method which is triggered when the map is ready to use. Then use GetMapAsync to register for the map callback.
Later, we will also request the address corresponding to a given latitude/longitude (reverse geocoding) and specify that the output must be in JSON format.
Code:
public class MainActivity : AppCompatActivity, IOnMapReadyCallback
{
...
public void OnMapReady(HuaweiMap map)
{
hMap = map;
hMap.UiSettings.MyLocationButtonEnabled = true;
hMap.UiSettings.CompassEnabled = true;
hMap.UiSettings.ZoomControlsEnabled = true;
hMap.UiSettings.ZoomGesturesEnabled = true;
hMap.MyLocationEnabled = true;
hMap.MapClick += HMap_MapClick;
if (selectedCoordinates == null)
selectedCoordinates = new GeofenceModel { LatLng = CurrentPosition, Radius = 30 };
}
}
As you can see above, with the UiSettings property of the HuaweiMap object we enable the my-location button, the compass, and so on. Now, when the app launches, we directly get the current location and move the camera to it. In order to do that, we use the FusedLocationProviderClient that we instantiated and call the LastLocation API.
The LastLocation API returns a Task object whose result we can check by implementing the relevant listeners for success and failure. In the success listener, we move the map’s camera position to the last known position.
Code:
private void GetCurrentPosition()
{
var locationTask = fusedLocationProviderClient.LastLocation;
locationTask.AddOnSuccessListener(new LastLocationSuccess(this));
locationTask.AddOnFailureListener(new LastLocationFail(this));
}
...
public class LastLocationSuccess : Java.Lang.Object, IOnSuccessListener
{
...
public void OnSuccess(Java.Lang.Object location)
{
Toast.MakeText(mainActivity, "LastLocation request successful", ToastLength.Long).Show();
if (location != null)
{
MainActivity.CurrentPosition = new LatLng((location as Location).Latitude, (location as Location).Longitude);
mainActivity.RepositionMapCamera((location as Location).Latitude, (location as Location).Longitude);
}
}
}
To change the position of the camera, we must specify where we want to move the camera, using a CameraUpdate. The Map Kit allows us to create many different types of CameraUpdate using CameraUpdateFactory.
There are several methods for changing the camera position. In short, these are:
NewLatLng: Change camera’s latitude and longitude, while keeping other properties
NewLatLngZoom: Changes the camera’s latitude, longitude, and zoom, while keeping other properties
NewCameraPosition: Full flexibility in changing the camera position
We are going to use NewCameraPosition. A CameraPosition can be obtained with a CameraPosition.Builder, and then we can set the target, bearing, tilt, and zoom properties.
Code:
public void RepositionMapCamera(double lat, double lng)
{
var cameraPosition = new CameraPosition.Builder();
cameraPosition.Target(new LatLng(lat, lng));
cameraPosition.Zoom(10); // zoom levels range roughly from 3 to 20; out-of-range values are clamped
cameraPosition.Bearing(45);
cameraPosition.Tilt(20);
CameraUpdate cameraUpdate = CameraUpdateFactory.NewCameraPosition(cameraPosition.Build());
hMap.MoveCamera(cameraUpdate);
}
Creating Geofence
In this part, we will choose the location where we want to set geofence in two different ways. The first is to select the location by clicking on the map, and the second is to search for nearby places by keyword and select one after placing them on the map with the marker.
Set the geofence location by clicking on the map
It is always easier to select a location by seeing it. In this section, we will set a geofence around the clicked point when the map is clicked. We attached the Click event to our map in the OnMapReady method. In this event handler, we will add a marker at the clicked point and draw a circle around it.
Also, we will use the SeekBar at the bottom of the page to adjust the circle radius. We set the selectedCoordinates variable when adding the marker. Let’s create the following methods to handle the click and add the marker:
Code:
private void HMap_MapClick(object sender, HuaweiMap.MapClickEventArgs e)
{
selectedCoordinates.LatLng = e.P0;
if (circle != null)
{
circle.Remove();
circle = null;
}
AddMarkerOnMap();
}
void AddMarkerOnMap()
{
if (marker != null) marker.Remove();
var markerOption = new MarkerOptions()
.InvokeTitle("You are here now")
.InvokePosition(selectedCoordinates.LatLng);
hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
marker = hMap.AddMarker(markerOption);
bool isInfoWindowShown = marker.IsInfoWindowShown;
if (isInfoWindowShown)
marker.HideInfoWindow();
else
marker.ShowInfoWindow();
}
Add a MapInfoWindowAdapter class to the project for rendering the custom info window, and implement the HuaweiMap.IInfoWindowAdapter interface on it. Whenever an information window needs to be displayed for a marker, the methods provided by this adapter are called.
Now let's create a custom info window layout and name it map_info_view.xml
Code:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="match_parent">
<Button
android:text="Add geofence"
android:width="100dp"
style="@style/Widget.AppCompat.Button.Colored"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/btnInfoWindow" />
</LinearLayout>
Customize it and return it in the GetInfoWindow() method. The full code of the adapter is below:
Code:
internal class MapInfoWindowAdapter : Java.Lang.Object, HuaweiMap.IInfoWindowAdapter
{
private MainActivity activity;
private GeofenceModel selectedCoordinates;
private View addressLayout;
public MapInfoWindowAdapter(MainActivity currentActivity){activity = currentActivity;}
public View GetInfoContents(Marker marker){return null;}
public View GetInfoWindow(Marker marker)
{
if (marker == null)
return null;
selectedCoordinates = new GeofenceModel { LatLng = new LatLng(marker.Position.Latitude, marker.Position.Longitude) };
View mapInfoView = activity.LayoutInflater.Inflate(Resource.Layout.map_info_view, null);
var radiusBar = activity.FindViewById<SeekBar>(Resource.Id.radiusBar);
if (radiusBar.Visibility == Android.Views.ViewStates.Invisible)
radiusBar.Visibility = Android.Views.ViewStates.Visible;
radiusBar.SetProgress(30, true);
activity.DrawCircleOnMap(selectedCoordinates);
Button button = mapInfoView.FindViewById<Button>(Resource.Id.btnInfoWindow);
button.Click += btnInfoWindow_ClickAsync;
return mapInfoView;
}
}
Now we create a method to draw a circle around the marker that represents the geofence radius. Create a new DrawCircleOnMap method in MainActivity for this. To construct a circle, we must specify its Center and Radius. We also set other properties such as StrokeColor.
Code:
public void DrawCircleOnMap(GeofenceModel geoModel)
{
if (circle != null)
{
circle.Remove();
circle = null;
}
CircleOptions circleOptions = new CircleOptions()
.InvokeCenter(geoModel.LatLng)
.InvokeRadius(geoModel.Radius)
.InvokeFillColor(Color.Argb(50, 0, 14, 84))
.InvokeStrokeColor(Color.Yellow)
.InvokeStrokeWidth(15);
circle = hMap.AddCircle(circleOptions);
}
private void radiusBar_ProgressChanged(object sender, SeekBar.ProgressChangedEventArgs e)
{
selectedCoordinates.Radius = e.Progress;
DrawCircleOnMap(selectedCoordinates);
}
We will use SeekBar to change the radius of the circle. As the value changes, the drawn circle will expand or shrink.
Reverse Geocoding
Now let’s handle the click event of the info window.
Before opening that window, we need to reverse-geocode the selected coordinates to get a formatted address. HUAWEI Site Kit provides a set of HTTP APIs, including the one we need: reverseGeocode.
Let’s add the GeocodeManager class to our project and update it as follows:
Code:
public async Task<Site> ReverseGeocode(double lat, double lng)
{
string result = "";
using (var client = new HttpClient())
{
MyLocation location = new MyLocation();
location.Lat = lat;
location.Lng = lng;
var root = new ReverseGeocodeRequest();
root.Location = location;
var settings = new JsonSerializerSettings();
settings.ContractResolver = new LowercaseSerializer();
var json = JsonConvert.SerializeObject(root, Formatting.Indented, settings);
var data = new StringContent(json, Encoding.UTF8, "application/json");
var url = "https://siteapi.cloud.huawei.com/mapApi/v1/siteService/reverseGeocode?key=" + Android.Net.Uri.Encode(ApiKey);
var response = await client.PostAsync(url, data);
result = response.Content.ReadAsStringAsync().Result;
}
return JsonConvert.DeserializeObject<ReverseGeocodeResponse>(result).sites.FirstOrDefault();
}
In the code above, we request the address corresponding to the given latitude/longitude and specify that the request body is in JSON format.
https://siteapi.cloud.huawei.com/mapApi/v1/siteService/reverseGeocode?key=APIKEY
Request model:
Code:
public class MyLocation
{
public double Lat { get; set; }
public double Lng { get; set; }
}
public class ReverseGeocodeRequest
{
public MyLocation Location { get; set; }
}
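With the lowercase contract resolver applied, the serialized request body sent to reverseGeocode looks roughly like this (the coordinates shown are illustrative):

```json
{
  "location": {
    "lat": 41.0082,
    "lng": 28.9784
  }
}
```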
Note that the JSON response contains three root elements:
“returnCode”: For details, please refer to Result Codes.
“returnDesc”: a description of the return code
“sites”: contains an array of geocoded address information
Generally, only one entry in the “sites” array is returned for address lookups, though the geocoder may return several results when address queries are ambiguous.
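A trimmed reverseGeocode response has roughly the following shape (the field values here are illustrative, and the real response contains more address fields per site):

```json
{
  "returnCode": "0",
  "returnDesc": "OK",
  "sites": [
    {
      "name": "Example Place",
      "formatAddress": "Example Street, Example District, Istanbul",
      "location": { "lat": 41.0082, "lng": 28.9784 }
    }
  ]
}
```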
Add the following code to our MapInfoWindowAdapter, where we get the result from the reverse geocode API and set the UI elements.
Code:
private async void btnInfoWindow_ClickAsync(object sender, System.EventArgs e)
{
addressLayout = activity.LayoutInflater.Inflate(Resource.Layout.reverse_alert_layout, null);
GeocodeManager geocodeManager = new GeocodeManager(activity);
var address = await geocodeManager.ReverseGeocode(selectedCoordinates.LatLng.Latitude, selectedCoordinates.LatLng.Longitude);
if (address == null)
return;
var txtAddress = addressLayout.FindViewById<TextView>(Resource.Id.txtAddress);
var txtRadius = addressLayout.FindViewById<TextView>(Resource.Id.txtRadius);
txtAddress.Text = address.FormatAddress;
txtRadius.Text = selectedCoordinates.Radius.ToString();
AlertDialog.Builder builder = new AlertDialog.Builder(activity);
builder.SetView(addressLayout);
builder.SetTitle(address.Name);
builder.SetPositiveButton("Save", (sender, arg) =>
{
selectedCoordinates.Conversion = GetSelectedConversion();
GeofenceManager geofenceManager = new GeofenceManager(activity);
geofenceManager.AddGeofences(selectedCoordinates);
});
builder.SetNegativeButton("Cancel", (sender, arg) => { builder.Dispose(); });
AlertDialog alert = builder.Create();
alert.Show();
}
Now, after selecting the conversion, we can complete the process by pressing the Save button in the dialog window, which calls the AddGeofences method in the GeofenceManager class.
Code:
public void AddGeofences(GeofenceModel geofenceModel)
{
//Set parameters
geofenceModel.Id = Guid.NewGuid().ToString();
if (geofenceModel.Conversion == 5) //Expiration value that indicates the geofence should never expire.
geofenceModel.Timeout = Geofence.GeofenceNeverExpire;
else
geofenceModel.Timeout = 10000;
List<IGeofence> geofenceList = new List<IGeofence>();
//Geofence Service
GeofenceService geofenceService = LocationServices.GetGeofenceService(activity);
PendingIntent pendingIntent = CreatePendingIntent();
GeofenceBuilder somewhereBuilder = new GeofenceBuilder()
.SetUniqueId(geofenceModel.Id)
.SetValidContinueTime(geofenceModel.Timeout)
.SetRoundArea(geofenceModel.LatLng.Latitude, geofenceModel.LatLng.Longitude, geofenceModel.Radius)
.SetDwellDelayTime(10000)
.SetConversions(geofenceModel.Conversion);
//Create geofence request
geofenceList.Add(somewhereBuilder.Build());
GeofenceRequest geofenceRequest = new GeofenceRequest.Builder()
.CreateGeofenceList(geofenceList)
.Build();
//Register geofence
var geoTask = geofenceService.CreateGeofenceList(geofenceRequest, pendingIntent);
geoTask.AddOnSuccessListener(new CreateGeoSuccessListener(activity));
geoTask.AddOnFailureListener(new CreateGeoFailListener(activity));
}
In the AddGeofences method, we set the geofence request parameters with GeofenceBuilder, such as the selected conversion, a unique ID, and a timeout that depends on the conversion. We create a GeofenceBroadcastReceiver and display a toast message when a geofence action occurs.
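The AddGeofences method above calls a CreatePendingIntent helper that is not shown in the article; here is a minimal sketch of it, assuming the GeofenceBroadcastReceiver defined next and an activity field on GeofenceManager (both names are taken from the surrounding code, the exact implementation in the sample repository may differ):

```csharp
private PendingIntent CreatePendingIntent()
{
    // Deliver geofence transitions to GeofenceBroadcastReceiver via a broadcast.
    Intent intent = new Intent(activity, typeof(GeofenceBroadcastReceiver));
    intent.SetAction(GeofenceBroadcastReceiver.ActionGeofence);
    return PendingIntent.GetBroadcast(activity, 0, intent, PendingIntentFlags.UpdateCurrent);
}
```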
Code:
[BroadcastReceiver(Enabled = true)]
[IntentFilter(new[] { "com.huawei.hms.geofence.ACTION_PROCESS_ACTIVITY" })]
class GeofenceBroadcastReceiver : BroadcastReceiver
{
public static readonly string ActionGeofence = "com.huawei.hms.geofence.ACTION_PROCESS_ACTIVITY";
public override void OnReceive(Context context, Intent intent)
{
if (intent != null)
{
var action = intent.Action;
if (action == ActionGeofence)
{
GeofenceData geofenceData = GeofenceData.GetDataFromIntent(intent);
if (geofenceData != null)
{
Toast.MakeText(context, "Geofence triggered: " + geofenceData.ConvertingLocation.Latitude +"\n" + geofenceData.ConvertingLocation.Longitude + "\n" + geofenceData.Conversion.ToConversionName(), ToastLength.Long).Show();
}
}
}
}
}
After that, in CreateGeoSuccessListener and CreateGeoFailListener, which implement the IOnSuccessListener and IOnFailureListener interfaces respectively, we display a toast message to the user like this:
Code:
public class CreateGeoFailListener : Java.Lang.Object, IOnFailureListener
{
private readonly MainActivity mainActivity;
public CreateGeoFailListener(MainActivity activity) { mainActivity = activity; }
public void OnFailure(Java.Lang.Exception ex)
{
Toast.MakeText(mainActivity, "Geofence request failed: " + GeofenceErrorCodes.GetErrorMessage((ex as ApiException).StatusCode), ToastLength.Long).Show();
}
}
public class CreateGeoSuccessListener : Java.Lang.Object, IOnSuccessListener
{
private readonly MainActivity mainActivity;
public CreateGeoSuccessListener(MainActivity activity) { mainActivity = activity; }
public void OnSuccess(Java.Lang.Object data)
{
Toast.MakeText(mainActivity, "Geofence request successful", ToastLength.Long).Show();
}
}
Set geofence location using Nearby Search
On the main layout, when the user clicks the Search Nearby Places button, a search dialog appears.
Create search_alert_layout.xml with a search input. In MainActivity, create the click event of that button and open an alert dialog after its view is set to search_alert_layout. Then run a nearby search when the Search button is clicked:
Code:
private void btnGeoWithAddress_Click(object sender, EventArgs e)
{
search_view = base.LayoutInflater.Inflate(Resource.Layout.search_alert_layout, null);
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.SetView(search_view);
builder.SetTitle("Search Location");
builder.SetNegativeButton("Cancel", (sender, arg) => { builder.Dispose(); });
search_view.FindViewById<Button>(Resource.Id.btnSearch).Click += btnSearchClicked;
alert = builder.Create();
alert.Show();
}
private void btnSearchClicked(object sender, EventArgs e)
{
string searchText = search_view.FindViewById<TextView>(Resource.Id.txtSearch).Text;
GeocodeManager geocodeManager = new GeocodeManager(this);
geocodeManager.NearbySearch(CurrentPosition, searchText);
}
We pass the search text and the current location into the GeocodeManager NearbySearch method as parameters. We need to modify the GeocodeManager class and add a nearby search method to it.
Code:
public void NearbySearch(LatLng currentLocation, string searchText)
{
ISearchService searchService = SearchServiceFactory.Create(activity, Android.Net.Uri.Encode("YOUR_API_KEY"));
NearbySearchRequest nearbySearchRequest = new NearbySearchRequest();
nearbySearchRequest.Query = searchText;
nearbySearchRequest.Language = "en";
nearbySearchRequest.Location = new Coordinate(currentLocation.Latitude, currentLocation.Longitude);
nearbySearchRequest.Radius = (Integer)2000;
nearbySearchRequest.PageIndex = (Integer)1;
nearbySearchRequest.PageSize = (Integer)5;
nearbySearchRequest.PoiType = LocationType.Address;
searchService.NearbySearch(nearbySearchRequest, new NearbySearchResultListener(activity as MainActivity));
}
To handle the result, we must create a listener and implement the ISearchResultListener interface on it.
Code:
public class NearbySearchResultListener : Java.Lang.Object, ISearchResultListener
{
private readonly MainActivity context;
public NearbySearchResultListener(MainActivity mainActivity) { context = mainActivity; }
public void OnSearchError(SearchStatus status)
{
Toast.MakeText(context, "Error Code: " + status.ErrorCode + " Error Message: " + status.ErrorMessage, ToastLength.Long).Show();
}
public void OnSearchResult(Java.Lang.Object results)
{
NearbySearchResponse nearbySearchResponse = (NearbySearchResponse)results;
if (nearbySearchResponse != null && nearbySearchResponse.TotalCount > 0)
context.SetSearchResultOnMap(nearbySearchResponse.Sites);
}
}
In the OnSearchResult method, a NearbySearchResponse object is returned. We will insert a marker on the map for each site in this response.
In MainActivity, create a method named SetSearchResultOnMap that takes an IList<Site> as a parameter to insert multiple markers on the map.
Code:
public void SetSearchResultOnMap(IList<Com.Huawei.Hms.Site.Api.Model.Site> sites)
{
hMap.Clear();
if (searchMarkers != null && searchMarkers.Count > 0)
foreach (var item in searchMarkers)
item.Remove();
searchMarkers = new List<Marker>();
for (int i = 0; i < sites.Count; i++)
{
MarkerOptions marker1Options = new MarkerOptions()
.InvokePosition(new LatLng(sites[i].Location.Lat, sites[i].Location.Lng))
.InvokeTitle(sites[i].Name).Clusterable(true);
hMap.SetInfoWindowAdapter(new MapInfoWindowAdapter(this));
var marker1 = hMap.AddMarker(marker1Options);
searchMarkers.Add(marker1);
RepositionMapCamera(sites[i].Location.Lat, sites[i].Location.Lng);
}
hMap.SetMarkersClustering(true);
alert.Dismiss();
}
Now, we add markers as we did above, but here we use SetMarkersClustering(true) to consolidate markers into clusters when zooming out of the map.
You can download the source code from below:
github.com/stugcearar/HMSCore-Xamarin-Android-Samples/tree/master/LocationKit/HMS_Geofence
Also if you have any questions, ask away in Huawei Developer Forums.
Errors
If the location permission is set to “Allowed only while in use” instead of “Allowed all the time”, the error below will be thrown.
int GEOFENCE_INSUFFICIENT_PERMISSION
Insufficient permission to perform geofence-related operations.
You can see all result codes, including errors, here for the Location service.
You can find result codes with details here for the geofence request.

Expert: Integration of Huawei ML Kit for Scene Detection in Xamarin(Android)

Overview
In this article, I will create a demo app that integrates ML Kit scene detection and is based on the cross-platform technology Xamarin. It classifies image sets by scenario and generates intelligent album sets. Users can select camera parameters based on the photographing scene in the app to take better-looking photos.
Scene Detection Service Introduction
The scene detection service can classify the scenario content of images and add labels, such as outdoor scenery, indoor places, and buildings, which helps you understand the image content. Based on the detected information, you can create a more personalized app experience for users. Currently, on-device detection supports 102 scenarios.
Prerequisite
Xamarin Framework
Huawei phone
Visual Studio 2019
App Gallery Integration process
Sign in and create or choose a project on the AppGallery Connect portal.
Navigate to Project settings and download the configuration file.
Navigate to General Information, and then provide Data Storage location.
Navigate to Manage APIs and enable ML Kit.
Installing the Huawei ML NuGet package
Navigate to Solution Explore > Project > Right Click > Manage NuGet Packages.
Install Huawei.Hms.MlComputerVisionScenedetection in reference.
Install Huawei.Hms.MlComputerVisionScenedetectionInner in reference.
Install Huawei.Hms.MlComputerVisionScenedetectionModel in reference.
Xamarin App Development
Open Visual Studio 2019 and create a new project.
Configure the Manifest file and add the following permissions and tags.
Code:
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
</manifest>
3. Create the Activity class with XML UI.
GraphicOverlay.cs
This class performs scaling and mirroring of the graphics relative to the camera's preview properties.
Code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Android.App;
using Android.Content;
using Android.Graphics;
using Android.OS;
using Android.Runtime;
using Android.Util;
using Android.Views;
using Android.Widget;
using Huawei.Hms.Mlsdk.Common;
namespace SceneDetectionDemo
{
public class GraphicOverlay : View
{
private readonly object mLock = new object();
public int mPreviewWidth;
public float mWidthScaleFactor = 1.0f;
public int mPreviewHeight;
public float mHeightScaleFactor = 1.0f;
public int mFacing = LensEngine.BackLens;
private HashSet<Graphic> mGraphics = new HashSet<Graphic>();
public GraphicOverlay(Context context, IAttributeSet attrs) : base(context,attrs)
{
}
/// <summary>
/// Removes all graphics from the overlay.
/// </summary>
public void Clear()
{
lock(mLock) {
mGraphics.Clear();
}
PostInvalidate();
}
/// <summary>
/// Adds a graphic to the overlay.
/// </summary>
public void Add(Graphic graphic)
{
lock(mLock) {
mGraphics.Add(graphic);
}
PostInvalidate();
}
/// <summary>
/// Removes a graphic from the overlay.
/// </summary>
public void Remove(Graphic graphic)
{
lock(mLock)
{
mGraphics.Remove(graphic);
}
PostInvalidate();
}
/// <summary>
/// Sets the camera attributes for size and facing direction, which informs how to transform image coordinates later.
/// </summary>
public void SetCameraInfo(int previewWidth, int previewHeight, int facing)
{
lock(mLock) {
mPreviewWidth = previewWidth;
mPreviewHeight = previewHeight;
mFacing = facing;
}
PostInvalidate();
}
/// <summary>
/// Draws the overlay with its associated graphic objects.
/// </summary>
protected override void OnDraw(Canvas canvas)
{
base.OnDraw(canvas);
lock (mLock)
{
if ((mPreviewWidth != 0) && (mPreviewHeight != 0))
{
mWidthScaleFactor = (float)canvas.Width / (float)mPreviewWidth;
mHeightScaleFactor = (float)canvas.Height / (float)mPreviewHeight;
}
foreach (Graphic graphic in mGraphics)
{
graphic.Draw(canvas);
}
}
}
}
/// <summary>
/// Base class for a custom graphics object to be rendered within the graphic overlay. Subclass
/// this and implement the {Graphic#Draw(Canvas)} method to define the
/// graphics element. Add instances to the overlay using {GraphicOverlay#Add(Graphic)}.
/// </summary>
public abstract class Graphic
{
private GraphicOverlay mOverlay;
public Graphic(GraphicOverlay overlay)
{
mOverlay = overlay;
}
/// <summary>
/// Draw the graphic on the supplied canvas. Drawing should use the following methods to
/// convert to view coordinates for the graphics that are drawn:
/// <ol>
/// <li>{Graphic#ScaleX(float)} and {Graphic#ScaleY(float)} adjust the size of
/// the supplied value from the preview scale to the view scale.</li>
/// <li>{Graphic#TranslateX(float)} and {Graphic#TranslateY(float)} adjust the
/// coordinate from the preview's coordinate system to the view coordinate system.</li>
/// </ol>
/// </summary>
/// <param name="canvas"></param>
public abstract void Draw(Canvas canvas);
/// <summary>
/// Adjusts a horizontal value of the supplied value from the preview scale to the view
/// scale.
/// </summary>
public float ScaleX(float horizontal)
{
return horizontal * mOverlay.mWidthScaleFactor;
}
public float UnScaleX(float horizontal)
{
return horizontal / mOverlay.mWidthScaleFactor;
}
/// <summary>
/// Adjusts a vertical value of the supplied value from the preview scale to the view scale.
/// </summary>
public float ScaleY(float vertical)
{
return vertical * mOverlay.mHeightScaleFactor;
}
public float UnScaleY(float vertical) { return vertical / mOverlay.mHeightScaleFactor; }
/// <summary>
/// Adjusts the x coordinate from the preview's coordinate system to the view coordinate system.
/// </summary>
public float TranslateX(float x)
{
if (mOverlay.mFacing == LensEngine.FrontLens)
{
return mOverlay.Width - ScaleX(x);
}
else
{
return ScaleX(x);
}
}
/// <summary>
/// Adjusts the y coordinate from the preview's coordinate system to the view coordinate system.
/// </summary>
public float TranslateY(float y)
{
return ScaleY(y);
}
public void PostInvalidate()
{
this.mOverlay.PostInvalidate();
}
}
}
LensEnginePreview.cs
This class manages the camera lens preview, which supplies the frames used for detection.
Code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Android.App;
using Android.Content;
using Android.Graphics;
using Android.OS;
using Android.Runtime;
using Android.Util;
using Android.Views;
using Android.Widget;
using Huawei.Hms.Mlsdk.Common;
namespace HmsXamarinMLDemo.Camera
{
public class LensEnginePreview :ViewGroup
{
private const string Tag = "LensEnginePreview";
private Context mContext;
protected SurfaceView mSurfaceView;
private bool mStartRequested;
private bool mSurfaceAvailable;
private LensEngine mLensEngine;
private GraphicOverlay mOverlay;
public LensEnginePreview(Context context, IAttributeSet attrs) : base(context,attrs)
{
this.mContext = context;
this.mStartRequested = false;
this.mSurfaceAvailable = false;
this.mSurfaceView = new SurfaceView(context);
this.mSurfaceView.Holder.AddCallback(new SurfaceCallback(this));
this.AddView(this.mSurfaceView);
}
public void start(LensEngine lensEngine)
{
if (lensEngine == null)
{
this.stop();
}
this.mLensEngine = lensEngine;
if (this.mLensEngine != null)
{
this.mStartRequested = true;
this.startIfReady();
}
}
public void start(LensEngine lensEngine, GraphicOverlay overlay)
{
this.mOverlay = overlay;
this.start(lensEngine);
}
public void stop()
{
if (this.mLensEngine != null)
{
this.mLensEngine.Close();
}
}
public void release()
{
if (this.mLensEngine != null)
{
this.mLensEngine.Release();
this.mLensEngine = null;
}
}
private void startIfReady()
{
if (this.mStartRequested && this.mSurfaceAvailable) {
this.mLensEngine.Run(this.mSurfaceView.Holder);
if (this.mOverlay != null)
{
Huawei.Hms.Common.Size.Size size = this.mLensEngine.DisplayDimension;
int min = Math.Min(size.Width, size.Height);
int max = Math.Max(size.Width, size.Height);
if (this.isPortraitMode())
{
// Swap width and height sizes when in portrait, since it will be rotated by 90 degrees.
this.mOverlay.SetCameraInfo(min, max, this.mLensEngine.LensType);
}
else
{
this.mOverlay.SetCameraInfo(max, min, this.mLensEngine.LensType);
}
this.mOverlay.Clear();
}
this.mStartRequested = false;
}
}
private class SurfaceCallback : Java.Lang.Object, ISurfaceHolderCallback
{
private LensEnginePreview lensEnginePreview;
public SurfaceCallback(LensEnginePreview LensEnginePreview)
{
this.lensEnginePreview = LensEnginePreview;
}
public void SurfaceChanged(ISurfaceHolder holder, [GeneratedEnum] Format format, int width, int height)
{
}
public void SurfaceCreated(ISurfaceHolder holder)
{
this.lensEnginePreview.mSurfaceAvailable = true;
try
{
this.lensEnginePreview.startIfReady();
}
catch (Exception e)
{
Log.Info(LensEnginePreview.Tag, "Could not start camera source.", e);
}
}
public void SurfaceDestroyed(ISurfaceHolder holder)
{
this.lensEnginePreview.mSurfaceAvailable = false;
}
}
protected override void OnLayout(bool changed, int l, int t, int r, int b)
{
int previewWidth = 480;
int previewHeight = 360;
if (this.mLensEngine != null)
{
Huawei.Hms.Common.Size.Size size = this.mLensEngine.DisplayDimension;
if (size != null)
{
previewWidth = size.Width;
previewHeight = size.Height;
}
}
// Swap width and height sizes when in portrait, since it will be rotated 90 degrees
if (this.isPortraitMode())
{
int tmp = previewWidth;
previewWidth = previewHeight;
previewHeight = tmp;
}
int viewWidth = r - l;
int viewHeight = b - t;
int childWidth;
int childHeight;
int childXOffset = 0;
int childYOffset = 0;
float widthRatio = (float)viewWidth / (float)previewWidth;
float heightRatio = (float)viewHeight / (float)previewHeight;
// To fill the view with the camera preview, while also preserving the correct aspect ratio,
// it is usually necessary to slightly oversize the child and to crop off portions along one
// of the dimensions. We scale up based on the dimension requiring the most correction, and
// compute a crop offset for the other dimension.
if (widthRatio > heightRatio)
{
childWidth = viewWidth;
childHeight = (int)((float)previewHeight * widthRatio);
childYOffset = (childHeight - viewHeight) / 2;
}
else
{
childWidth = (int)((float)previewWidth * heightRatio);
childHeight = viewHeight;
childXOffset = (childWidth - viewWidth) / 2;
}
for (int i = 0; i < this.ChildCount; ++i)
{
// One dimension will be cropped. We shift child over or up by this offset and adjust
// the size to maintain the proper aspect ratio.
this.GetChildAt(i).Layout(-1 * childXOffset, -1 * childYOffset, childWidth - childXOffset,
childHeight - childYOffset);
}
try
{
this.startIfReady();
}
catch (Exception e)
{
Log.Info(LensEnginePreview.Tag, "Could not start camera source.", e);
}
}
private bool isPortraitMode()
{
return true;
}
}
}
activity_scene_detection.xml
Code:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="#000"
android:fitsSystemWindows="true"
android:keepScreenOn="true"
android:orientation="vertical">
<ToggleButton
android:id="@+id/facingSwitch"
android:layout_width="65dp"
android:layout_height="65dp"
android:layout_alignParentBottom="true"
android:layout_centerHorizontal="true"
android:layout_marginBottom="5dp"
android:background="@drawable/facingswitch_stroke"
android:textOff=""
android:textOn="" />
<com.huawei.mlkit.sample.camera.LensEnginePreview
android:id="@+id/preview"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_alignParentStart="true"
android:layout_alignParentTop="true">
<com.huawei.mlkit.sample.views.overlay.GraphicOverlay
android:id="@+id/overlay"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="20dp"
android:layout_marginEnd="20dp" />
</com.huawei.mlkit.sample.camera.LensEnginePreview>
<RelativeLayout
android:id="@+id/rl_select_album_result"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:background="#000"
android:visibility="gone">
<ImageView
android:id="@+id/iv_result"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_alignParentRight="true" />
<TextView
android:id="@+id/result"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_centerHorizontal="true"
android:layout_marginBottom="100dp"
android:textColor="@color/upsdk_white" />
</RelativeLayout>
<ImageView
android:id="@+id/iv_select_album"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentRight="true"
android:layout_marginTop="20dp"
android:layout_marginEnd="20dp"
android:src="@drawable/select_album" />
<ImageView
android:id="@+id/iv_return_back"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginStart="20dp"
android:layout_marginTop="20dp"
android:src="@drawable/return_back" />
<ImageView
android:id="@+id/iv_left_top"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/iv_return_back"
android:layout_marginStart="20dp"
android:layout_marginTop="20dp"
android:src="@drawable/left_top_arrow" />
<ImageView
android:id="@+id/iv_right_top"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/iv_select_album"
android:layout_alignParentRight="true"
android:layout_marginTop="23dp"
android:layout_marginEnd="20dp"
android:src="@drawable/right_top_arrow" />
<ImageView
android:id="@+id/iv_left_bottom"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentBottom="true"
android:layout_marginStart="20dp"
android:layout_marginBottom="70dp"
android:src="@drawable/left_bottom_arrow" />
<ImageView
android:id="@+id/iv_right_bottom"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignParentRight="true"
android:layout_alignParentBottom="true"
android:layout_marginEnd="20dp"
android:layout_marginBottom="70dp"
android:src="@drawable/right_bottom_arrow" />
</RelativeLayout>
SceneDetectionActivity.cs
This activity performs all the operations for live scene detection.
Code:
using Android.App;
using Android.Content;
using Android.OS;
using Android.Runtime;
using Android.Support.V7.App;
using Android.Views;
using Android.Widget;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Huawei.Hms.Mlsdk.Common;
using Huawei.Hms.Mlsdk.Scd;
using HmsXamarinMLDemo.Camera;
using Android.Support.V4.App;
using Android;
using Android.Util;
using Android.Content.PM;
namespace SceneDetectionDemo
{
[Activity(Label = "SceneDetectionActivity")]
public class SceneDetectionActivity : AppCompatActivity, View.IOnClickListener, MLAnalyzer.IMLTransactor
{
private const string Tag = "SceneDetectionLiveAnalyseActivity";
private const int CameraPermissionCode = 0;
private MLSceneDetectionAnalyzer analyzer;
private LensEngine mLensEngine;
private LensEnginePreview mPreview;
private GraphicOverlay mOverlay;
private int lensType = LensEngine.FrontLens;
private bool isFront = true;
protected override void OnCreate(Bundle savedInstanceState)
{
base.OnCreate(savedInstanceState);
this.SetContentView(Resource.Layout.activity_scene_detection);
this.mPreview = (LensEnginePreview)this.FindViewById(Resource.Id.preview);
this.mOverlay = (GraphicOverlay)this.FindViewById(Resource.Id.overlay);
this.FindViewById(Resource.Id.facingSwitch).SetOnClickListener(this);
if (savedInstanceState != null)
{
this.lensType = savedInstanceState.GetInt("lensType");
}
this.CreateSegmentAnalyzer();
// Checking Camera Permissions
if (ActivityCompat.CheckSelfPermission(this, Manifest.Permission.Camera) == Android.Content.PM.Permission.Granted)
{
this.CreateLensEngine();
}
else
{
this.RequestCameraPermission();
}
}
private void CreateLensEngine()
{
Context context = this.ApplicationContext;
// Create LensEngine.
this.mLensEngine = new LensEngine.Creator(context, this.analyzer).SetLensType(this.lensType)
.ApplyDisplayDimension(960, 720)
.ApplyFps(25.0f)
.EnableAutomaticFocus(true)
.Create();
}
public override void OnRequestPermissionsResult(int requestCode, string[] permissions, [GeneratedEnum] Permission[] grantResults)
{
if (requestCode != CameraPermissionCode)
{
base.OnRequestPermissionsResult(requestCode, permissions, grantResults);
return;
}
if (grantResults.Length != 0 && grantResults[0] == Permission.Granted)
{
this.CreateLensEngine();
return;
}
}
protected override void OnSaveInstanceState(Bundle outState)
{
outState.PutInt("lensType", this.lensType);
base.OnSaveInstanceState(outState);
}
protected override void OnResume()
{
base.OnResume();
if (ActivityCompat.CheckSelfPermission(this, Manifest.Permission.Camera) == Permission.Granted)
{
this.CreateLensEngine();
this.StartLensEngine();
}
else
{
this.RequestCameraPermission();
}
}
public void OnClick(View v)
{
this.isFront = !this.isFront;
if (this.isFront)
{
this.lensType = LensEngine.FrontLens;
}
else
{
this.lensType = LensEngine.BackLens;
}
if (this.mLensEngine != null)
{
this.mLensEngine.Close();
}
this.CreateLensEngine();
this.StartLensEngine();
}
private void StartLensEngine()
{
if (this.mLensEngine != null)
{
try
{
this.mPreview.start(this.mLensEngine, this.mOverlay);
}
catch (Exception e)
{
Log.Error(Tag, "Failed to start lens engine.", e);
this.mLensEngine.Release();
this.mLensEngine = null;
}
}
}
private void CreateSegmentAnalyzer()
{
this.analyzer = MLSceneDetectionAnalyzerFactory.Instance.SceneDetectionAnalyzer;
this.analyzer.SetTransactor(this);
}
protected override void OnPause()
{
base.OnPause();
this.mPreview.stop();
}
protected override void OnDestroy()
{
base.OnDestroy();
if (this.mLensEngine != null)
{
this.mLensEngine.Release();
}
if (this.analyzer != null)
{
this.analyzer.Stop();
}
}
//Request permission
private void RequestCameraPermission()
{
string[] permissions = new string[] { Manifest.Permission.Camera };
if (!ActivityCompat.ShouldShowRequestPermissionRationale(this, Manifest.Permission.Camera))
{
ActivityCompat.RequestPermissions(this, permissions, CameraPermissionCode);
return;
}
}
/// <summary>
/// Implemented from MLAnalyzer.IMLTransactor interface
/// </summary>
public void Destroy()
{
// No resources to release here; the LensEngine and analyzer are released in OnDestroy().
}
/// <summary>
/// Implemented from MLAnalyzer.IMLTransactor interface.
/// Process the results returned by the analyzer.
/// </summary>
public void TransactResult(MLAnalyzer.Result result)
{
mOverlay.Clear();
SparseArray imageSegmentationResult = result.AnalyseList;
IList<MLSceneDetection> list = new List<MLSceneDetection>();
for (int i = 0; i < imageSegmentationResult.Size(); i++)
{
list.Add((MLSceneDetection)imageSegmentationResult.ValueAt(i));
}
MLSceneDetectionGraphic sceneDetectionGraphic = new MLSceneDetectionGraphic(mOverlay, list);
mOverlay.Add(sceneDetectionGraphic);
mOverlay.PostInvalidate();
}
}
}
Xamarin App Build Result
Navigate to Build > Build Solution.
Navigate to Solution Explorer > Project > Right-click > Archive/View Archive to generate the SHA-256 for the release build, then click Distribute.
Choose Archive > Distribute.
Choose Distribution Channel > Ad Hoc to sign the APK.
Choose the demo keystore to release the APK.
Once the build succeeds, click Save.
Result.
Tips and Tricks
The minimum supported resolution is 224 x 224 and the maximum is 4096 x 4960.
Each scene detection result carries a confidence value. Call the synchronous or asynchronous scene detection APIs to obtain a result set, then filter out results whose confidence is below your chosen threshold.
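The confidence-based filtering described above can be sketched as follows. This is an illustrative, language-neutral sketch written in Java for brevity (the article's sample is C#, where the logic is analogous); `SceneResult` and its fields are hypothetical stand-ins for the SDK's result type, not the HMS API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for an SDK scene detection result: a label plus a confidence score.
class SceneResult {
    final String label;
    final float confidence;
    SceneResult(String label, float confidence) {
        this.label = label;
        this.confidence = confidence;
    }
}

public class ConfidenceFilter {
    // Keep only results whose confidence meets or exceeds the threshold.
    static List<SceneResult> filterByConfidence(List<SceneResult> results, float threshold) {
        List<SceneResult> kept = new ArrayList<>();
        for (SceneResult r : results) {
            if (r.confidence >= threshold) {
                kept.add(r);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<SceneResult> results = new ArrayList<>();
        results.add(new SceneResult("Beach", 0.92f));
        results.add(new SceneResult("Sky", 0.40f));
        results.add(new SceneResult("Sunset", 0.75f));
        // With a 0.5 threshold, only Beach and Sunset survive.
        for (SceneResult r : filterByConfidence(results, 0.5f)) {
            System.out.println(r.label + " " + r.confidence);
        }
    }
}
```

In the Xamarin sample, the same filtering would happen inside `TransactResult` before the graphics overlay is drawn.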
Conclusion
In this article, we have learned how to integrate the ML Kit Scene Detection feature in a Xamarin-based Android application. Users can detect indoor and outdoor places and things in real time with the help of the Scene Detection API.
Thanks for reading this article. Please like and comment if you found it helpful. It means a lot to me.
References
HMS Core ML Scene Detection Docs: https://developer.huawei.com/consumer/en/doc/development/HMSCore-Guides-V5/scene-detection-0000001055162807-V5
Original Source

Book Reader Application Using General Text Recognition by Huawei HiAI in Android

Introduction
In this article, we will learn how to integrate Huawei General Text Recognition using Huawei HiAI, and build a book reader application with it.
About the application:
Reading a book manually can get tiring. This application lets users listen to a book instead: they simply capture a photo of a page, or select an image from the gallery, and listen to it like music while travelling or in their free time.
Huawei General Text Recognition is based on OCR technology.
First, let us understand OCR.
What is optical character recognition (OCR)?
Optical Character Recognition (OCR) technology is a business solution for automating data extraction from printed or written text from a scanned document or image file and then converting the text into a machine-readable form to be used for data processing like editing or searching.
Now let us understand about General Text Recognition (GTR).
At the core of GTR is Optical Character Recognition (OCR) technology, which extracts text from screenshots and photos taken by the phone camera. For photos taken by the camera, this API can correct for tilts, camera angles, reflections, and messy backgrounds to a certain degree. It can also be used for document and streetscape photography, among a wide range of usage scenarios, and it features strong anti-interference capability. Processing is performed on the device side through a service connection.
Features
For photos: Provides text area detection and text recognition for Chinese, English, Japanese, Korean, Russian, Italian, Spanish, Portuguese, German, and French texts in multiple printed fonts. A wide range of scenarios is supported, and high recognition accuracy can be achieved even under complex lighting conditions and backgrounds.
For screenshots: Optimizes text extraction algorithms based on the characteristics of screenshots captured on mobile phones. Currently, this function is available in the Chinese mainland supporting Chinese and English texts.
OCR features
Lightweight: This API greatly reduces the computing time and ROM space the algorithm model takes up, making your app more lightweight.
Customized hierarchical result return: You can choose to return the coordinates of text blocks, text lines, and text characters in the screenshot based on app requirements.
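The hierarchical result return mentioned above (text blocks containing text lines) can be illustrated with a minimal sketch. `Block` and `Line` here are hypothetical stand-ins for the SDK's `TextBlock`/`TextLine` classes; the joining logic mirrors the loop used in this article's handler to concatenate `TextLine` values.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a recognized line of text.
class Line {
    final String value;
    Line(String value) { this.value = value; }
}

// Hypothetical stand-in for a text block, which groups lines.
class Block {
    final List<Line> lines;
    Block(List<Line> lines) { this.lines = lines; }
}

public class HierarchicalText {
    // Join every line of every block into a single readable string,
    // the same way the handler later in this article concatenates TextLine values.
    static String joinLines(List<Ock> blocks) {
        StringBuilder sb = new StringBuilder();
        for (Block b : blocks) {
            for (Line l : b.lines) {
                sb.append(l.value).append(' ');
            }
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        List<Block> blocks = Arrays.asList(
            new Block(Arrays.asList(new Line("Once upon"), new Line("a time"))),
            new Block(Arrays.asList(new Line("the end."))));
        System.out.println(joinLines(blocks)); // prints: Once upon a time the end.
    }
}
```

Depending on the detail level your app requests, you could stop at block level, line level, or go down to character coordinates.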
How to integrate General Text Recognition
1. Configure the application on the AGC.
2. Apply for HiAI Engine Library
3. Client application development process.
Configure application on the AGC
Follow the steps
Step 1: We need to register a developer account in AppGallery Connect. If you already have a developer account, skip this step.
Step 2: Create an app by referring to Creating a Project and Creating an App in the Project.
Step 3: Set the data storage location based on the current location.
Step 4: Generating a Signing Certificate Fingerprint.
Step 5: Configuring the Signing Certificate Fingerprint.
Step 6: Download your agconnect-services.json file, paste it into the app root directory.
Apply for HiAI Engine Library
What is Huawei HiAI?
HUAWEI HiAI is a mobile terminal–oriented artificial intelligence (AI) computing platform that opens up three layers of ecology: service capability openness, application capability openness, and chip capability openness. This three-layer open platform, integrating terminals, chips, and the cloud, brings a more extraordinary experience to users and developers.
How to apply for HiAI Engine?
Follow the steps
Step 1: Navigate to this URL, choose App Service > Development and click HUAWEI HiAI.
Step 2: Click Apply for HUAWEI HiAI kit.
Step 3: Enter required information like Product name and Package name, click Next button.
Step 4: Verify the application details and click Submit button.
Step 5: Click the Download SDK button to open the SDK list.
Step 6: Unzip the downloaded SDK and add it to your Android project under the libs folder.
Step 7: Add the JAR/AAR dependencies to the app build.gradle file.
Code:
implementation fileTree(include: ['*.aar', '*.jar'], dir: 'libs')
implementation 'com.google.code.gson:gson:2.8.6'
repositories {
flatDir {
dirs 'libs'
}
}
Client application development process
Follow the steps
Step 1: Create an Android application in the Android studio (Any IDE which is your favorite).
Step 2: Add the App level Gradle dependencies. Choose inside project Android > app > build.gradle.
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Root level gradle dependencies.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'
Step 3: Add permission in AndroidManifest.xml
XML:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
Step 4: Build application.
Initialize all view.
Java:
private void initializeView() {
mPlayAudio = findViewById(R.id.playAudio);
mTxtViewResult = findViewById(R.id.result);
mImageView = findViewById(R.id.imgViewPicture);
}
Request the runtime permission
Java:
private void requestPermissions() {
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
int permission1 = ActivityCompat.checkSelfPermission(this,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
int permission2 = ActivityCompat.checkSelfPermission(this,
Manifest.permission.CAMERA);
if (permission1 != PackageManager.PERMISSION_GRANTED || permission2 != PackageManager
.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.CAMERA}, 0x0010);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (grantResults.length <= 0
|| grantResults[0] != PackageManager.PERMISSION_GRANTED) {
Toast.makeText(this, "Permission denied", Toast.LENGTH_SHORT).show();
}
}
Initialize vision base
Java:
private void initVision() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
Log.e(TAG, " onServiceConnect");
}
@Override
public void onServiceDisconnect() {
Log.e(TAG, " onServiceDisconnect");
}
});
}
Initialize text to speech
Java:
private void initializeTextToSpeech() {
textToSpeech = new TextToSpeech(getApplicationContext(), new TextToSpeech.OnInitListener() {
@Override
public void onInit(int status) {
if (status != TextToSpeech.ERROR) {
textToSpeech.setLanguage(Locale.UK);
}
}
});
}
Create TextDetector instance.
Java:
mTextDetector = new TextDetector(this);
Define Vision image.
Java:
VisionImage image = VisionImage.fromBitmap(mBitmap);
Create instance of Text class.
Java:
final Text result = new Text();
Create and set VisionTextConfiguration
Java:
VisionTextConfiguration config = new VisionTextConfiguration.Builder()
.setAppType(VisionTextConfiguration.APP_NORMAL)
.setProcessMode(VisionTextConfiguration.MODE_IN)
.setDetectType(TextDetectType.TYPE_TEXT_DETECT_FOCUS_SHOOT)
.setLanguage(TextConfiguration.AUTO).build();
//Set vision configuration
mTextDetector.setVisionConfiguration(config);
Call detect method to get the result
Java:
int result_code = mTextDetector.detect(image, result, new VisionCallback<Text>() {
@Override
public void onResult(Text text) {
dismissDialog();
Message message = Message.obtain();
message.what = TYPE_SHOW_RESULT;
message.obj = text;
mHandler.sendMessage(message);
}
@Override
public void onError(int i) {
Log.d(TAG, "Callback: onError " + i);
mHandler.sendEmptyMessage(TYPE_TEXT_ERROR);
}
@Override
public void onProcessing(float v) {
Log.d(TAG, "Callback: onProcessing:" + v);
}
});
Create Handler
Java:
private final Handler mHandler = new Handler() {
@Override
public void handleMessage(Message msg) {
super.handleMessage(msg);
int status = msg.what;
Log.d(TAG, "handleMessage status = " + status);
switch (status) {
case TYPE_CHOOSE_PHOTO: {
if (mBitmap == null) {
Log.e(TAG, "bitmap is null");
return;
}
mImageView.setImageBitmap(mBitmap);
mTxtViewResult.setText("");
showDialog();
detectTex();
break;
}
case TYPE_SHOW_RESULT: {
Text result = (Text) msg.obj;
if (dialog != null && dialog.isShowing()) {
dialog.dismiss();
}
if (result == null) {
mTxtViewResult.setText("Failed to detect text lines, result is null.");
break;
}
String textValue = result.getValue();
Log.d(TAG, "text value: " + textValue);
StringBuffer textResult = new StringBuffer();
List<TextLine> textLines = result.getBlocks().get(0).getTextLines();
for (TextLine line : textLines) {
textResult.append(line.getValue() + " ");
}
Log.d(TAG, "OCR Detection succeeded.");
mTxtViewResult.setText(textResult.toString());
textToSpeechString = textResult.toString();
break;
}
case TYPE_TEXT_ERROR: {
mTxtViewResult.setText("Failed to detect text lines, result is null.");
}
default:
break;
}
}
};
Complete code as follows
Java:
import android.Manifest;
import android.app.Activity;
import android.app.ProgressDialog;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.database.Cursor;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.net.Uri;
import android.os.Build;
import android.os.Handler;
import android.os.Message;
import android.provider.MediaStore;
import android.speech.tts.TextToSpeech;
import android.support.v4.app.ActivityCompat;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import com.huawei.hiai.vision.common.ConnectionCallback;
import com.huawei.hiai.vision.common.VisionBase;
import com.huawei.hiai.vision.common.VisionCallback;
import com.huawei.hiai.vision.common.VisionImage;
import com.huawei.hiai.vision.text.TextDetector;
import com.huawei.hiai.vision.visionkit.text.Text;
import com.huawei.hiai.vision.visionkit.text.TextDetectType;
import com.huawei.hiai.vision.visionkit.text.TextLine;
import com.huawei.hiai.vision.visionkit.text.config.TextConfiguration;
import com.huawei.hiai.vision.visionkit.text.config.VisionTextConfiguration;
import java.util.List;
import java.util.Locale;
public class MainActivity extends AppCompatActivity {
private static final String TAG = MainActivity.class.getSimpleName();
private static final int REQUEST_CHOOSE_PHOTO_CODE = 2;
private Bitmap mBitmap;
private ImageView mPlayAudio;
private ImageView mImageView;
private TextView mTxtViewResult;
protected ProgressDialog dialog;
private TextDetector mTextDetector;
Text imageText = null;
TextToSpeech textToSpeech;
String textToSpeechString = "";
private static final int TYPE_CHOOSE_PHOTO = 1;
private static final int TYPE_SHOW_RESULT = 2;
private static final int TYPE_TEXT_ERROR = 3;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
initializeView();
requestPermissions();
initVision();
initializeTextToSpeech();
}
private void initializeView() {
mPlayAudio = findViewById(R.id.playAudio);
mTxtViewResult = findViewById(R.id.result);
mImageView = findViewById(R.id.imgViewPicture);
}
private void initVision() {
VisionBase.init(this, new ConnectionCallback() {
@Override
public void onServiceConnect() {
Log.e(TAG, " onServiceConnect");
}
@Override
public void onServiceDisconnect() {
Log.e(TAG, " onServiceDisconnect");
}
});
}
private void initializeTextToSpeech() {
textToSpeech = new TextToSpeech(getApplicationContext(), new TextToSpeech.OnInitListener() {
@Override
public void onInit(int status) {
if (status != TextToSpeech.ERROR) {
textToSpeech.setLanguage(Locale.UK);
}
}
});
}
public void onChildClick(View view) {
switch (view.getId()) {
case R.id.btnSelect: {
Log.d(TAG, "Select an image");
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, REQUEST_CHOOSE_PHOTO_CODE);
break;
}
case R.id.playAudio: {
if (textToSpeechString != null && !textToSpeechString.isEmpty())
textToSpeech.speak(textToSpeechString, TextToSpeech.QUEUE_FLUSH, null);
break;
}
}
}
private void detectTex() {
/* create a TextDetector instance firstly */
mTextDetector = new TextDetector(this);
/*Define VisionImage and transfer the Bitmap image to be detected*/
VisionImage image = VisionImage.fromBitmap(mBitmap);
/*Define the Text class.*/
final Text result = new Text();
/*Use VisionTextConfiguration to select the type of the image to be called. */
VisionTextConfiguration config = new VisionTextConfiguration.Builder()
.setAppType(VisionTextConfiguration.APP_NORMAL)
.setProcessMode(VisionTextConfiguration.MODE_IN)
.setDetectType(TextDetectType.TYPE_TEXT_DETECT_FOCUS_SHOOT)
.setLanguage(TextConfiguration.AUTO).build();
//Set vision configuration
mTextDetector.setVisionConfiguration(config);
/*Call the detect method of TextDetector to obtain the result*/
int result_code = mTextDetector.detect(image, result, new VisionCallback<Text>() {
@Override
public void onResult(Text text) {
dismissDialog();
Message message = Message.obtain();
message.what = TYPE_SHOW_RESULT;
message.obj = text;
mHandler.sendMessage(message);
}
@Override
public void onError(int i) {
Log.d(TAG, "Callback: onError " + i);
mHandler.sendEmptyMessage(TYPE_TEXT_ERROR);
}
@Override
public void onProcessing(float v) {
Log.d(TAG, "Callback: onProcessing:" + v);
}
});
}
private void showDialog() {
if (dialog == null) {
dialog = new ProgressDialog(MainActivity.this);
dialog.setTitle("Detecting text...");
dialog.setMessage("Please wait...");
dialog.setIndeterminate(true);
dialog.setCancelable(false);
}
dialog.show();
}
private final Handler mHandler = new Handler() {
@Override
public void handleMessage(Message msg) {
super.handleMessage(msg);
int status = msg.what;
Log.d(TAG, "handleMessage status = " + status);
switch (status) {
case TYPE_CHOOSE_PHOTO: {
if (mBitmap == null) {
Log.e(TAG, "bitmap is null");
return;
}
mImageView.setImageBitmap(mBitmap);
mTxtViewResult.setText("");
showDialog();
detectTex();
break;
}
case TYPE_SHOW_RESULT: {
Text result = (Text) msg.obj;
if (dialog != null && dialog.isShowing()) {
dialog.dismiss();
}
if (result == null) {
mTxtViewResult.setText("Failed to detect text lines, result is null.");
break;
}
String textValue = result.getValue();
Log.d(TAG, "text value: " + textValue);
StringBuffer textResult = new StringBuffer();
List<TextLine> textLines = result.getBlocks().get(0).getTextLines();
for (TextLine line : textLines) {
textResult.append(line.getValue() + " ");
}
Log.d(TAG, "OCR Detection succeeded.");
mTxtViewResult.setText(textResult.toString());
textToSpeechString = textResult.toString();
break;
}
case TYPE_TEXT_ERROR: {
mTxtViewResult.setText("Failed to detect text lines, result is null.");
}
default:
break;
}
}
};
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == REQUEST_CHOOSE_PHOTO_CODE && resultCode == Activity.RESULT_OK) {
if (data == null) {
return;
}
Uri selectedImage = data.getData();
getBitmap(selectedImage);
}
}
private void requestPermissions() {
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
int permission1 = ActivityCompat.checkSelfPermission(this,
Manifest.permission.WRITE_EXTERNAL_STORAGE);
int permission2 = ActivityCompat.checkSelfPermission(this,
Manifest.permission.CAMERA);
if (permission1 != PackageManager.PERMISSION_GRANTED || permission2 != PackageManager
.PERMISSION_GRANTED) {
ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.WRITE_EXTERNAL_STORAGE,
Manifest.permission.READ_EXTERNAL_STORAGE, Manifest.permission.CAMERA}, 0x0010);
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
private void getBitmap(Uri imageUri) {
String[] pathColumn = {MediaStore.Images.Media.DATA};
Cursor cursor = getContentResolver().query(imageUri, pathColumn, null, null, null);
if (cursor == null) return;
cursor.moveToFirst();
int columnIndex = cursor.getColumnIndex(pathColumn[0]);
/* get image path */
String picturePath = cursor.getString(columnIndex);
cursor.close();
mBitmap = BitmapFactory.decodeFile(picturePath);
if (mBitmap == null) {
return;
}
//You can set image here
//mImageView.setImageBitmap(mBitmap);
// You can pass it handler as well
mHandler.sendEmptyMessage(TYPE_CHOOSE_PHOTO);
mTxtViewResult.setText("");
mPlayAudio.setEnabled(true);
}
private void dismissDialog() {
if (dialog != null && dialog.isShowing()) {
dialog.dismiss();
}
}
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (grantResults.length <= 0
|| grantResults[0] != PackageManager.PERMISSION_GRANTED) {
Toast.makeText(this, "Permission denied", Toast.LENGTH_SHORT).show();
}
}
@Override
protected void onDestroy() {
super.onDestroy();
/* release ocr instance and free the npu resources*/
if (mTextDetector != null) {
mTextDetector.release();
}
dismissDialog();
if (mBitmap != null) {
mBitmap.recycle();
}
}
}
activity_main.xml
XML:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
xmlns:app="http://schemas.android.com/apk/res-auto"
android:fitsSystemWindows="true"
android:orientation="vertical"
android:background="@android:color/darker_gray">
<android.support.v7.widget.Toolbar
android:layout_width="match_parent"
android:layout_height="50dp"
android:background="#ff0000"
android:elevation="10dp">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="horizontal">
<TextView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:text="Book Reader"
android:layout_gravity="center"
android:gravity="center|start"
android:layout_weight="1"
android:textColor="@android:color/white"
android:textStyle="bold"
android:textSize="20sp"/>
<ImageView
android:layout_width="40dp"
android:layout_height="40dp"
android:src="@drawable/ic_baseline_play_circle_outline_24"
android:layout_gravity="center|end"
android:layout_marginEnd="10dp"
android:id="@+id/playAudio"
android:padding="5dp"/>
</LinearLayout>
</android.support.v7.widget.Toolbar>
<ScrollView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:fitsSystemWindows="true">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
android:background="@android:color/darker_gray"
>
<android.support.v7.widget.CardView
android:layout_width="match_parent"
android:layout_height="wrap_content"
app:cardCornerRadius="5dp"
app:cardElevation="10dp"
android:layout_marginStart="10dp"
android:layout_marginEnd="10dp"
android:layout_marginTop="20dp"
android:layout_gravity="center">
<ImageView
android:id="@+id/imgViewPicture"
android:layout_width="300dp"
android:layout_height="300dp"
android:layout_margin="8dp"
android:layout_gravity="center_horizontal"
android:scaleType="fitXY" />
</android.support.v7.widget.CardView>
<android.support.v7.widget.CardView
android:layout_width="match_parent"
android:layout_height="wrap_content"
app:cardCornerRadius="5dp"
app:cardElevation="10dp"
android:layout_marginStart="10dp"
android:layout_marginEnd="10dp"
android:layout_marginTop="10dp"
android:layout_gravity="center"
android:layout_marginBottom="20dp">
<LinearLayout
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:orientation="vertical"
>
<TextView
android:layout_margin="5dp"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textColor="@android:color/black"
android:text="Text on the image"
android:textStyle="normal"
/>
<TextView
android:id="@+id/result"
android:layout_margin="5dp"
android:layout_marginBottom="20dp"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textSize="18sp"
android:textColor="#ff0000"/>
</LinearLayout>
</android.support.v7.widget.CardView>
<Button
android:id="@+id/btnSelect"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:onClick="onChildClick"
android:layout_marginStart="10dp"
android:layout_marginEnd="10dp"
android:layout_marginBottom="10dp"
android:text="@string/select_picture"
android:background="@drawable/round_button_bg"
android:textColor="@android:color/white"
android:textAllCaps="false"/>
</LinearLayout>
</ScrollView>
</LinearLayout>
Result
Tips and Tricks
Maximum width and height: 1440 px and 15210 px (if the image is larger than this, you will receive error code 200).
Recommended photo properties for optimal recognition accuracy:
Resolution > 720p
Aspect ratio < 2:1
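The size constraints above can be checked before handing the bitmap to the detector. This is a minimal sketch using the limits exactly as stated in these tips (verify them against the official HiAI documentation before relying on them):

```java
public class ImageCheck {
    // Limits as stated in the tips above; adjust if the official HiAI documentation differs.
    static final int MAX_WIDTH = 1440;
    static final int MAX_HEIGHT = 15210;

    // Returns true when an image of the given size is within the stated limits
    // and the aspect ratio (longer side / shorter side) stays under 2:1.
    static boolean isAcceptable(int width, int height) {
        if (width <= 0 || height <= 0) return false;
        if (width > MAX_WIDTH || height > MAX_HEIGHT) return false;
        double ratio = (double) Math.max(width, height) / Math.min(width, height);
        return ratio < 2.0;
    }

    public static void main(String[] args) {
        System.out.println(isAcceptable(1280, 720));   // true: within limits
        System.out.println(isAcceptable(4000, 3000));  // false: exceeds max width
        System.out.println(isAcceptable(1440, 4000));  // false: aspect ratio >= 2:1
    }
}
```

In the book reader app, this check would run in `getBitmap()` right after `BitmapFactory.decodeFile`, using `mBitmap.getWidth()` and `mBitmap.getHeight()`.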
If you are capturing an image from the camera or gallery, make sure your app has camera and storage permissions.
Add the downloaded huawei-hiai-vision-ove-10.0.4.307.aar, huawei-hiai-pdk-1.0.0.aar file to libs folder.
Check that the dependencies are added properly.
Latest HMS Core APK is required.
Min SDK is 21; otherwise, you will get a manifest merge issue.
Conclusion
In this article, we have learned the following concepts:
What is OCR?
What is General Text Recognition (GTR)?
Features of GTR
Features of OCR
How to integrate General Text Recognition using Huawei HiAI
How to apply for Huawei HiAI
How to build the application
Reference
General Text Recognition
Apply for Huawei HiAI
Happy coding