ExoPlayer — HMS Video Kit Comparison by Building a Flavored Android Application. Part I - Huawei Developers

In this comparison article you will read about:
Simple transition animations.
JUnit testing for URL validation.
Recording videos using the CameraX library.
Creating a simple messaging system with Firebase Realtime Database (bonus content).
Adding video files to Firebase Storage, storing their URLs in Firebase's Realtime NoSQL Database, and using them as our video sources.
Implementing both ExoPlayer (with Widevine) and HMS Video Kit (WisePlayer), with their pros and cons, while building both versions of the project through flavor dimensions.
Please also note that the code accompanying this article is written in Kotlin and XML.
Global Setup for Our Application
What you will need for building this application is listed below:
Hardware Requirements
A computer that can run Android Studio.
An Android phone for debugging.
Software Requirements
Android SDK package
Android Studio 3.X
API level of at least 23
HMS Core (APK) 4.0.1.300 or later (not needed for ExoPlayer 2.0)
To avoid bundling services we don't need and wasting space on the phone, let's start by adding the necessary plugins to the appropriate builds.
Code:
plugins {
id 'com.android.application'
id 'kotlin-android'
id 'kotlin-kapt'
id 'kotlin-android-extensions'
}
// Apply only the services plugin that matches the flavor being built.
if (getGradle().getStartParameter().getTaskRequests().toString().toLowerCase().contains("gms")) {
apply plugin: 'com.google.gms.google-services'
} else {
apply plugin: 'com.huawei.agconnect'
}
To compare them both, let us first create a flavor dimension in our project, separating GMS (Google Mobile Services) and HMS (Huawei Mobile Services) products.
Code:
flavorDimensions "version"
productFlavors {
hms {
dimension "version"
}
gms {
dimension "version"
}
}
After that, add gms and hms directories under your src folder. The selected build variant will be highlighted in blue. Note that the "java" and "res" folders must also be added by hand inside each flavor directory! And don't worry, I'll show you how to fill them properly.
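For example, a class with the same package and name can live in both flavor source sets, and each build variant compiles against its own copy. A minimal hypothetical sketch (PlayerInfo is an illustrative name, not from the project):
Code:
// src/gms/java/<package_name>/PlayerInfo.kt
object PlayerInfo {
    const val PLAYER_NAME = "ExoPlayer"
}

// src/hms/java/<package_name>/PlayerInfo.kt (same package, same object name)
object PlayerInfo {
    const val PLAYER_NAME = "WisePlayer"
}
Shared code in src/main can reference PlayerInfo freely; the selected build variant decides which implementation gets compiled in.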
Next, enable the build features that we'll use a lot in this project.
Code:
buildFeatures {
dataBinding true
viewBinding true
}
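With viewBinding enabled, a binding class is generated for every layout. A minimal sketch of how we'll typically inflate it (ActivityMainBinding assumes a layout file named activity_main.xml):
Code:
class MainActivity : AppCompatActivity() {
    private lateinit var binding: ActivityMainBinding

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Inflate the generated binding instead of calling findViewById.
        binding = ActivityMainBinding.inflate(layoutInflater)
        setContentView(binding.root)
    }
}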
Now that we have product flavor builds, let us separate our dependencies. Note that flavor-specific implementations start with the flavor name as a prefix (gmsImplementation, hmsImplementation).
Code:
/* ExoPlayer */
def exoP = "2.12.1"
/* Slider Activity */
def roadkill = "2.1.0"
/* CameraX */
def camerax_version = "1.0.0-beta03"
def camerax_extensions = "1.0.0-alpha10"
/* BottomBar */
implementation 'nl.joery.animatedbottombar:library:1.0.9'
/* Firebase */
implementation platform('com.google.firebase:firebase-bom:26.1.0')
implementation 'com.google.firebase:firebase-analytics'
implementation 'com.google.firebase:firebase-database:19.5.1'
implementation 'com.google.firebase:firebase-storage:19.2.1'
implementation 'com.firebaseui:firebase-ui-auth:6.2.0'
implementation 'com.firebaseui:firebase-ui-database:6.2.0'
implementation 'com.firebaseui:firebase-ui-firestore:6.2.0'
implementation 'com.firebaseui:firebase-ui-storage:6.2.0'
/* Simple Video View */
implementation 'com.klinkerapps:simple_videoview:1.2.4'
/* Lottie */
implementation "com.airbnb.android:lottie: 3.4.0"
/*Sliding Activity*/
implementation "com.r0adkll:slidableactivity:$roadkill"
/* Exo */
gmsImplementation "com.google.android.exoplayer:exoplayer:$exoP"
gmsImplementation "com.google.android.exoplayer:exoplayer-core:$exoP"
gmsImplementation "com.google.android.exoplayer:exoplayer-dash:$exoP"
gmsImplementation "com.google.android.exoplayer:exoplayer-ui:$exoP"
gmsImplementation "com.google.android.exoplayer:exoplayer-hls:$exoP"
gmsImplementation "com.google.android.exoplayer:extension-okhttp:$exoP"
/* Huawei */
implementation 'com.huawei.agconnect:agconnect-core:1.4.1.300'
hmsImplementation 'com.huawei.agconnect:agconnect-auth:1.4.1.300'
hmsImplementation 'com.huawei.hms:hwid:5.0.3.302'
hmsImplementation 'com.huawei.hms:videokit-player:1.0.1.300'
// CameraX core library using the camera2 implementation
//noinspection GradleDependency
implementation "androidx.camera:camera-core:${camerax_version}"
//noinspection GradleDependency
implementation "androidx.camera:camera-camera2:${camerax_version}"
// If you want to additionally use the CameraX Lifecycle library
//noinspection GradleDependency
implementation "androidx.camera:camera-lifecycle:${camerax_version}"
// If you want to additionally use the CameraX View class
//noinspection GradleDependency
implementation "androidx.camera:camera-view:${camerax_extensions}"
// If you want to additionally use the CameraX Extensions library
//noinspection GradleDependency
implementation "androidx.camera:camera-extensions:${camerax_extensions}"
// Login
…
Adding Predefined Video URLs
These URLs will serve as our predefined test sources.
XML:
<resources>
<string-array name="data_url">
<item>http://videoplay-mos-dra.dbankcdn.com/P_VT/video_injection/61/v3/519249A7370974110613784576/MP4Mix_H.264_1920x1080_6000_HEAAC1_PVC_NoCut.mp4?accountinfo=Qj3ukBa%2B5OssJ6UBs%2FNh3iJ24kpPHADlWrk80tR3gxSjRYb5YH0Gk7Vv6TMUZcd5Q%2FK%2BEJYB%2BKZvpCwiL007kA%3D%3D%3A20200720094445%3AUTC%2C%2C%2C20200720094445%2C%2C%2C-1%2C1%2C0%2C%2C%2C1%2C%2C%2C%2C1%2C%2C0%2C%2C%2C%2C%2C1%2CEND&GuardEncType=2&contentCode=M2020072015070339800030113000000&spVolumeId=MP2020072015070339800030113000000&server=videocontent-dra.himovie.hicloud.com&protocolType=1&formatPriority=504*%2C204*%2C2</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ElephantsDream.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerEscapes.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerFun.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerJoyrides.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerMeltdowns.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/Sintel.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/SubaruOutbackOnStreetAndDirt.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/TearsOfSteel.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/VolkswagenGTIReview.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/WeAreGoingOnBullrun.mp4</item>
<item>http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/WhatCarCanYouGetForAGrand.mp4</item>
</string-array>
<string-array name="data_name">
<item>Analytics - HMS Core</item>
<item>Big Buck Bunny</item>
<item>Elephant Dream</item>
<item>For Bigger Blazes</item>
<item>For Bigger Escape</item>
<item>For Bigger Fun</item>
<item>For Bigger Joyrides</item>
<item>For Bigger Meltdowns</item>
<item>Sintel</item>
<item>Subaru Outback On Street And Dirt</item>
<item>Tears of Steel</item>
<item>Volkswagen GTI Review</item>
<item>We Are Going On Bullrun</item>
<item>What car can you get for a grand?</item>
</string-array>
</resources>
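A minimal sketch of how these arrays could be consumed from an Activity or Fragment, pairing each title with its URL:
Code:
val urls = resources.getStringArray(R.array.data_url)
val names = resources.getStringArray(R.array.data_name)
// Pair each video name with its URL, preserving order.
val videos: List<Pair<String, String>> = names.zip(urls)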
Defining Firebase constants
I am skipping the Firebase project setup and moving directly to the part that interests you the most, my dear reader. Since we are about to bind videos to each individual user, there must also be a login process; I'll leave that part to you, my dear reader.
Let us start by getting our logged-in user's ID.
Code:
object AppUser {
private var userId = ""
fun setUserId(userId: String) {
if (userId != "")
this.userId = userId
else
this.userId = "dummy"
}
fun getUserId() = userId
}
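After a successful sign-in (for example with FirebaseAuth), we can store the ID once and read it anywhere; a minimal sketch:
Code:
// Falls back to "dummy" inside setUserId when the ID is empty.
AppUser.setUserId(FirebaseAuth.getInstance().currentUser?.uid ?: "")
Log.d("AppUser", "Current user: ${AppUser.getUserId()}")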
Firebase Database Helper:
Code:
object FirebaseDbHelper {
fun rootRef() : DatabaseReference {
return FirebaseDatabase.getInstance().reference
}
fun getUser(userID : String) : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("User").child(userID)
}
fun onDisconnect(db : DatabaseReference) : Task<Void> {
return db.child("onlineStatus").onDisconnect().setValue(ServerValue.TIMESTAMP)
}
fun onConnect(db : DatabaseReference) : Task<Void> {
return db.child("onlineStatus").setValue("true")
}
fun getUserInfo(userID : String) : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("UserInfo").child(userID)
}
fun getVideoFeed(userID : String) : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("UserFeed/Video/$userID")
}
fun getVideoFeedItem(userID: String, listId:String):DatabaseReference{
return FirebaseDatabase.getInstance().reference.child("UserFeed/Video/$userID/$listId")
}
fun getShared(): DatabaseReference{
return FirebaseDatabase.getInstance().reference.child("uploads/Shareable")
}
fun getShareItem(listId: String):DatabaseReference{
return FirebaseDatabase.getInstance().reference.child("uploads/Shareable/$listId")
}
fun getUnSharedItem(listId: String): DatabaseReference{
return FirebaseDatabase.getInstance().reference.child("uploads/UnShareable/$listId")
}
fun getPostMessageRootRef() : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("PostMessage")
}
fun getPostMessageRef(listID : String) : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("PostMessage/$listID")
}
fun getPostMessageUnderRef(postID : String, listID : String) : DatabaseReference {
return FirebaseDatabase.getInstance().reference.child("PostMessageUnder/$postID/$listID/Message")
}
}
Now that we have our helper, let's create our global constants:
Code:
class Constants {
companion object {
/* Firebase References */
val fUserInfoDB = FirebaseDbHelper.getUserInfo(AppUser.getUserId())
val fFeedRef = FirebaseDbHelper.getVideoFeed(AppUser.getUserId())
val fSharedRef = FirebaseDbHelper.getShared()
/* Recording Video */
const val FILENAME = "yyyy-MM-dd-HH-mm-ss-SSS"
const val VIDEO_EXTENSION = ".mp4"
var recPath = Environment.getExternalStorageDirectory().path + "/Pictures/ExoVideoReference"
fun getOutputDirectory(context: Context): File {
val appContext = context.applicationContext
val mediaDir = context.externalMediaDirs.firstOrNull()?.let {
File(
recPath
).apply { mkdirs() }
}
return if (mediaDir != null && mediaDir.exists()) mediaDir else appContext.filesDir
}
fun createFile(baseFolder: File, format: String, extension: String) =
File(
baseFolder, SimpleDateFormat(format, Locale.ROOT)
.format(System.currentTimeMillis()) + extension
)
}
}
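A minimal sketch of how these helpers fit together when naming a CameraX recording target (assuming a valid Context is at hand):
Code:
val outputDir = Constants.getOutputDirectory(context)
val videoFile = Constants.createFile(outputDir, Constants.FILENAME, Constants.VIDEO_EXTENSION)
// Produces something like .../Pictures/ExoVideoReference/2020-11-25-14-30-05-123.mp4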
Let's create the data classes we'll use:
Code:
object DataClass {
data class ProfileVideoDataClass(
val shareStat: String = "",
val like: String = "",
val timeDate: String = "",
val timeMillis: String = "",
val uploaderID: String = "",
val videoUrl: String = "",
val videoName:String = ""
)
data class UploadsShareableDataClass(
val like: String = "",
val timeDate: String = "",
val timeMillis: String = "",
val uploaderID: String = "",
val videoUrl: String = "",
val videoName:String = ""
)
data class PostMessageDataClass(
val comment : String = "",
val comment_lovely : String = "",
val commenter_ID : String = "",
val commenter_image : String = "",
val commenter_name : String = "",
val time : String = "",
val type : String = ""
)
}
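A minimal sketch of how the helper and these data classes work together, reading the logged-in user's video feed once with a standard Firebase ValueEventListener:
Code:
FirebaseDbHelper.getVideoFeed(AppUser.getUserId())
    .addListenerForSingleValueEvent(object : ValueEventListener {
        override fun onDataChange(snapshot: DataSnapshot) {
            // Map each child node to the data class defined above.
            val feed = snapshot.children.mapNotNull {
                it.getValue(DataClass.ProfileVideoDataClass::class.java)
            }
            Log.d("VideoFeed", "Loaded ${feed.size} videos")
        }

        override fun onCancelled(error: DatabaseError) {
            Log.e("VideoFeed", "Feed read cancelled", error.toException())
        }
    })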
Custom URL Test Implementation with JUnit
We need a test-backed validator to help our users enter a valid URL address. For that, let's begin by creating a new URL validator class:
Code:
class UrlValidatorHelper : TextWatcher {
internal var isValid = false
override fun beforeTextChanged(s: CharSequence?, start: Int, count: Int, after: Int) {}
override fun onTextChanged(s: CharSequence?, start: Int, before: Int, count: Int) {}
override fun afterTextChanged(s: Editable?) {
isValid = isValidUrl(s)
}
companion object {
fun isValidUrl(url: CharSequence?): Boolean {
return url!=null && URLUtil.isValidUrl(url.trim().toString()) && Patterns.WEB_URL.matcher(url).matches()
}
}
}
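A minimal sketch of wiring the validator to an input field (urlEditText, submitButton and playVideo are hypothetical names from your own layout and code):
Code:
val urlValidator = UrlValidatorHelper()
urlEditText.addTextChangedListener(urlValidator)
submitButton.setOnClickListener {
    // Only proceed when the watcher has marked the current text as a valid URL.
    if (urlValidator.isValid) playVideo(urlEditText.text.toString())
}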
Now that we have the URL helper, let's create our test class under test >> java >> "package_name" >> UrlValidatorTest:
Code:
class UrlValidatorTest {
@Test
fun urlValidator_InvalidCertificate_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl("www.youtube.com/watch?v=Yr8xDSPjII8&list=RDMMYr8xDSPjII8&start_radio=1"))
}
@Test
fun urlValidator_InvalidIdentifier_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl("https://youtube.com/watch?v=Yr8xDSPjII8&list=RDMMYr8xDSPjII8&start_radio=1"))
}
@Test
fun urlValidator_InvalidDomain_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl("https://www.com/watch?v=Yr8xDSPjII8&list=RDMMYr8xDSPjII8&start_radio=1"))
}
@Test
fun urlValidator_InvalidExtension_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl("https://www.youtube/watch?v=Yr8xDSPjII8&list=RDMMYr8xDSPjII8&start_radio=1"))
}
@Test
fun urlValidator_EmptyUrl_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl(""))
}
@Test
fun urlValidator_NullUrl_RunsFalse(){
assertFalse(UrlValidatorHelper.isValidUrl(null))
}
}
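One positive case rounds out the suite. Note that URLUtil and Patterns are Android framework classes, so these assertions only return real results in an instrumented or Robolectric test environment; a minimal sketch:
Code:
@Test
fun urlValidator_ValidUrl_ReturnsTrue() {
    // A well-formed https URL should pass both URLUtil and Patterns checks.
    assertTrue(UrlValidatorHelper.isValidUrl("https://www.youtube.com/watch?v=Yr8xDSPjII8"))
}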
------------------------------
END OF PART - I


Related

Object Detection & Tracking with HMS ML Kit (Video Mode)

For more articles like this, you can visit the HUAWEI Developer Forum.
In this article I will tell you about object detection with HMS ML Kit first, and then we are going to build an Android application which uses HMS ML Kit to detect and track objects in a camera stream. If you haven't read my last article on detecting objects in static images yet, here it is. You can also find introductory information about Artificial Intelligence, Machine Learning and Huawei ML Kit's capabilities in that article.
The object detection and tracking service can detect and track multiple objects in an image. The detected objects can be located and classified in real time. It is also an ideal choice for filtering out unwanted objects in an image. By the way, Huawei ML Kit provides on-device object detection capabilities, so it is completely free.
Let's not waste our precious time and start building our sample project step by step!
1. If you haven't registered as a Huawei Developer yet, here is the link.
2. Create a Project on AppGalleryConnect. You can follow the steps shown here.
3. In HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs. Make sure ML Kit is activated.
4. Integrate ML Kit SDK into your project. Your app level build.gradle will look like this:
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
android {
compileSdkVersion 29
buildToolsVersion "29.0.3"
defaultConfig {
applicationId "com.demo.objectdetection"
minSdkVersion 21
targetSdkVersion 29
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
kotlinOptions { jvmTarget = "1.8" }
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
implementation 'androidx.appcompat:appcompat:1.1.0'
implementation 'androidx.core:core-ktx:1.3.0'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.1'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
//HMS ML Kit
implementation 'com.huawei.hms:ml-computer-vision:1.0.2.300'
}
and your project-level build.gradle is like this:
Code:
buildscript {
ext.kotlin_version = '1.3.72'
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
classpath 'com.android.tools.build:gradle:3.6.3'
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
5. Create the layout first. There will be two SurfaceViews: the first displays our camera stream, and the second is where we draw our canvas. We will draw rectangles around detected objects, write their respective types on the canvas, and show this canvas on our second SurfaceView. Here is the sample:
Code:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<SurfaceView
android:id="@+id/surface_view_camera"
android:layout_width="match_parent"
android:layout_height="match_parent" />
<SurfaceView
android:id="@+id/surface_view_overlay"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
6. By the way, make sure you set your activity style to "Theme.AppCompat.Light.NoActionBar" or similar in res/styles.xml to hide the action bar.
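If your project doesn't have such a style yet, a minimal sketch of res/values/styles.xml could look like this (the color resources are assumptions from a default project template):
Code:
<style name="AppTheme" parent="Theme.AppCompat.Light.NoActionBar">
    <item name="colorPrimary">@color/colorPrimary</item>
    <item name="colorPrimaryDark">@color/colorPrimaryDark</item>
    <item name="colorAccent">@color/colorAccent</item>
</style>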
7.1. HMS ML Kit has two important classes that help us detect objects: MLObjectAnalyzer and LensEngine. MLObjectAnalyzer detects object information (MLObject) in an image, and we can customize it using MLObjectAnalyzerSetting. Here is our createAnalyzer method:
Code:
private fun createAnalyzer(): MLObjectAnalyzer {
val analyzerSetting = MLObjectAnalyzerSetting.Factory()
.setAnalyzerType(MLObjectAnalyzerSetting.TYPE_VIDEO)
.allowMultiResults()
.allowClassification()
.create()
return MLAnalyzerFactory.getInstance().getLocalObjectAnalyzer(analyzerSetting)
}
7.2. The other important class that we are using today is LensEngine. LensEngine is responsible for camera initialization, frame obtaining, and logic control functions. Here is our createLensEngine method:
Code:
private fun createLensEngine(orientation: Int): LensEngine {
val lensEngineCreator = LensEngine.Creator(applicationContext, mAnalyzer)
.setLensType(LensEngine.BACK_LENS)
.applyFps(10F)
.enableAutomaticFocus(true)
return when(orientation) {
Configuration.ORIENTATION_PORTRAIT ->
lensEngineCreator.applyDisplayDimension(getDisplayMetrics().heightPixels, getDisplayMetrics().widthPixels).create()
else ->
lensEngineCreator.applyDisplayDimension(getDisplayMetrics().widthPixels, getDisplayMetrics().heightPixels).create()
}
}
8. LensEngine handles camera frames, and MLObjectAnalyzer detects MLObjects in those frames. Now we need to create our ObjectAnalyzerTransactor class, which implements the MLAnalyzer.MLTransactor interface. The detected MLObjects are delivered to the transactResult method of this class. I will share our ObjectAnalyzerTransactor class here, with an additional draw method for drawing rectangles and some text around detected objects.
Code:
package com.demo.objectdetection
import android.graphics.Color
import android.graphics.Paint
import android.graphics.PorterDuff
import android.util.Log
import android.util.SparseArray
import android.view.SurfaceHolder
import androidx.core.util.forEach
import androidx.core.util.isNotEmpty
import androidx.core.util.valueIterator
import com.huawei.hms.mlsdk.common.MLAnalyzer
import com.huawei.hms.mlsdk.objects.MLObject
class ObjectAnalyzerTransactor : MLAnalyzer.MLTransactor<MLObject> {
companion object {
private const val TAG = "ML_ObAnalyzerTransactor"
}
private var mSurfaceHolderOverlay: SurfaceHolder? = null
fun setSurfaceHolderOverlay(surfaceHolder: SurfaceHolder) {
mSurfaceHolderOverlay = surfaceHolder
}
override fun transactResult(results: MLAnalyzer.Result<MLObject>?) {
val items = results?.analyseList
items?.forEach { key, value ->
Log.d(TAG, "transactResult -> " +
"Border: ${value.border} " + //Rectangle around this object
"Type Possibility: ${value.typePossibility} " + //Possibility between 0-1
"Tracing Identity: ${value.tracingIdentity} " + //Tracing number of this object
"Type Identity: ${value.typeIdentity}") //Furniture, Plant, Food etc.
}
items?.also {
draw(it)
}
}
private fun draw(items: SparseArray<MLObject>) {
val canvas = mSurfaceHolderOverlay?.lockCanvas()
if (canvas != null) {
//Clear canvas first
canvas.drawColor(0, PorterDuff.Mode.CLEAR)
for (item in items.valueIterator()) {
val type = getItemType(item)
//Draw a rectangle around detected object.
val rectangle = item.border
Paint().also {
it.color = Color.YELLOW
it.style = Paint.Style.STROKE
it.strokeWidth = 8F
canvas.drawRect(rectangle, it)
}
//Draw text on the upper left corner of the detected object, writing its type.
Paint().also {
it.color = Color.BLACK
it.style = Paint.Style.FILL
it.textSize = 24F
canvas.drawText(type, (rectangle.left).toFloat(), (rectangle.top).toFloat(), it)
}
}
}
mSurfaceHolderOverlay?.unlockCanvasAndPost(canvas)
}
private fun getItemType(item: MLObject) = when(item.typeIdentity) {
MLObject.TYPE_OTHER -> "Other"
MLObject.TYPE_FACE -> "Face"
MLObject.TYPE_FOOD -> "Food"
MLObject.TYPE_FURNITURE -> "Furniture"
MLObject.TYPE_PLACE -> "Place"
MLObject.TYPE_PLANT -> "Plant"
MLObject.TYPE_GOODS -> "Goods"
else -> "No match"
}
override fun destroy() {
Log.d(TAG, "destroy")
}
}
9. Our LensEngine needs a surfaceHolder to run on, therefore we will start it when our surfaceHolder is ready. Here is our callback:
Code:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
mLensEngine.close()
init()
mLensEngine.run(holder)
}
override fun surfaceDestroyed(holder: SurfaceHolder?) {
mLensEngine.release()
}
override fun surfaceCreated(holder: SurfaceHolder?) {
mLensEngine.run(holder)
}
}
10. We require the CAMERA and WRITE_EXTERNAL_STORAGE permissions. Make sure you add them to your AndroidManifest.xml file and request them from the user at runtime. For the sake of simplicity, we do it as shown below:
Code:
class MainActivity : AppCompatActivity() {
companion object {
private const val PERMISSION_REQUEST_CODE = 8
private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
}
override fun onCreate(savedInstanceState: Bundle?) {
if (hasPermissions(requiredPermissions))
init()
else
ActivityCompat.requestPermissions(this, requiredPermissions, PERMISSION_REQUEST_CODE)
}
private fun hasPermissions(permissions: Array<String>) = permissions.all {
ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (requestCode == PERMISSION_REQUEST_CODE && hasPermissions(requiredPermissions))
init()
}
}
11. Let's bring all the pieces together. Here is our MainActivity.
Code:
package com.demo.objectdetection
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.content.res.Configuration
import android.graphics.PixelFormat
import android.os.Bundle
import android.util.DisplayMetrics
import android.view.SurfaceHolder
import android.view.WindowManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat
import com.huawei.hms.mlsdk.MLAnalyzerFactory
import com.huawei.hms.mlsdk.common.LensEngine
import com.huawei.hms.mlsdk.objects.MLObjectAnalyzer
import com.huawei.hms.mlsdk.objects.MLObjectAnalyzerSetting
import kotlinx.android.synthetic.main.activity_main.*
class MainActivity : AppCompatActivity() {
companion object {
private const val TAG = "ML_MainActivity"
private const val PERMISSION_REQUEST_CODE = 8
private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
}
private lateinit var mAnalyzer: MLObjectAnalyzer
private lateinit var mLensEngine: LensEngine
private lateinit var mSurfaceHolderCamera: SurfaceHolder
private lateinit var mSurfaceHolderOverlay: SurfaceHolder
private lateinit var mObjectAnalyzerTransactor: ObjectAnalyzerTransactor
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
if (hasPermissions(requiredPermissions))
init()
else
ActivityCompat.requestPermissions(this, requiredPermissions, PERMISSION_REQUEST_CODE)
}
private fun init() {
mAnalyzer = createAnalyzer()
mLensEngine = createLensEngine(resources.configuration.orientation)
mSurfaceHolderCamera = surface_view_camera.holder
mSurfaceHolderOverlay = surface_view_overlay.holder
mSurfaceHolderOverlay.setFormat(PixelFormat.TRANSPARENT)
mSurfaceHolderCamera.addCallback(surfaceHolderCallback)
mObjectAnalyzerTransactor = ObjectAnalyzerTransactor()
mObjectAnalyzerTransactor.setSurfaceHolderOverlay(mSurfaceHolderOverlay)
mAnalyzer.setTransactor(mObjectAnalyzerTransactor)
}
private fun createAnalyzer(): MLObjectAnalyzer {
val analyzerSetting = MLObjectAnalyzerSetting.Factory()
.setAnalyzerType(MLObjectAnalyzerSetting.TYPE_VIDEO)
.allowMultiResults()
.allowClassification()
.create()
return MLAnalyzerFactory.getInstance().getLocalObjectAnalyzer(analyzerSetting)
}
private fun createLensEngine(orientation: Int): LensEngine {
val lensEngineCreator = LensEngine.Creator(applicationContext, mAnalyzer)
.setLensType(LensEngine.BACK_LENS)
.applyFps(10F)
.enableAutomaticFocus(true)
return when(orientation) {
Configuration.ORIENTATION_PORTRAIT ->
lensEngineCreator.applyDisplayDimension(getDisplayMetrics().heightPixels, getDisplayMetrics().widthPixels).create()
else ->
lensEngineCreator.applyDisplayDimension(getDisplayMetrics().widthPixels, getDisplayMetrics().heightPixels).create()
}
}
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
mLensEngine.close()
init()
mLensEngine.run(holder)
}
override fun surfaceDestroyed(holder: SurfaceHolder?) {
mLensEngine.release()
}
override fun surfaceCreated(holder: SurfaceHolder?) {
mLensEngine.run(holder)
}
}
override fun onDestroy() {
super.onDestroy()
//Release resources
mAnalyzer.stop()
mLensEngine.release()
}
private fun getDisplayMetrics() = DisplayMetrics().let {
(getSystemService(Context.WINDOW_SERVICE) as WindowManager).defaultDisplay.getMetrics(it)
it
}
private fun hasPermissions(permissions: Array<String>) = permissions.all {
ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (requestCode == PERMISSION_REQUEST_CODE && hasPermissions(requiredPermissions))
init()
}
}
12. In summary, we used LensEngine to handle camera frames for us and displayed the frames on our first surfaceView. MLObjectAnalyzer then analyzed these frames, and the detected objects arrived in the transactResult method of our ObjectAnalyzerTransactor class. In this method we iterated through all detected objects and drew them on our second surfaceView, which we used as an overlay. Here is the output:

Build a Face Detection App with Huawei ML Kit

Hi all,
In the era of powerful mobile devices, we store thousands of photos, have video calls, shop, manage our bank accounts and perform many other tasks in the palm of our hands. We can also take photos and videos or have video calls with the cameras integrated into our mobile phones. But it would be inefficient to use these cameras we carry around all day only for taking raw photos and videos. They can do more, much more.
Face detection services are used by many applications across different industries, mainly for security and entertainment purposes. For example, a taxi app can use it to identify its customers, a smart home app can use it to identify guests and announce to the host who is ringing the doorbell, an entertainment app can use it to draw a moustache on faces, and a driving app can use it to detect whether a driver's eyes are open and warn the driver if they are closed.
In contrast to the breadth of areas in which face detection can be used and the importance of the tasks it performs, it is really easy to implement a face detection app with the help of Huawei ML Kit. As it is a device-side capability that works on all Android devices with ARM architecture, it is completely free, faster, and more secure than cloud-based services. The face detection service can detect the shapes and features of your user's face, including their facial expression, age, gender, and what they are wearing.
With the face detection service you can detect up to 855 face contour points to locate face coordinates, including the face contour, eyebrows, eyes, nose, mouth, and ears, and identify the pitch, yaw, and roll angles of a face. You can detect seven facial features: the possibility of the left eye being open, the possibility of the right eye being open, the possibilities of wearing glasses, a hat or a beard, gender, and age. In addition to these, you can also detect facial expressions, namely smiling, neutral, anger, disgust, fear, sadness, and surprise.
Let’s start to build our demo application step by step from scratch!
1. Firstly, let's create our project in Android Studio. We can create a project by selecting the Empty Activity option and then follow the steps described in this post to create and sign our project in AppGallery Connect. You can follow this guide.
2. Secondly, In HUAWEI Developer AppGallery Connect, go to Develop > Manage APIs. Make sure ML Kit is activated.
3. Now we have integrated Huawei Mobile Services (HMS) into our project. Let's follow the documentation on developer.huawei.com and find the packages to add to our project. On the website, click Developer > HMS Core > AI > ML Kit. There you will find introductory information about the services, references, SDKs to download, and more. Under the ML Kit tab, follow Android > Getting Started > Integrating HMS Core SDK > Adding Build Dependencies > Integrating the Face Detection SDK. We can follow that guide to add the face detection capability to our project. To explore this service further later on, I added all three model packages shown there in addition to the base SDK; you can choose only the base SDK or select packages according to your needs. After the integration, your app-level build.gradle file will look like this.
Code:
apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply plugin: 'kotlin-android-extensions'
apply plugin: 'com.huawei.agconnect'
android {
compileSdkVersion 29
buildToolsVersion "30.0.1"
defaultConfig {
applicationId "com.demo.faceapp"
minSdkVersion 21
targetSdkVersion 29
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: "libs", include: ["*.jar"])
implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
implementation 'androidx.core:core-ktx:1.3.1'
implementation 'androidx.appcompat:appcompat:1.2.0'
implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'androidx.test.ext:junit:1.1.1'
androidTestImplementation 'androidx.test.espresso:espresso-core:3.2.0'
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
// Import the contour and key point detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
// Import the facial expression detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
// Import the facial feature detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}
And your project-level build.gradle file will look like this.
Code:
buildscript {
ext.kotlin_version = "1.3.72"
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
dependencies {
classpath "com.android.tools.build:gradle:4.0.1"
classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:$kotlin_version"
classpath 'com.huawei.agconnect:agcp:1.3.1.300'
}
}
allprojects {
repositories {
google()
jcenter()
maven { url 'https://developer.huawei.com/repo/' }
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
Don't forget to add the following meta-data tag inside the application tag of your AndroidManifest.xml. It enables automatic updating of the machine learning model.
Code:
<?xml version="1.0" encoding="utf-8"?>
<manifest ...
<application ...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="face" />
</application>
</manifest>
4. Now we can choose between detecting faces on a static image or on a camera stream. Let's go with detecting faces on a camera stream for this example. First, let's create our analyzer. Its type is MLFaceAnalyzer, and it is responsible for analyzing the detected faces. Here is a sample implementation; we could also use our MLFaceAnalyzer with default settings to keep things simple.
Code:
private lateinit var mAnalyzer: MLFaceAnalyzer
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
mAnalyzer = createAnalyzer()
}
private fun createAnalyzer(): MLFaceAnalyzer {
val settings = MLFaceAnalyzerSetting.Factory()
.allowTracing()
.setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
.setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
.setMinFaceProportion(.5F)
.setKeyPointType(MLFaceAnalyzerSetting.TYPE_KEYPOINTS)
.create()
return MLAnalyzerFactory.getInstance().getFaceAnalyzer(settings)
}
5. Create a simple layout. Use two SurfaceViews, one above the other: one for the camera frames and one for our overlay, because later we will draw some shapes on the overlay view. Here is a sample layout.
Code:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<SurfaceView
android:id="@+id/surfaceViewCamera"
android:layout_width="match_parent"
android:layout_height="match_parent" />
<SurfaceView
android:id="@+id/surfaceViewOverlay"
android:layout_width="match_parent"
android:layout_height="match_parent" />
</androidx.constraintlayout.widget.ConstraintLayout>
6. We should prepare our views. We need two surfaceHolders in our application. We are going to make surfaceHolderOverlay transparent, because we want to see our camera frames. Later we are going to add a callback to our surfaceHolderCamera to know when it’s created, changed and destroyed. Let’s create them.
Code:
private lateinit var mAnalyzer: MLFaceAnalyzer
private var surfaceHolderCamera: SurfaceHolder? = null
private var surfaceHolderOverlay: SurfaceHolder? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
mAnalyzer = createAnalyzer()
prepareViews()
}
private fun prepareViews() {
surfaceHolderCamera = surfaceViewCamera.holder
surfaceHolderOverlay = surfaceViewOverlay.holder
surfaceHolderOverlay?.setFormat(PixelFormat.TRANSPARENT)
surfaceHolderCamera?.addCallback(surfaceHolderCallback)
}
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
}
override fun surfaceDestroyed(holder: SurfaceHolder?) {
}
override fun surfaceCreated(holder: SurfaceHolder?) {
}
}
7. Now we can create our LensEngine. It is a magic class that handles camera frames for us. You can set different settings to your LensEngine. Here is how you can create it simply. As you can see in the example, the order of width and height passed to LensEngine changes according to orientation. We can create our LensEngine inside surfaceChanged method of surfaceHolderCallback and release it inside surfaceDestroyed. Here is an example of creating and running LensEngine. LensEngine needs a surfaceHolder or a surfaceTexture to run on.
Code:
private lateinit var mAnalyzer: MLFaceAnalyzer
private lateinit var mLensEngine: LensEngine
private var surfaceHolderCamera: SurfaceHolder? = null
private var surfaceHolderOverlay: SurfaceHolder? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
mAnalyzer = createAnalyzer()
prepareViews()
}
private fun prepareViews() {
surfaceHolderCamera = surfaceViewCamera.holder
surfaceHolderOverlay = surfaceViewOverlay.holder
surfaceHolderOverlay?.setFormat(PixelFormat.TRANSPARENT)
surfaceHolderCamera?.addCallback(surfaceHolderCallback)
}
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
mLensEngine = createLensEngine(width, height)
mLensEngine.run(holder)
}
override fun surfaceDestroyed(holder: SurfaceHolder?) {
mLensEngine.release()
}
override fun surfaceCreated(holder: SurfaceHolder?) {
}
}
private fun createLensEngine(width: Int, height: Int): LensEngine {
val lensEngineCreator = LensEngine.Creator(this, mAnalyzer)
.applyFps(20F)
.setLensType(LensEngine.FRONT_LENS)
.enableAutomaticFocus(true)
return if (resources.configuration.orientation == Configuration.ORIENTATION_PORTRAIT) {
lensEngineCreator.let {
it.applyDisplayDimension(height, width)
it.create()
}
} else {
lensEngineCreator.let {
it.applyDisplayDimension(width, height)
it.create()
}
}
}
8. We also need somewhere to receive the detected results and interact with them. For this purpose we create our FaceAnalyzerTransactor class; you can name it as you wish. It should implement the MLAnalyzer.MLTransactor<MLFace> interface. We are going to set an overlay of type SurfaceHolder, get our canvas from this overlay, and draw some shapes on the canvas. The required data about the detected face is available inside the transactResult method. Here is a sample implementation of the whole FaceAnalyzerTransactor class.
Code:
class FaceAnalyzerTransactor : MLAnalyzer.MLTransactor<MLFace> {
private var mOverlay: SurfaceHolder? = null
fun setOverlay(surfaceHolder: SurfaceHolder) {
mOverlay = surfaceHolder
}
override fun transactResult(result: MLAnalyzer.Result<MLFace>?) {
draw(result?.analyseList)
}
private fun draw(faces: SparseArray<MLFace>?) {
val canvas = mOverlay?.lockCanvas()
if (canvas != null && faces != null) {
//Clear the canvas
canvas.drawColor(0, PorterDuff.Mode.CLEAR)
for (face in faces.valueIterator()) {
//Draw all 855 points of the face. If Front Lens is selected, change x points side.
for (point in face.allPoints) {
val x = mOverlay?.surfaceFrame?.right?.minus(point.x)
if (x != null) {
Paint().also {
it.color = Color.YELLOW
it.style = Paint.Style.FILL
it.strokeWidth = 16F
canvas.drawPoint(x, point.y, it)
}
}
}
//Prepare a string to show if the user smiles or not and draw a text on the canvas.
val smilingString = if (face.emotions.smilingProbability > 0.5) "SMILING" else "NOT SMILING"
Paint().also {
it.color = Color.RED
it.textSize = 60F
it.textAlign = Paint.Align.CENTER
canvas.drawText(smilingString, face.border.exactCenterX(), face.border.exactCenterY(), it)
}
}
mOverlay?.unlockCanvasAndPost(canvas)
}
}
override fun destroy() {
}
}
9. Create a FaceAnalyzerTransactor instance in MainActivity and use it as shown below.
Code:
private lateinit var mAnalyzer: MLFaceAnalyzer
private lateinit var mLensEngine: LensEngine
private lateinit var mFaceAnalyzerTransactor: FaceAnalyzerTransactor
private var surfaceHolderCamera: SurfaceHolder? = null
private var surfaceHolderOverlay: SurfaceHolder? = null
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
init()
}
private fun init() {
mAnalyzer = createAnalyzer()
mFaceAnalyzerTransactor = FaceAnalyzerTransactor()
mAnalyzer.setTransactor(mFaceAnalyzerTransactor)
prepareViews()
}
Also, don’t forget to set the overlay of our transactor. We can do this inside surfaceChanged method like this.
Code:
private val surfaceHolderCallback = object : SurfaceHolder.Callback {
override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
mLensEngine = createLensEngine(width, height)
surfaceHolderOverlay?.let { mFaceAnalyzerTransactor.setOverlay(it) }
mLensEngine.run(holder)
}
override fun surfaceDestroyed(holder: SurfaceHolder?) {
mLensEngine.release()
}
override fun surfaceCreated(holder: SurfaceHolder?) {
}
}
10. We are almost done! Don't forget to ask the user for permissions. We need the CAMERA and WRITE_EXTERNAL_STORAGE permissions; WRITE_EXTERNAL_STORAGE is used for automatically updating the machine learning model. Add these permissions to your AndroidManifest.xml and ask the user to grant them at runtime. Here is a simple example.
Code:
private val requiredPermissions = arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE)
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
if (hasPermissions(requiredPermissions))
init()
else
ActivityCompat.requestPermissions(this, requiredPermissions, 0)
}
private fun hasPermissions(permissions: Array<String>) = permissions.all {
ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
if (requestCode == 0 && grantResults.isNotEmpty() && hasPermissions(requiredPermissions))
init()
}
11. Well done! We have finished all the steps and created our project. Now we can test it. Here are some examples.
12. We have created a simple FaceApp which detects faces, facial features and emotions. You can build countless types of face detection apps; it is up to your imagination. ML Kit empowers your apps with the power of AI. If you have any questions, please ask through the link below. You can also find this project on GitHub.
Related Links
Thanks to Oğuzhan Demirci for this article.
Original post: https://medium.com/huawei-developers/build-a-face-detection-app-with-huawei-ml-kit-32caec06484

Creating a simple News Searcher with Huawei Search Kit

Introduction
Hello reader, in this article I am going to demonstrate how to utilize Huawei Mobile Services (HMS) Search Kit to search for news articles from the web with customizable parameters. I will also show you how to use tools like the auto-suggestion and spellcheck capabilities provided by HMS Search Kit.
Getting Started
First, we need to follow the instructions on the official website to integrate Search Kit into our app.
Getting Started
After we’re done with that, let’s start coding. First, we need to initialize Search Kit in our Application/Activity.
Code:
@HiltAndroidApp
class NewsApp : Application() {
override fun onCreate() {
super.onCreate()
SearchKitInstance.init(this, YOUR_APP_ID)
}
}
Next, let's not forget to add our Application class to the manifest. Also, to allow HTTP network requests on devices with targetSdkVersion 28 or later, we need to allow cleartext traffic. (Search Kit doesn't support a minSdkVersion below 24.)
XML:
<application
android:name=".NewsApp"
android:usesCleartextTraffic="true">
...
</application>
Acquiring Access Token
The token is used to verify a search request on the server; search results are returned only after the verification succeeds. Therefore, before we implement any search functions, we need to get the access token first.
OAuth 2.0-based Authentication
If you scroll down, you will see a method called Client Credentials, which does not require authorization from a user. In this mode, your app can generate an access token to access Huawei public app-level APIs, which is exactly what we need. I have used Retrofit to do this job.
Let’s create a data class that represents the token response from Huawei servers.
Code:
data class TokenResponse(val access_token: String, val expires_in: Int, val token_type: String)
Then, let’s create an interface like below to generate Retrofit Service.
Code:
interface TokenRequestService {
@FormUrlEncoded
@POST("oauth2/v3/token")
suspend fun getRequestToken(
@Field("grant_type") grantType: String,
@Field("client_id") clientId: String,
@Field("client_secret") clientSecret: String
): TokenResponse
}
Then, let’s create a repository class to call our API service.
Code:
class NewsRepository(
private val tokenRequestService: TokenRequestService
) {
suspend fun getRequestToken() = tokenRequestService.getRequestToken(
"client_credentials",
YOUR_APP_ID,
YOUR_APP_SECRET
)
}
You can find your App ID and App Secret in the AppGallery Connect console.
I have used Dagger Hilt to provide the repository for the view models that need it. Here is the RepositoryModule class that creates the objects to be injected into the view models.
Code:
@InstallIn(SingletonComponent::class)
@Module
class RepositoryModule {
@Provides
@Singleton
fun provideRepository(
tokenRequestService: TokenRequestService
): NewsRepository {
return NewsRepository(tokenRequestService)
}
@Provides
@Singleton
fun providesOkHttpClient(): OkHttpClient {
return OkHttpClient.Builder().build()
}
@Provides
@Singleton
fun providesRetrofitClientForTokenRequest(okHttpClient: OkHttpClient): TokenRequestService {
val baseUrl = "https://oauth-login.cloud.huawei.com/"
return Retrofit.Builder()
.baseUrl(baseUrl)
.addCallAdapterFactory(CoroutineCallAdapterFactory())
.addConverterFactory(GsonConverterFactory.create())
.client(okHttpClient)
.build()
.create(TokenRequestService::class.java)
}
}
In order to inject our module, we need to add the @HiltAndroidApp annotation to our NewsApp application class, and add @AndroidEntryPoint to the fragments that need dependency injection. Now we can use our repository in our view models.
I have created a splash fragment to get the access token, because without it none of the search functionality would work.
Code:
@AndroidEntryPoint
class SplashFragment : Fragment(R.layout.fragment_splash) {
private var _binding: FragmentSplashBinding? = null
private val binding get() = _binding!!
private val viewModel: SplashViewModel by viewModels()
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
_binding = FragmentSplashBinding.bind(view)
lifecycleScope.launch {
viewModel.accessToken.collect {
if (it is TokenState.Success) {
findNavController().navigate(R.id.action_splashFragment_to_homeFragment)
}
if (it is TokenState.Failure) {
binding.progressBar.visibility = View.GONE
binding.tv.text = "An error occurred, check your connection"
}
}
}
}
override fun onDestroyView() {
super.onDestroyView()
_binding = null
}
}
Code:
class SplashViewModel @ViewModelInject constructor(private val repository: NewsRepository) :
ViewModel() {
private var _accessToken = MutableStateFlow<TokenState>(TokenState.Loading)
var accessToken: StateFlow<TokenState> = _accessToken
init {
getRequestToken()
}
private fun getRequestToken() {
viewModelScope.launch {
try {
val token = repository.getRequestToken().access_token
SearchKitInstance.getInstance()
.setInstanceCredential(token)
SearchKitInstance.instance.newsSearcher.setTimeOut(5000)
Log.d(
TAG,
"SearchKitInstance.instance.setInstanceCredential done $token"
)
_accessToken.emit(TokenState.Success(token))
} catch (e: Exception) {
Log.e(HomeViewModel.TAG, "get token error", e)
_accessToken.emit(TokenState.Failure(e))
}
}
}
companion object {
const val TAG = "SplashViewModel"
}
}
As you can see, once we receive our access token, we call the setInstanceCredential() method with the token as its parameter. I have also set a 5-second timeout for the news searcher. The SplashFragment then reacts to the change in the access token flow and navigates to the home fragment, popping the splash fragment from the back stack because we don't want to go back there. If the token request fails, the fragment shows an error message.
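Both SplashFragment and SplashViewModel rely on a TokenState type that the article doesn't list; a minimal sketch inferred from its usage above (names and fields are assumptions):
Code:
sealed class TokenState {
    object Loading : TokenState()
    data class Success(val token: String) : TokenState()
    data class Failure(val exception: Exception) : TokenState()
}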
Setting up Search Kit Functions
Since we have given Search Kit the token it requires, we can proceed with the rest. Let's add three more functions to our repository.
1. getNews()
This function takes two parameters: the search term, and the page number used for pagination. NewsState is a sealed class that represents the two states of a news search request: success or failure.
Search Kit functions are synchronous, therefore we launch them in the Dispatchers.IO context so they don't block our UI.
To start a search request, we create a CommonSearchRequest and apply our search parameters: setQ sets the search term, setLang sets the language we want our news in (I selected English), setSregion sets the region we want our news from (I selected the whole world), setPs sets how many news items we want on a single page, and setPn sets which page of news we want to get.
Then we call the search() method to get a response from the server. If it is successful, we get a result of type BaseSearchResponse<List<NewsItem>>. If it is unsuccessful (for example, when there is no network connection), we get null in return, and in that case we return a failure state.
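The repository code below also returns sealed state types that the article doesn't list; minimal sketches inferred from how they are constructed and consumed (names and fields are assumptions):
Code:
sealed class NewsState {
    data class Success(val data: List<NewsItem>) : NewsState()
    data class Error(val exception: Exception) : NewsState()
}

sealed class AutoSuggestionsState {
    data class Success(val data: List<SuggestObject>) : AutoSuggestionsState()
    data class Failure(val exception: Exception) : AutoSuggestionsState()
}

sealed class SpellCheckState {
    data class Success(val data: SpellCheckResponse) : SpellCheckState()
    data class Failure(val exception: Exception) : SpellCheckState()
}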
Code:
class NewsRepository(
private val tokenRequestService: TokenRequestService
) {
...
suspend fun getNews(query: String, pageNumber: Int): NewsState = withContext(Dispatchers.IO) {
var newsState: NewsState
Log.i(TAG, "getting news $query $pageNumber")
val commonSearchRequest = CommonSearchRequest()
commonSearchRequest.setQ(query)
commonSearchRequest.setLang(Language.ENGLISH)
commonSearchRequest.setSregion(Region.WHOLEWORLD)
commonSearchRequest.setPs(10)
commonSearchRequest.setPn(pageNumber)
try {
val result = SearchKitInstance.instance.newsSearcher.search(commonSearchRequest)
newsState = if (result != null) {
if (result.data.size > 0) {
Log.i(TAG, "got news ${result.data.size}")
NewsState.Success(result.data)
} else {
NewsState.Error(Exception("no more news"))
}
} else {
NewsState.Error(Exception("fetch news error"))
}
} catch (e: Exception) {
newsState = NewsState.Error(e)
Log.e(TAG, "caught news search exception", e)
}
return@withContext newsState
}
suspend fun getAutoSuggestions(str: String): AutoSuggestionsState =
withContext(Dispatchers.IO) {
val autoSuggestionsState: AutoSuggestionsState
autoSuggestionsState = try {
val result = SearchKitInstance.instance.searchHelper.suggest(str, Language.ENGLISH)
if (result != null) {
AutoSuggestionsState.Success(result.suggestions)
} else {
AutoSuggestionsState.Failure(Exception("fetch suggestions error"))
}
} catch (e: Exception) {
AutoSuggestionsState.Failure(e)
}
return@withContext autoSuggestionsState
}
suspend fun getSpellCheck(str: String): SpellCheckState = withContext(Dispatchers.IO) {
val spellCheckState: SpellCheckState
spellCheckState = try {
val result = SearchKitInstance.instance.searchHelper.spellCheck(str, Language.ENGLISH)
if (result != null) {
SpellCheckState.Success(result)
} else {
SpellCheckState.Failure(Exception("fetch spellcheck error"))
}
} catch (
e: Exception
) {
SpellCheckState.Failure(e)
}
return@withContext spellCheckState
}
companion object {
const val TAG = "NewsRepository"
}
}
2. getAutoSuggestions()
Search Kit can provide search suggestions through the SearchHelper.suggest() method. It takes two parameters: a String to provide suggestions for, and a language type. If the operation is successful, it returns a result of type AutoSuggestResponse. We can access a list of SuggestObject items from the suggestions field of this AutoSuggestResponse. Each SuggestObject represents a suggestion from HMS and contains a String value.
3. getSpellCheck()
It works pretty much the same as auto suggestions. The SearchHelper.spellCheck() method takes the same two parameters as the suggest() method, but it returns a SpellCheckResponse, which has two important fields: correctedQuery and confidence. correctedQuery is what Search Kit thinks the corrected spelling should be, and confidence is how confident Search Kit is about the recommendation. Confidence has three values: 0 (not confident, we should not rely on it), 1 (confident), and 2 (highly confident).
Using the functions above in our app
The Home Fragment has nothing to show when it launches, because nothing has been searched yet. The user can tap the magnifier icon in the toolbar to navigate to the Search Fragment. The code for the Search Fragment and its view model is below.
Notes:
The Search View should expand by default with the keyboard showing, so the user can start typing right away.
Every time the query text changes, it is emitted to a flow in the view model and then collected by two listeners in the fragment: the first searches for auto suggestions, the second performs a spellcheck. I did this to avoid unnecessary network calls; debounce(500) makes sure that subsequent entries while the user is typing fast (less than half a second per character) are ignored and only the last search query is used.
When the user submits a query term, the string is sent back to HomeFragment using setFragmentResult(), which is only available in the fragment-ktx library from Fragment 1.3.0-alpha04 onwards; the receiving side is sketched just below.
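The receiving side in HomeFragment is not shown in this article; a minimal sketch of a matching listener using the same keys (viewModel.searchNews is a hypothetical view-model entry point):
Code:
// In HomeFragment.onViewCreated(), listen for the term returned by SearchFragment.
setFragmentResultListener("requestKey") { _, bundle ->
    bundle.getString("bundleKey")?.let { query ->
        viewModel.searchNews(query) // hypothetical call that triggers getNews()
    }
}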
Code:
@AndroidEntryPoint
class SearchFragment : Fragment(R.layout.fragment_search) {
private var _binding: FragmentSearchBinding? = null
private val binding get() = _binding!!
private val viewModel: SearchViewModel by viewModels()
@FlowPreview
@ExperimentalCoroutinesApi
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
_binding = FragmentSearchBinding.bind(view)
(activity as AppCompatActivity).setSupportActionBar(binding.toolbar)
setHasOptionsMenu(true)
//listen to the change in query text, trigger getSuggestions function after debouncing and filtering
lifecycleScope.launch {
viewModel.searchQuery.debounce(500).filter { s: String ->
return@filter s.length > 3
}.distinctUntilChanged().flatMapLatest { query ->
Log.d(TAG, "getting suggestions for term: $query")
viewModel.getSuggestions(query).catch {
}
}.flowOn(Dispatchers.Default).collect {
if (it is AutoSuggestionsState.Success) {
val list = it.data
Log.d(TAG, "${list.size} suggestion")
binding.chipGroup.removeAllViews()
//create a chip for each suggestion and add them to chip group
list.forEach { suggestion ->
val chip = Chip(requireContext())
chip.text = suggestion.name
chip.isClickable = true
chip.setOnClickListener {
//set fragment result to return search term to home fragment.
setFragmentResult(
"requestKey",
bundleOf("bundleKey" to suggestion.name)
)
findNavController().popBackStack()
}
binding.chipGroup.addView(chip)
}
} else if (it is AutoSuggestionsState.Failure) {
Log.e(TAG, "suggestions request error", it.exception)
}
}
}
//listen to the change in query text, trigger spellcheck function after debouncing and filtering
lifecycleScope.launch {
viewModel.searchQuery.debounce(500).filter { s: String ->
return@filter s.length > 3
}.distinctUntilChanged().flatMapLatest { query ->
Log.d(TAG, "spellcheck for term: $query")
viewModel.getSpellCheck(query).catch {
Log.e(TAG, "spellcheck request error", it)
}
}.flowOn(Dispatchers.Default).collect {
if (it is SpellCheckState.Success) {
val spellCheckResponse = it.data
val correctedStr = spellCheckResponse.correctedQuery
val confidence = spellCheckResponse.confidence
Log.d(
TAG,
"corrected query $correctedStr confidence level $confidence"
)
if (confidence > 0) {
//show spellcheck layout, and set on click listener to send corrected term to home fragment
//to be searched
binding.tvDidYouMeanToSearch.visibility = View.VISIBLE
binding.tvCorrected.visibility = View.VISIBLE
binding.tvCorrected.text = correctedStr
binding.llSpellcheck.setOnClickListener {
setFragmentResult(
"requestKey",
bundleOf("bundleKey" to correctedStr)
)
findNavController().popBackStack()
}
} else {
binding.tvDidYouMeanToSearch.visibility = View.GONE
binding.tvCorrected.visibility = View.GONE
}
} else if (it is SpellCheckState.Failure) {
Log.e(TAG, "spellcheck request error", it.exception)
}
}
}
}
override fun onCreateOptionsMenu(menu: Menu, inflater: MenuInflater) {
super.onCreateOptionsMenu(menu, inflater)
inflater.inflate(R.menu.menu_search, menu)
val searchMenuItem = menu.findItem(R.id.searchItem)
val searchView = searchMenuItem.actionView as SearchView
searchView.setIconifiedByDefault(false)
searchMenuItem.expandActionView()
searchMenuItem.setOnActionExpandListener(object : MenuItem.OnActionExpandListener {
override fun onMenuItemActionExpand(item: MenuItem?): Boolean {
return true
}
override fun onMenuItemActionCollapse(item: MenuItem?): Boolean {
findNavController().popBackStack()
return true
}
})
searchView.setOnQueryTextListener(object : SearchView.OnQueryTextListener {
override fun onQueryTextSubmit(query: String?): Boolean {
return if (query != null && query.length > 3) {
setFragmentResult("requestKey", bundleOf("bundleKey" to query))
findNavController().popBackStack()
true
} else {
Toast.makeText(requireContext(), "Search term is too short", Toast.LENGTH_SHORT)
.show()
true
}
}
override fun onQueryTextChange(newText: String?): Boolean {
viewModel.emitNewTextToSearchQueryFlow(newText ?: "")
return true
}
})
}
override fun onDestroyView() {
super.onDestroyView()
_binding = null
}
companion object {
const val TAG = "SearchFragment"
}
}
Code:
class SearchViewModel @ViewModelInject constructor(private val repository: NewsRepository) :
ViewModel() {
private var _searchQuery = MutableStateFlow<String>("")
var searchQuery: StateFlow<String> = _searchQuery
fun getSuggestions(str: String): Flow<AutoSuggestionsState> {
return flow {
try {
val result = repository.getAutoSuggestions(str)
emit(result)
} catch (e: Exception) {
}
}
}
fun getSpellCheck(str: String): Flow<SpellCheckState> {
return flow {
try {
val result = repository.getSpellCheck(str)
emit(result)
} catch (e: Exception) {
}
}
}
fun emitNewTextToSearchQueryFlow(str: String) {
viewModelScope.launch {
_searchQuery.emit(str)
}
}
}
Now the HomeFragment has a search term to search for.
When the view is created, we receive the search term returned from the Search Fragment in setFragmentResultListener, search for news using this query, and submit the resulting PagingData to the recycler view adapter. I also made sure the same flow is returned if the new query is the same as the previous one, so no unnecessary calls are made.
Code:
@AndroidEntryPoint
class HomeFragment : Fragment(R.layout.fragment_home) {
private var _binding: FragmentHomeBinding? = null
private val binding get() = _binding!!
private val viewModel: HomeViewModel by viewModels()
private lateinit var listAdapter: NewsAdapter
private var startedLoading = false
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
_binding = FragmentHomeBinding.bind(view)
(activity as AppCompatActivity).setSupportActionBar(binding.toolbar)
setHasOptionsMenu(true)
listAdapter = NewsAdapter(NewsAdapter.NewsComparator, onItemClicked)
binding.rv.adapter =
listAdapter.withLoadStateFooter(NewsLoadStateAdapter(listAdapter))
//if user swipe down to refresh, refresh paging adapter
binding.swipeRefreshLayout.setOnRefreshListener {
listAdapter.refresh()
}
// Listen to search term returned from Search Fragment
setFragmentResultListener("requestKey") { _, bundle ->
// We use a String here, but any type that can be put in a Bundle is supported
val result = bundle.getString("bundleKey")
binding.tv.visibility = View.GONE
if (result != null) {
binding.toolbar.subtitle = "News about $result"
lifecycleScope.launchWhenResumed {
binding.swipeRefreshLayout.isRefreshing = true
viewModel.searchNews(result).collectLatest { value: PagingData<NewsItem> ->
listAdapter.submitData(value)
}
}
}
}
//need to listen to paging adapter load state to stop swipe to refresh layout animation
//if load state contain error, show a toast.
listAdapter.addLoadStateListener {
if (it.refresh is LoadState.NotLoading && startedLoading) {
binding.swipeRefreshLayout.isRefreshing = false
} else if (it.refresh is LoadState.Error && startedLoading) {
binding.swipeRefreshLayout.isRefreshing = false
val loadState = it.refresh as LoadState.Error
val errorMsg = loadState.error.localizedMessage
Toast.makeText(requireContext(), errorMsg, Toast.LENGTH_SHORT).show()
} else if (it.refresh is LoadState.Loading) {
startedLoading = true
}
}
}
override fun onCreateOptionsMenu(menu: Menu, inflater: MenuInflater) {
super.onCreateOptionsMenu(menu, inflater)
inflater.inflate(R.menu.menu_home, menu)
}
override fun onOptionsItemSelected(item: MenuItem): Boolean {
return when (item.itemId) {
R.id.searchItem -> {
//launch search fragment when search item clicked
findNavController().navigate(R.id.action_homeFragment_to_searchFragment)
true
}
else ->
super.onOptionsItemSelected(item)
}
}
//callback function to be passed to paging adapter, used to launch news links.
private val onItemClicked = { it: NewsItem ->
val builder = CustomTabsIntent.Builder()
val customTabsIntent = builder.build()
customTabsIntent.launchUrl(requireContext(), Uri.parse(it.clickUrl))
}
override fun onDestroyView() {
super.onDestroyView()
_binding = null
}
companion object {
const val TAG = "HomeFragment"
}
}
Code:
class HomeViewModel @ViewModelInject constructor(private val repository: NewsRepository) :
ViewModel() {
private var lastSearchQuery: String? = null
var lastFlow: Flow<PagingData<NewsItem>>? = null
fun searchNews(query: String): Flow<PagingData<NewsItem>> {
return if (query != lastSearchQuery) {
lastSearchQuery = query
lastFlow = Pager(PagingConfig(pageSize = 10)) {
NewsPagingDataSource(repository, query)
}.flow.cachedIn(viewModelScope)
lastFlow as Flow<PagingData<NewsItem>>
} else {
lastFlow!!
}
}
companion object {
const val TAG = "HomeViewModel"
}
}
The app also uses the Paging 3 library to provide endless scrolling for news articles. Pagination itself is out of scope for this article; you can check the GitHub repo to see how it is achieved with Search Kit. The end result looks like the images below.
Check the repo here.
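As a rough orientation, though, a hypothetical sketch of the NewsPagingDataSource referenced in HomeViewModel might look like the following; the repository method used here is an assumption, so check the repo for the real implementation.
Code:
class NewsPagingDataSource(
    private val repository: NewsRepository,
    private val query: String
) : PagingSource<Int, NewsItem>() {
    override suspend fun load(params: LoadParams<Int>): LoadResult<Int, NewsItem> {
        val page = params.key ?: 1
        return try {
            // searchNews(query, page) is an assumed repository method.
            val items = repository.searchNews(query, page)
            LoadResult.Page(
                data = items,
                prevKey = if (page == 1) null else page - 1,
                nextKey = if (items.isEmpty()) null else page + 1
            )
        } catch (e: Exception) {
            LoadResult.Error(e)
        }
    }
    override fun getRefreshKey(state: PagingState<Int, NewsItem>): Int? =
        state.anchorPosition?.let { anchor ->
            state.closestPageToPosition(anchor)?.prevKey?.plus(1)
        }
}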
Tips
When Search Kit fails to fetch results (for example, when there is no internet connection), it returns a null object. You can manually wrap that case in an exception so you can handle the error.
Conclusion
HMS Search Kit provides easy-to-use APIs for fast, efficient, and customizable searching of websites, images, videos, and news articles in many languages and regions. It also provides convenient features like auto suggestions and spell checking.
Reference
Huawei Search Kit
What other features does Search Kit provide other than news?
Can any additional features be supported?
Can we search daily news?
ask011 said:
What other features does Search Kit provide other than news?
Hello, you can reference the documentation at https://developer.huawei.com/consum.../HMSCore-Guides/introduction-0000001055591730
Getting Latest Corona News with Huawei Search Kit
Huawei Search Kit includes a device-side SDK and cloud-side APIs that expose the full capabilities of Petal Search. It helps developers integrate a mobile app search experience into their applications.
Huawei Search Kit offers developers many different and helpful features. It decreases development cost through its SDKs and APIs, returns responses quickly, and helps us develop our applications faster.
As developers, we have some responsibilities and function restrictions while using Huawei Search Kit. If you would like to learn about them, I recommend visiting the following website.
https://developer.huawei.com/consum.../HMSCore-Guides/introduction-0000001055591730
Also, Huawei Search Kit supports a limited set of countries and regions. If you want to know which ones, you can visit the following website.
https://developer.huawei.com/consum...HMSCore-Guides-V5/regions-0000001056871703-V5
How to use Huawei Search Kit?
First of all, we need to create an app on AppGallery Connect and add the related HMS Core details to our project.
If you don't know how to integrate HMS Core into a project, you can learn all the details from the following Medium article.
https://medium.com/huawei-developers/android-integrating-your-apps-with-huawei-hms-core-1f1e2a090e98
After completing all the steps in the Medium article above, we can focus on the steps specific to integrating Huawei Search Kit.
Our minSdkVersion must be at least 24.
We need to add the following dependency to our app-level build.gradle file.
Code:
implementation "com.huawei.hms:searchkit:5.0.4.303"
Then, we need to make some changes in AppGallery Connect: we need to define a data storage location.
Note: If we don't define a data storage location, all responses will return null.
We need to initialize the SearchKit instance in our Application class (extended from android.app.Application), passing the app ID as the second parameter, referred to here as Constants.APP_ID.
While adding our Application class to the AndroidManifest.xml file, we also need to set android:usesCleartextTraffic to true.
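A minimal sketch of this initialization is below; Constants.APP_ID stands in for your own app ID from AppGallery Connect, and the class must also be registered in the manifest through the android:name attribute.
Code:
class SearchApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        // Initialize Search Kit once for the whole app; the second parameter is the app ID.
        SearchKitInstance.init(this, Constants.APP_ID)
    }
}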
Getting Access Token
For each request to Search Kit, we need to use an access token. I prefer to get this access token on the splash screen of the application, so we can fetch it once and save it with SharedPreferences.
First of all, we need to create our methods and objects for network operations. I am using the Koin framework for dependency injection in this project.
For these network operations, I have created the following single objects and methods.
Note: I initialized the Koin framework and added the network module in the Application class; don't skip this step, or the module can't be used in the app.
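A minimal sketch of that Koin initialization, assuming the networkModule defined right below, could look like this:
Code:
class SearchApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // Start Koin with the Android context and register the network module.
        startKoin {
            androidContext(this@SearchApp)
            modules(networkModule)
        }
    }
}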
Code:
val networkModule = module {
single { getOkHttpClient(androidContext()) }
single { getRetrofit(get()) }
single { getService<AccessTokenService>(get()) }
}
fun getRetrofit(okHttpClient: OkHttpClient): Retrofit {
return Retrofit.Builder().baseUrl("https://oauth-login.cloud.huawei.com/")
.client(okHttpClient)
.addConverterFactory(GsonConverterFactory.create())
.build()
}
fun getOkHttpClient(context: Context): OkHttpClient {
return OkHttpClient().newBuilder()
.sslSocketFactory(SecureSSLSocketFactory.getInstance(context), SecureX509TrustManager(context))
.hostnameVerifier(StrictHostnameVerifier())
.readTimeout(10, TimeUnit.SECONDS)
.connectTimeout(1, TimeUnit.SECONDS)
.retryOnConnectionFailure(true)
.build()
}
inline fun <reified T> getService(retrofit: Retrofit): T = retrofit.create(T::class.java)
We have defined methods to create the OkHttpClient and Retrofit objects. They are declared with single so that Koin creates them as singletons. We have also defined a generic method to create Retrofit services.
To get an access token, our base URL will be "https://oauth-login.cloud.huawei.com/".
To parse the access token response, we need to define an object for it. The best way to do that is to create a data class, as shown below.
Code:
data class AccessTokenResponse(
@SerializedName("access_token") val accessToken: String?,
@SerializedName("expires_in") val expiresIn: Int?,
@SerializedName("token_type") val tokenType: String?
)
Now, all we need to do is create an interface to send requests with Retrofit. To get the access token, the full URL is "https://oauth-login.cloud.huawei.com/oauth2/v3/token". We need to send three parameters as x-www-form-urlencoded fields. Let's examine these parameters.
grant_type: This parameter does not change between applications. The value should be "client_credentials".
client_id: This parameter is the app ID of our project.
client_secret: This parameter is the app secret of our project.
Code:
interface AccessTokenService {
@FormUrlEncoded
@POST("oauth2/v3/token")
fun getAccessToken(
@Field("grant_type") grantType: String,
@Field("client_id") appId: String,
@Field("client_secret") clientSecret: String
): Call<AccessTokenResponse>
}
Now everything is ready to get an access token. We just need to send the request and save the access token with SharedPreferences.
To work with SharedPreferences, I have created the helper class shown below.
Code:
class CacheHelper {
companion object {
private lateinit var instance: CacheHelper
private var gson: Gson = Gson()
private const val PREFERENCES_NAME = BuildConfig.APPLICATION_ID
private const val PREFERENCES_MODE = AppCompatActivity.MODE_PRIVATE
fun getInstance(context: Context): CacheHelper {
instance = CacheHelper(context)
return instance
}
}
private var context: Context
private var sharedPreferences: SharedPreferences
private var sharedPreferencesEditor: SharedPreferences.Editor
private constructor(context: Context) {
this.context = context
sharedPreferences = this.context.getSharedPreferences(PREFERENCES_NAME, PREFERENCES_MODE)
sharedPreferencesEditor = sharedPreferences.edit()
}
fun putObject(key: String, `object`: Any) {
sharedPreferencesEditor.apply {
putString(key, gson.toJson(`object`))
commit()
}
}
fun <T> getObject(key: String, `object`: Class<T>): T? {
return sharedPreferences.getString(key, null)?.let {
gson.fromJson(it, `object`)
} ?: kotlin.run {
null
}
}
}
With the help of this class, we can work with SharedPreferences more easily.
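For illustration, a round trip with this helper could look like the sketch below, reusing the Constants.ACCESS_TOKEN_KEY key that SearchKitService uses later:
Code:
// Persist the access token, then read it back later.
val cache = CacheHelper.getInstance(context)
cache.putObject(Constants.ACCESS_TOKEN_KEY, accessToken)
val token: String? = cache.getObject(Constants.ACCESS_TOKEN_KEY, String::class.java)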
Now all we need to do is send the request and get the access token.
Code:
object SearchKitService: KoinComponent {
private val accessTokenService: AccessTokenService by inject()
private val cacheHelper: CacheHelper by inject()
fun initAccessToken(requestListener: IRequestListener<Boolean, Boolean>) {
accessTokenService.getAccessToken(
"client_credentials",
Constants.APP_ID,
Constants.APP_SECRET
).enqueue(object: retrofit2.Callback<AccessTokenResponse> {
override fun onResponse(call: Call<AccessTokenResponse>, response: Response<AccessTokenResponse>) {
response.body()?.accessToken?.let { accessToken ->
cacheHelper.putObject(Constants.ACCESS_TOKEN_KEY, accessToken)
requestListener.onSuccess(true)
} ?: kotlin.run {
requestListener.onError(true)
}
}
override fun onFailure(call: Call<AccessTokenResponse>, t: Throwable) {
requestListener.onError(false)
}
})
}
}
If the API returns an access token successfully, we save it to the device using SharedPreferences. On our SplashFragment, we need to listen with IRequestListener; if the onSuccess method returns true, it means we got the access token successfully and can navigate the application to the BrowserFragment.
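A minimal sketch of that splash flow is below; the navigation action ID is hypothetical.
Code:
// Inside SplashFragment, e.g. in onViewCreated().
SearchKitService.initAccessToken(object : IRequestListener<Boolean, Boolean> {
    override fun onSuccess(success: Boolean) {
        // Token fetched and saved; move on to the browser screen.
        findNavController().navigate(R.id.action_splashFragment_to_browserFragment)
    }
    override fun onError(error: Boolean) {
        Toast.makeText(requireContext(), "Could not get access token", Toast.LENGTH_SHORT).show()
    }
})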
Huawei Search Kit
In this article, I will give examples of the News Search, Image Search, and Video Search features of Huawei Search Kit.
To send requests for News Search, Image Search, and Video Search, we need a CommonSearchRequest object.
In this app, I will get results about Corona in English. I have created the following method to return a CommonSearchRequest object.
Code:
private fun returnCommonRequest(): CommonSearchRequest {
return CommonSearchRequest().apply {
setQ("Corona Virus")
setLang(Language.ENGLISH)
setSregion(Region.WHOLEWORLD)
setPs(20)
setPn(1)
}
}
Here, we set some values on the request. Let's examine these setter methods.
setQ(): Sets the keyword for the search.
setLang(): Sets the language for the search. Search Kit has its own model for languages. If you would like to examine this enum and learn which languages Search Kit supports, you can visit the following website.
Huawei Search Kit — Language Model
setSregion(): Sets the region for the search. Search Kit has its own model for regions. If you would like to examine this enum and learn which regions Search Kit supports, you can visit the following website.
Huawei Search Kit — Region Model
setPn(): Sets the number of the current results page. The value ranges from 1 to 100, and the default value is 1.
setPs(): Sets the number of search results to return on a page. The value ranges from 1 to 100, and the default value is 10.
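For example, fetching the second page of the same query only requires bumping setPn(); the following is a hypothetical follow-up request:
Code:
// Request page 2 of the same search, still 20 items per page.
private fun returnNextPageRequest(): CommonSearchRequest {
    return CommonSearchRequest().apply {
        setQ("Corona Virus")
        setLang(Language.ENGLISH)
        setSregion(Region.WHOLEWORLD)
        setPs(20)
        setPn(2) // second page
    }
}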
Now, all we need to do is get the news, images, and videos, and show the results on the screen.
News Search
To get news, we can use the following method.
Code:
fun newsSearch(requestListener: IRequestListener<List<NewsItem>, String>) {
SearchKitInstance.getInstance().newsSearcher.setCredential(SearchKitService.accessToken)
var newsList = SearchKitInstance.getInstance().newsSearcher.search(SearchKitService.returnCommonRequest())
newsList?.getData()?.let { newsItems ->
requestListener.onSuccess(newsItems)
} ?: kotlin.run {
requestListener.onError("No value returned")
}
}
Image Search
To get images, we can use the following method.
Code:
fun imageSearch(requestListener: IRequestListener<List<ImageItem>, String>) {
SearchKitInstance.getInstance().imageSearcher.setCredential(SearchKitService.accessToken)
var imageList = SearchKitInstance.getInstance().imageSearcher.search(SearchKitService.returnCommonRequest())
imageList?.getData()?.let { imageItems ->
requestListener.onSuccess(imageItems)
} ?: kotlin.run {
requestListener.onError("No value returned")
}
}
Video Search
To get videos, we can use the following method.
Code:
fun videoSearch(requestListener: IRequestListener<List<VideoItem>, String>) {
SearchKitInstance.getInstance().videoSearcher.setCredential(SearchKitService.accessToken)
var videoList = SearchKitInstance.getInstance().videoSearcher.search(SearchKitService.returnCommonRequest())
videoList?.getData()?.let { videoList ->
requestListener.onSuccess(videoList)
} ?: kotlin.run {
requestListener.onError("No value returned")
}
}
Showing on screen
All these results return a clickable URL for each item. We can create an intent to open these URLs in a browser installed on the device.
To do that and the other operations, I will share the BrowserFragment code for the fragment and the SearchItemAdapter code for the RecyclerView.
Code:
class BrowserFragment: Fragment() {
private lateinit var viewBinding: FragmentBrowserBinding
private lateinit var searchOptionsTextViews: ArrayList<TextView>
override fun onCreateView(inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle?): View? {
viewBinding = FragmentBrowserBinding.inflate(inflater, container, false)
searchOptionsTextViews = arrayListOf(viewBinding.news, viewBinding.images, viewBinding.videos)
return viewBinding.root
}
private fun setListeners() {
viewBinding.news.setOnClickListener { getNews() }
viewBinding.images.setOnClickListener { getImages() }
viewBinding.videos.setOnClickListener { getVideos() }
}
private fun getNews() {
SearchKitService.newsSearch(object: IRequestListener<List<NewsItem>, String>{
override fun onSuccess(newsItemList: List<NewsItem>) {
setupRecyclerView(newsItemList, viewBinding.news)
}
override fun onError(errorMessage: String) {
Toast.makeText(requireContext(), errorMessage, Toast.LENGTH_SHORT).show()
}
})
}
private fun getImages(){
SearchKitService.imageSearch(object: IRequestListener<List<ImageItem>, String>{
override fun onSuccess(imageItemList: List<ImageItem>) {
setupRecyclerView(imageItemList, viewBinding.images)
}
override fun onError(errorMessage: String) {
Toast.makeText(requireContext(), errorMessage, Toast.LENGTH_SHORT).show()
}
})
}
private fun getVideos() {
SearchKitService.videoSearch(object: IRequestListener<List<VideoItem>, String>{
override fun onSuccess(videoItemList: List<VideoItem>) {
setupRecyclerView(videoItemList, viewBinding.videos)
}
override fun onError(errorMessage: String) {
Toast.makeText(requireContext(), errorMessage, Toast.LENGTH_SHORT).show()
}
})
}
private val clickListener = object: IClickListener<String> {
override fun onClick(clickedInfo: String) {
var intent = Intent(Intent.ACTION_VIEW).apply {
data = Uri.parse(clickedInfo)
}
startActivity(intent)
}
}
private fun <T> setupRecyclerView(itemList: List<T>, selectedSearchOption: TextView) {
viewBinding.searchKitRecyclerView.apply {
layoutManager = LinearLayoutManager(requireContext())
adapter = SearchItemAdapter<T>(itemList, clickListener)
}
changeSelectedTextUi(selectedSearchOption)
}
private fun changeSelectedTextUi(selectedSearchOption: TextView) {
for (textView in searchOptionsTextViews)
if (textView == selectedSearchOption) {
textView.background = requireContext().getDrawable(R.drawable.selected_text)
} else {
textView.background = requireContext().getDrawable(R.drawable.unselected_text)
}
}
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
setListeners()
getNews()
}
}
Code:
class SearchItemAdapter<T>(private val searchItemList: List<T>,
private val clickListener: IClickListener<String>):
RecyclerView.Adapter<SearchItemAdapter.SearchItemHolder<T>>(){
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): SearchItemHolder<T> {
val itemBinding = ItemSearchBinding.inflate(LayoutInflater.from(parent.context), parent, false)
return SearchItemHolder<T>(itemBinding)
}
override fun onBindViewHolder(holder: SearchItemHolder<T>, position: Int) {
val item = searchItemList[position]
var isLast = (position == searchItemList.size - 1)
holder.bind(item, isLast, clickListener)
}
override fun getItemCount(): Int = searchItemList.size
override fun getItemViewType(position: Int): Int = position
class SearchItemHolder<T>(private val itemBinding: ItemSearchBinding): RecyclerView.ViewHolder(itemBinding.root) {
fun bind(item: T, isLast: Boolean, clickListener: IClickListener<String>) {
if (isLast)
itemBinding.itemSeparator.visibility = View.GONE
lateinit var clickUrl: String
var imageUrl = "https://www.who.int/images/default-source/infographics/who-emblem.png?sfvrsn=877bb56a_2"
when(item){
is NewsItem -> {
itemBinding.searchResultTitle.text = item.title
itemBinding.searchResultDetail.text = item.provider.siteName
clickUrl = item.clickUrl
item.provider.logo?.let { imageUrl = it }
}
is ImageItem -> {
itemBinding.searchResultTitle.text = item.title
clickUrl = item.clickUrl
item.sourceImage.image_content_url?.let { imageUrl = it }
}
is VideoItem -> {
itemBinding.searchResultTitle.text = item.title
itemBinding.searchResultDetail.text = item.provider.siteName
clickUrl = item.clickUrl
item.provider.logo?.let { imageUrl = it }
}
}
itemBinding.searchItemRoot.setOnClickListener {
clickListener.onClick(clickUrl)
}
getImageFromUrl(imageUrl, itemBinding.searchResultImage)
}
private fun getImageFromUrl(url: String, imageView: ImageView) {
Glide.with(itemBinding.root)
.load(url)
.centerCrop()
.into(imageView);
}
}
}
End
If you would like to learn more about Search Kit and see the Codelab, you can visit the following websites:
https://developer.huawei.com/consum.../HMSCore-Guides/introduction-0000001055591730
https://developer.huawei.com/consumer/en/codelab/HMSSearchKit/index.html#0
Very nice guide.
Amazing.
Thank you very much
It's a very nice example.
Search News with Voice (Search Kit — ML Kit(ASR)— Network Kit)
Hello everyone. In this article, I will talk about how to use Huawei Search Kit, Huawei ML Kit, and Huawei Network Kit together. I have developed a demo app using these three kits to make it clearer.
What is Search Kit?
HUAWEI Search Kit fully opens Petal Search capabilities through the device-side SDK and cloud-side APIs, enabling ecosystem partners to quickly provide the optimal mobile app search experience.
What is Network Kit?
Network Kit is a basic network service suite. It incorporates Huawei’s experience in far-field network communications, and utilizes scenario-based RESTful APIs as well as file upload and download APIs. Therefore, Network Kit can provide you with easy-to-use device-cloud transmission channels featuring low latency, high throughput, and high security.
What is ML Kit — ASR?
Automatic speech recognition (ASR) can recognize speech not longer than 60s and convert the input speech into text in real time. This service uses industry-leading deep learning technologies to achieve a recognition accuracy of over 95%.
Development Steps
1. Integration
First of all, we need to create an app on AppGallery Connect and add the related HMS Core details to our project. You can access the article covering those steps from the link below.
Android | Integrating Your Apps With Huawei HMS Core
2. Adding Dependencies
After HMS Core is integrated into the project and Search Kit and ML Kit are activated through the console, the required libraries should be added to the build.gradle file in the app directory as follows. The project's minSdkVersion must be 24, so update that value in the same file.
Code:
...
defaultConfig {
applicationId "com.myapps.searchappwithml"
minSdkVersion 24
targetSdkVersion 30
versionCode 1
versionName "1.0"
testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
}
...
dependencies {
...
implementation 'com.huawei.agconnect:agconnect-core:1.5.0.300'
implementation 'com.huawei.hms:network-embedded:5.0.1.301'
implementation 'com.huawei.hms:searchkit:5.0.4.303'
implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.2.0.300'
...
}
3. Adding Permissions
Code:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
4. Application Class
When the application starts, we need to initialize the kits in the Application class. Then we need to register the Application class in the android:name attribute of the application tag in the manifest file.
Code:
@HiltAndroidApp
class SearchApplication : Application(){
override fun onCreate() {
super.onCreate()
initNetworkKit()
initSearchKit()
initMLKit()
}
private fun initNetworkKit(){
NetworkKit.init(
applicationContext,
object : NetworkKit.Callback() {
override fun onResult(result: Boolean) {
if (result) {
Log.i(NETWORK_KIT_TAG, "init success")
} else {
Log.i(NETWORK_KIT_TAG, "init failed")
}
}
})
}
private fun initSearchKit(){
SearchKitInstance.init(this, APP_ID)
CoroutineScope(Dispatchers.IO).launch {
SearchKitInstance.instance.refreshToken()
}
}
private fun initMLKit() {
MLApplication.getInstance().apiKey = API_KEY
}
}
5. Getting Access Token
We need an access token to send requests to Search Kit. I used Network Kit to request the access token. Its use is very similar to other services that perform network operations: as with those, there are annotations such as POST, FormUrlEncoded, Headers, and Field.
Code:
interface AccessTokenService {
@POST("oauth2/v3/token")
@FormUrlEncoded
@Headers("Content-Type:application/x-www-form-urlencoded", "charset:UTF-8")
fun createAccessToken(
@Field("grant_type") grant_type: String,
@Field("client_secret") client_secret: String,
@Field("client_id") client_id: String
) : Submit<String>
}
We need to create our request structure using the RestClient class.
Code:
@Module
@InstallIn(ApplicationComponent::class)
class ApplicationModule {
companion object{
private const val TIMEOUT: Int = 500000
private var restClient: RestClient? = null
fun getClient() : RestClient {
val httpClient = HttpClient.Builder()
.connectTimeout(TIMEOUT)
.writeTimeout(TIMEOUT)
.readTimeout(TIMEOUT)
.build()
if (restClient == null) {
restClient = RestClient.Builder()
.baseUrl("https://oauth-login.cloud.huawei.com/")
.httpClient(httpClient)
.build()
}
return restClient!!
}
}
}
Finally, we send the request and obtain the access token.
Code:
data class AccessTokenModel (
var access_token : String,
var expires_in : Int,
var token_type : String
)
...
fun SearchKitInstance.refreshToken() {
ApplicationModule.getClient().create(AccessTokenService::class.java)
.createAccessToken(
GRANT_TYPE,
CLIENT_SECRET,
CLIENT_ID
)
.enqueue(object : Callback<String>() {
override fun onFailure(call: Submit<String>, t: Throwable) {
Log.d(ACCESS_TOKEN_TAG, "getAccessTokenErr " + t.message)
}
override fun onResponse(
call: Submit<String>,
response: Response<String>
) {
val convertedResponse =
Gson().fromJson(response.body, AccessTokenModel::class.java)
setInstanceCredential(convertedResponse.access_token)
}
})
}
6. ML Kit (ASR) — Search Kit
Since we are using ML Kit (ASR), we first need to get microphone permission from the user. Then we start ML Kit (ASR) with the help of a button and get text from the user. By sending this text to the function we created for Search Kit, we get the data we will show on the screen.
Here I used Search Kit's web search feature. Of course, the news, image, and video search features can be used as needed.
Code:
@AndroidEntryPoint
class MainActivity : AppCompatActivity() {
private lateinit var binding: MainBinding
private val adapter: ResultAdapter = ResultAdapter()
private var isPermissionGranted: Boolean = false
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
binding = MainBinding.inflate(layoutInflater)
setContentView(binding.root)
binding.button.setOnClickListener {
if (isPermissionGranted) {
startASR()
}
}
binding.recycler.adapter = adapter
val permission = arrayOf(Manifest.permission.INTERNET, Manifest.permission.RECORD_AUDIO)
ActivityCompat.requestPermissions(this, permission,MIC_PERMISSION)
}
override fun onRequestPermissionsResult(
requestCode: Int,
permissions: Array<out String>,
grantResults: IntArray
) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
when (requestCode) {
MIC_PERMISSION -> {
// If request is cancelled, the result arrays are empty.
if (grantResults.isNotEmpty()
&& grantResults[0] == PackageManager.PERMISSION_GRANTED
&& grantResults[1] == PackageManager.PERMISSION_GRANTED) {
// permission was granted
Toast.makeText(this, "Permission granted", Toast.LENGTH_SHORT).show()
isPermissionGranted = true
} else {
// permission denied,
Toast.makeText(this, "Permission denied", Toast.LENGTH_SHORT).show()
}
return
}
}
}
private fun startASR() {
val intent = Intent(this, MLAsrCaptureActivity::class.java)
.putExtra(MLAsrCaptureConstants.LANGUAGE, "en-US")
.putExtra(MLAsrCaptureConstants.FEATURE, MLAsrCaptureConstants.FEATURE_WORDFLUX)
startActivityForResult(intent, ASR_REQUEST_CODE)
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == ASR_REQUEST_CODE) {
when (resultCode) {
MLAsrCaptureConstants.ASR_SUCCESS -> if (data != null) {
val bundle = data.extras
if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_RESULT)) {
val text = bundle.getString(MLAsrCaptureConstants.ASR_RESULT).toString()
performSearch(text)
}
}
MLAsrCaptureConstants.ASR_FAILURE -> if (data != null) {
val bundle = data.extras
if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_CODE)) {
val errorCode = bundle.getInt(MLAsrCaptureConstants.ASR_ERROR_CODE)
Toast.makeText(this, "Error Code $errorCode", Toast.LENGTH_LONG).show()
}
if (bundle != null && bundle.containsKey(MLAsrCaptureConstants.ASR_ERROR_MESSAGE)) {
val errorMsg = bundle.getString(MLAsrCaptureConstants.ASR_ERROR_MESSAGE)
Toast.makeText(this, "Error Code $errorMsg", Toast.LENGTH_LONG).show()
}
}
else -> {
Toast.makeText(this, "Failed to get data", Toast.LENGTH_LONG).show()
}
}
}
}
private fun performSearch(query: String) {
CoroutineScope(Dispatchers.IO).launch {
val searchKitInstance = SearchKitInstance.instance
val webSearchRequest = WebSearchRequest().apply {
setQ(query)
setLang(loadLang())
setSregion(loadRegion())
setPs(5)
setPn(1)
}
val response = searchKitInstance.webSearcher.search(webSearchRequest)
displayResults(response.data)
}
}
private fun displayResults(data: List<WebItem>) {
runOnUiThread {
adapter.items.apply {
clear()
addAll(data)
}
adapter.notifyDataSetChanged()
}
}
}
Output
Conclusion
By using these three kits, you can increase the quality of your application in a short time with little effort. I hope this article was useful to you. See you in other articles.
References
Network Kit: https://developer.huawei.com/consum...s-V5/network-introduction-0000001050440045-V5
ML Kit: https://developer.huawei.com/consum...s-V5/service-introduction-0000001050040017-V5
Search Kit: https://developer.huawei.com/consum...re-Guides-V5/introduction-0000001055591730-V5