A Quick Introduction to Implementing Sound Detection - Huawei Developers

For some apps, it's necessary to have a function called sound detection that can recognize sounds like knocks on the door, rings of the doorbell, and car horns. Developing such a function can be costly for small- and medium-sized developers, so what should they do in this situation?
There's no need to worry if you use the sound detection service in HUAWEI ML Kit. Integrating its SDK into your app is simple, and it equips your app with a sound detection function that works well even when the device is not connected to the network.
Introduction to Sound Detection in HUAWEI ML Kit
This service detects sound events through real-time recording, and the detected sound events can help you perform subsequent actions. Currently, the following types of sound events are supported: laughter, child crying, snoring, sneezing, shouting, cat meowing, dog barking, running water (such as from taps, streams, and ocean waves), car horns, doorbells, knocking on doors, fire alarms (including smoke alarms), and sirens (such as those from fire trucks, ambulances, police cars, and air defense alarms).
Preparations
Configuring the Development Environment
Create an app in AppGallery Connect.
For details, see Getting Started with Android.
Enable ML Kit.
Click here to get more information.
After the app is created, an agconnect-services.json file will be automatically generated. Download it and copy it to your app's root directory in the project.
Configure the Huawei Maven repository address.
To learn more, click here.
Integrate the sound detection SDK.
It is recommended to integrate the SDK in full SDK mode. Add build dependencies for the SDK in the app-level build.gradle file.
Code:
// Import the sound detection package.
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-sdk:2.1.0.300'
implementation 'com.huawei.hms:ml-speech-semantics-sounddect-model:2.1.0.300'
Add the AppGallery Connect plugin configuration as needed using either of the following methods:
Method 1: Add the following information under the declaration in the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
Method 2: Add the plugin configuration in the plugins block:
Code:
plugins {
id 'com.android.application'
id 'com.huawei.agconnect'
}
Automatically update the machine learning model.
Add the following statements to the AndroidManifest.xml file. After a user installs your app from HUAWEI AppGallery, the machine learning model is automatically updated to the user's device.
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "sounddect"/>
For details, go to Integrating the Sound Detection SDK.
Development Procedure
Obtain the microphone permission. If the app does not have this permission, error 12203 will be reported.
(Mandatory) Apply for the static permission.
Code:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
(Mandatory) Apply for the dynamic permission.
Code:
ActivityCompat.requestPermissions(
        this, new String[]{Manifest.permission.RECORD_AUDIO}, 1);
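To confirm that the user has actually granted the permission before starting detection, you can handle the permission result callback in your activity. The following is a minimal sketch (the request code 1 matches the request above; the method body is an assumption, not part of the SDK):
Code:
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    // 1 is the request code passed to ActivityCompat.requestPermissions() above.
    if (requestCode == 1) {
        if (grantResults.length > 0 && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
            // The microphone permission is granted, so sound detection can be started safely.
            Log.d(TAG, "RECORD_AUDIO permission granted.");
        } else {
            // Without this permission, starting detection will fail with error 12203.
            Log.w(TAG, "RECORD_AUDIO permission denied.");
        }
    }
}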
Create an MLSoundDector object.
Code:
private static final String TAG = "MLSoundDectorDemo";
// Sound detection object.
private MLSoundDector mlSoundDector;

// Create an MLSoundDector object and configure the callback.
private void initMLSoundDector() {
    mlSoundDector = MLSoundDector.createSoundDector();
    mlSoundDector.setSoundDectListener(listener);
}
Create a sound detection result callback to obtain the detection result and pass the callback to the sound detection instance.
Code:
private MLSoundDectListener listener = new MLSoundDectListener() {
    @Override
    public void onSoundSuccessResult(Bundle result) {
        // Processing logic when the detection is successful. The detection result ranges from 0 to 12,
        // corresponding to the 13 sound types whose names start with SOUND_EVENT_TYPE. The types are defined in MLSoundDectConstants.java.
        int soundType = result.getInt(MLSoundDector.RESULTS_RECOGNIZED);
        Log.d(TAG, "Detection success:" + soundType);
    }

    @Override
    public void onSoundFailResult(int errCode) {
        // Processing logic for detection failure. The possible cause is that your app does not have the microphone permission (Manifest.permission.RECORD_AUDIO).
        Log.d(TAG, "Detection failure" + errCode);
    }
};
Note: The code above prints the type of the detected sound as an integer. In practice, you can map this integer to a label that users can understand.
Definition for the types of detected sounds:
Code:
<string-array name="sound_dect_voice_type">
<item>laughter</item>
<item>baby crying sound</item>
<item>snore</item>
<item>sneeze</item>
<item>shout</item>
<item>cat's meow</item>
<item>dog's bark</item>
<item>running water</item>
<item>car horn sound</item>
<item>doorbell sound</item>
<item>knock</item>
<item>fire alarm sound</item>
<item>alarm sound</item>
</string-array>
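Based on the string array above, a minimal sketch of such a mapping could look as follows inside onSoundSuccessResult(). It assumes the array is declared in your res/values resources as sound_dect_voice_type and that its order matches the SOUND_EVENT_TYPE constants (0 to 12):
Code:
// Map the detected sound type (0-12) to a human-readable label.
// Assumes the sound_dect_voice_type array above is declared in res/values/arrays.xml.
String[] soundTypes = getResources().getStringArray(R.array.sound_dect_voice_type);
if (soundType >= 0 && soundType < soundTypes.length) {
    String soundName = soundTypes[soundType];
    Log.d(TAG, "Detected sound: " + soundName);
}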
Start and stop sound detection.
Code:
@Override
public void onClick(View v) {
    switch (v.getId()) {
        case R.id.btn_start_detect:
            if (mlSoundDector != null) {
                boolean isStarted = mlSoundDector.start(this); // context: Context.
                // If isStarted is true, the detection is started successfully. If it is false, the detection fails to start.
                // (A possible cause is that the microphone is occupied by the system or another app.)
                if (isStarted) {
                    Toast.makeText(this, "The detection is started successfully.", Toast.LENGTH_SHORT).show();
                }
            }
            break;
        case R.id.btn_stop_detect:
            if (mlSoundDector != null) {
                mlSoundDector.stop();
            }
            break;
        default:
            break;
    }
}
Call destroy() to release resources when the sound detection page is closed.
Code:
@Override
protected void onDestroy() {
    super.onDestroy();
    if (mlSoundDector != null) {
        mlSoundDector.destroy();
    }
}
Testing the App
Using a knock on the door as an example, the expected output of sound detection is 10.
Tap Start detecting and simulate a knock on the door. If the corresponding log appears in Android Studio's Logcat, the sound detection SDK has been integrated successfully.
More Information
Sound detection is one of the six capability categories of ML Kit, which cover text, language/voice, image, face/body, natural language processing, and custom models.
Sound detection is a service in the language/voice-related category.
Interested in other categories? Feel free to have a look at the HUAWEI ML Kit document.
To learn more, please visit:
HUAWEI Developers official website
Development Guide
Reddit to join developer discussions
GitHub or Gitee to download the demo and sample code
Stack Overflow to solve integration problems
Follow our official account for the latest HMS Core-related news and updates.
Original Source

Related

How to Integrate HUAWEI ML Kit's Hand Keypoint Detection Capability

Introduction
In the previous post, we looked at how to use HUAWEI ML Kit's skeleton detection capability to detect points such as the head, neck, shoulders, knees and ankles. But as well as skeleton detection, ML Kit also provides a hand keypoint detection capability, which can locate 21 hand keypoints, such as fingertips, joints, and wrists.
Application Scenarios
Hand keypoint detection is useful in a huge range of situations. For example, short video apps can generate some cute and funny special effects based on hand keypoints, to add more fun to short videos.
Or, if smart home devices are integrated with hand keypoint detection, users could control them remotely using customized gestures, so they could do things like activate a robot vacuum cleaner while they're out.
Hand Keypoint Detection Development
Now, we’re going to see how to quickly integrate ML Kit's hand keypoint detection feature. Let’s take video stream detection as an example.
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process.
Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add SDK Dependencies to the App-Level build.gradle File
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint:2.0.2.300'
// Import the hand keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-handkeypoint-model:2.0.2.300'
}
1.3 Add Configurations to the File Header
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Automatically Update
Code:
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "handkeypoint"/>
1.5 Apply for Camera Permission and Local File Reading Permission
Code:
<!--Camera permission-->
<uses-permission android:name="android.permission.CAMERA" />
<!--Read permission-->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
2. Code Development
2.1 Create a Hand Keypoint Analyzer
Code:
MLHandKeypointAnalyzerSetting setting = new MLHandKeypointAnalyzerSetting.Factory()
        // MLHandKeypointAnalyzerSetting.TYPE_ALL indicates that all results are returned.
        // MLHandKeypointAnalyzerSetting.TYPE_KEYPOINT_ONLY indicates that only hand keypoint information is returned.
        // MLHandKeypointAnalyzerSetting.TYPE_RECT_ONLY indicates that only palm information is returned.
        .setSceneType(MLHandKeypointAnalyzerSetting.TYPE_ALL)
        // Set the maximum number of hand regions that can be detected within an image. A maximum of 10 hand regions can be detected by default.
        .setMaxHandResults(1)
        .create();
MLHandKeypointAnalyzer analyzer = MLHandKeypointAnalyzerFactory.getInstance().getHandKeypointAnalyzer(setting);
2.2 Create the HandKeypointTransactor Class for Processing Detection Results
This class implements the MLAnalyzer.MLTransactor<T> API and uses the transactResult method in this class to obtain the detection results and implement specific services. In addition to coordinate information for each hand keypoint, the detection results include a confidence value for the palm and each of the keypoints. Palm and hand keypoints that are incorrectly detected can be filtered out based on the confidence values, as shown in the filtering sketch after the code below. You can set a threshold based on your misrecognition tolerance.
Code:
public class HandKeypointTransactor implements MLAnalyzer.MLTransactor<List<MLHandKeypoints>> {
    @Override
    public void transactResult(MLAnalyzer.Result<List<MLHandKeypoints>> results) {
        SparseArray<List<MLHandKeypoints>> analyseList = results.getAnalyseList();
        // Determine detection result processing as required. Note that only the detection results are processed.
        // Other detection-related APIs provided by ML Kit cannot be called.
    }

    @Override
    public void destroy() {
        // Callback method used to release resources when the detection ends.
    }
}
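As mentioned above, each palm and each keypoint comes with a confidence value that can be used to filter out misdetections. The following is a minimal filtering sketch; the getScore(), getHandKeypoints(), getPointX(), and getPointY() accessors and the 0.5f threshold are assumptions based on ML Kit samples, so verify them against your SDK version and misrecognition tolerance.
Code:
// A hedged sketch of confidence-based filtering, called from transactResult().
private static final float CONFIDENCE_THRESHOLD = 0.5f; // Tune to your misrecognition tolerance.

private void filterResults(SparseArray<List<MLHandKeypoints>> analyseList) {
    for (int i = 0; i < analyseList.size(); i++) {
        for (MLHandKeypoints hand : analyseList.valueAt(i)) {
            if (hand.getScore() < CONFIDENCE_THRESHOLD) {
                continue; // Skip palms detected with low confidence.
            }
            for (MLHandKeypoint keypoint : hand.getHandKeypoints()) {
                if (keypoint.getScore() >= CONFIDENCE_THRESHOLD) {
                    // Use keypoint.getPointX() and keypoint.getPointY() for rendering or gesture logic.
                }
            }
        }
    }
}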
2.3 Set the Detection Result Processor to Bind the Analyzer to the Result Processor
Code:
analyzer.setTransactor(new HandKeypointTransactor());
2.4 Create an Instance of the LensEngine Class
The LensEngine class is provided by the HMS Core ML SDK to capture dynamic camera streams and pass them to the analyzer. The camera display size should be set to a value between 320 x 320 px and 1920 x 1920 px.
Code:
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
        .setLensType(LensEngine.BACK_LENS)
        .applyDisplayDimension(1280, 720)
        .applyFps(20.0f)
        .enableAutomaticFocus(true)
        .create();
2.5 Call the run Method to Start the Camera and Read Camera Streams for Detection
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
2.6 Stop the Analyzer to Release Detection Resources Once the Detection is Complete
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
Demo Effect
And that's it! We can now see hand keypoints appear when making different gestures. Remember that you can expand this capability if you need to.

Implement Eye-Enlarging and Face-Shaping Functions with ML Kit's Detection Capability

Introduction
Sometimes, we can't help taking photos to keep our unforgettable moments in life. But most of us are not professional photographers or models, so our photographs can end up falling short of our expectations. So, how can we produce more impressive snaps? If you have an image processing app on your phone, it can automatically detect faces in a photo, and you can then adjust the image until you're happy with it. So, after looking around online, I found HUAWEI ML Kit's face detection capability. By integrating this capability, you can add beautification functions to your apps. Have a try!
Application Scenarios
ML Kit's face detection capability detects up to 855 facial keypoints and returns the coordinates for the face's contour, eyebrows, eyes, nose, mouth, and ears, as well as the angle of the face. Once you've integrated this capability, you can quickly create beauty apps and enable users to add fun facial effects and features to their images.
Face detection also detects whether the subject's eyes are open, whether they're wearing glasses or a hat, whether they have a beard, and even their gender and age. This is useful if you want to add a parental control function to your app which prevents children from getting too close to their phone, or staring at the screen for too long.
In addition, face detection can detect up to seven facial expressions, including smiling, neutral, angry, disgusted, frightened, sad, and surprised faces. This is great if you want to create apps such as smile-cameras.
You can integrate any of these capabilities as needed. At the same time, face detection supports image and video stream detection, cross-frame face tracking, and multi-face detection. It really is powerful! Now, let's see how to integrate this capability.
Face Detection Development
1. Preparations
You can find detailed information about the preparations you need to make on the HUAWEI Developers-Development Process. Here, we'll just look at the most important procedures.
1.1 Configure the Maven Repository Address in the Project-Level build.gradle File
Code:
buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
1.2 Add Configurations to the File Header
After integrating the SDK, add the following configuration to the file header:
Code:
apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'
1.3 Configure SDK Dependencies in the App-Level build.gradle File
Code:
dependencies{
// Import the base SDK.
implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
// Import the contour and keypoint detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
// Import the facial expression detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
// Import the facial feature detection model package.
implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}
1.4 Add these Statements to the AndroidManifest.xml File so the Machine Learning Model can Update Automatically
Code:
<manifest
...
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value= "face"/>
...
</manifest>
1.5 Apply for Camera Permission
Code:
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
2. Code Development
2.1 Create a Face Analyzer by Using the Default Parameter Configurations
Code:
analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
2.2 Create an MLFrame Object by Using the android.graphics.Bitmap for the Analyzer to Detect Images
Code:
MLFrame frame = MLFrame.fromBitmap(bitmap);
2.3 Call the asyncAnalyseFrame Method to Perform Face Detection
Code:
Task<List<MLFace>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFace>>() {
    @Override
    public void onSuccess(List<MLFace> faces) {
        // Detection success. Obtain the face keypoints.
    }
}).addOnFailureListener(new OnFailureListener() {
    @Override
    public void onFailure(Exception e) {
        // Detection failure.
    }
});
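To make the success callback more concrete, here is a minimal sketch of reading some of the returned information. The accessors used here (getFaceShapeList(), getEmotions(), getSmilingProbability()) are based on common ML Kit face detection samples, so verify them against the SDK version you integrate.
Code:
// Inside onSuccess(List<MLFace> faces): a sketch of reading detection results.
for (MLFace face : faces) {
    // Facial contour and keypoint coordinates, which the beautification algorithms work on.
    List<MLFaceShape> shapes = face.getFaceShapeList();
    Log.d(TAG, "Face shapes returned: " + shapes.size());
    // Expression information, for example the probability that the subject is smiling.
    float smilingProbability = face.getEmotions().getSmilingProbability();
    if (smilingProbability > 0.8f) {
        Log.d(TAG, "The subject is probably smiling.");
    }
}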
2.4 Use the Progress Bar to Process the Face in the Image
Call the magnifyEye and smallFaceMesh methods to implement the eye-enlarging algorithm and face-shaping algorithm.
Code:
private SeekBar.OnSeekBarChangeListener onSeekBarChangeListener = new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        switch (seekBar.getId()) {
            case R.id.seekbareye: // When the progress bar of the eye enlarging changes, call the eye-enlarging algorithm.
                break;
            case R.id.seekbarface: // When the progress bar of the face shaping changes, call the face-shaping algorithm.
                break;
        }
    }
    @Override
    public void onStartTrackingTouch(SeekBar seekBar) {}
    @Override
    public void onStopTrackingTouch(SeekBar seekBar) {}
};
2.5 Release the Analyzer After the Detection is Complete
Code:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    Log.e(TAG, "e=" + e.getMessage());
}
Demo
Now, let's see what it can do. Pretty cool, right?

HUAWEI ML Kit's Automatic Speech Recognition (ASR) Service: Tongue Twisters

Overview
When I try to perform voice commands on my devices, the device will often fail to recognize what I am trying to say, because of my poor pronunciation. For example, sometimes I can't distinguish between syllables, or make the "ch" and "sh" sounds, which has led to some frustrating experiences. I've always envied people who can enunciate well and recite tongue twisters with ease, and have dreamed of the day when that could be me. By chance, I came across the game Tongue Twister, which integrates HUAWEI ML Kit's ASR service, and it has changed my life for the better. Let's take a look at how the game works.
Application Scenarios
There are five levels in Tongue Twister, and as you'd expect, each level contains a tongue twister. The key for passing each level is ML Kit's ASR service. By integrating the service, the game is able to recognize the player's voice with a high degree of accuracy. Players are thus able to pass each level when they demonstrate clear enunciation. The service has proven itself to be highly useful in certain fields, enhancing recognition capabilities for product, movie, and music searches, as well as navigation services.
Now, let's look at what the game looks like in practice.
Piqued your interest? With the ASR service, why not create a tongue twister game of your own? Here's how...
Development Procedures
1. For details about how to set the authentication information for your app, please refer to Notes on Using Cloud Authentication Information.
2. Call an API to create a speech recognizer.
Code:
MLAsrRecognizer mSpeechRecognizer = MLAsrRecognizer.createAsrRecognizer(context);
3. Create a speech recognition result listener callback.
Code:
/**
 * Use the callback to implement the MLAsrListener API and methods in the API.
 */
protected class SpeechRecognitionListener implements MLAsrListener {
    @Override
    public void onStartListening() {
        // The recorder starts to receive speech.
    }

    @Override
    public void onStartingOfSpeech() {
        // The user starts to speak, that is, the speech recognizer detects that the user starts to speak.
    }

    @Override
    public void onVoiceDataReceived(byte[] data, float energy, Bundle bundle) {
        // Return the original PCM stream and audio power to the user.
    }

    @Override
    public void onRecognizingResults(Bundle partialResults) {
        // Receive the recognized text from MLAsrRecognizer.
    }

    @Override
    public void onResults(Bundle results) {
        // Text data of ASR.
    }

    @Override
    public void onError(int error, String errorMessage) {
        // Called when an error occurs, for example, when the network connection is lost during recognition.
    }

    @Override
    public void onState(int state, Bundle params) {
        // Notify the app of status changes.
    }
}
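In onResults, the recognized text can be read from the returned Bundle. The following is a minimal sketch; the RESULTS_RECOGNIZED key is the one used in Huawei's ASR samples, so double-check it against your SDK version.
Code:
// Inside onResults(Bundle results): read the final recognition result.
String recognizedText = results.getString(MLAsrRecognizer.RESULTS_RECOGNIZED);
if (recognizedText != null) {
    // Compare the recognized text with the current tongue twister to decide whether the player passes the level.
    Log.d("SpeechRecognitionListener", "Recognized: " + recognizedText);
}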
4. Bind the new result listener callback to the speech recognizer.
Code:
mSpeechRecognizer.setAsrListener(new SpeechRecognitionListener());
5. Set the recognition parameters and initiate speech recognition.
Code:
// Set parameters and start the audio device.
Intent mSpeechRecognizerIntent = new Intent(MLAsrConstants.ACTION_HMS_ASR_SPEECH);
mSpeechRecognizerIntent
        // Set the language to be recognized to English. If this parameter is not set, English is recognized by default.
        // Examples: "zh-CN": Chinese; "en-US": English; "fr-FR": French; "es-ES": Spanish; "de-DE": German; "it-IT": Italian.
        .putExtra(MLAsrConstants.LANGUAGE, language)
        // Set whether to return the recognition result along with the speech. If you skip this setting, this mode is used by default. Options:
        // MLAsrConstants.FEATURE_WORDFLUX: recognizes and returns text through onRecognizingResults.
        // MLAsrConstants.FEATURE_ALLINONE: after recognition is complete, text is returned through onResults.
        .putExtra(MLAsrConstants.FEATURE, MLAsrConstants.FEATURE_WORDFLUX);
mSpeechRecognizer.startRecognizing(mSpeechRecognizerIntent);
6. Release resources when the recognition ends.
Code:
if (mSpeechRecognizer != null) {
    mSpeechRecognizer.destroy();
    mSpeechRecognizer = null;
}
Maven repository address
Code:
buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
SDK import
Code:
dependencies {
// Automatic speech recognition Long voice SDK.
implementation 'com.huawei.hms:ml-computer-voice-realtimetranscription:2.0.3.300'
// Automatic speech recognition SDK.
implementation 'com.huawei.hms:ml-computer-voice-asr:2.0.3.300'
// Automatic speech recognition plugin.
implementation 'com.huawei.hms:ml-computer-voice-asr-plugin:2.0.3.300'
}
Manifest files
Code:
<manifest
...
<meta-data
    android:name="com.huawei.hms.ml.DEPENDENCY"
    android:value="asr" />
...
</manifest>
Permission
Code:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
Dynamic permission application
Code:
private void requestAudioPermission() {
    final String[] permissions = new String[]{Manifest.permission.RECORD_AUDIO};
    if (!ActivityCompat.shouldShowRequestPermissionRationale(this, Manifest.permission.RECORD_AUDIO)) {
        ActivityCompat.requestPermissions(this, permissions, TongueTwisterActivity.AUDIO_CODE);
        return;
    }
}
Summary
In addition to games, ML Kit's ASR service is also useful in other scenarios, such as shopping apps. The service can recognize a spoken product name or feature and convert it into text to search for the product. For music apps, the service can likewise recognize song and artist names. For navigation, drivers will naturally prefer to speak a destination rather than type it, and having it converted into text through ASR makes for an optimally safe driving experience.
Learn More
For more information, please visit HUAWEI Developers.
For detailed instructions, please visit Development Guide.
You can join the HMS Core developer discussion by going to Reddit.
You can download the demo and sample code on GitHub.
To solve integration problems, please go to Stack Overflow.

Create a New Paradigm for Photo Gallery Management by ML Kit

Overview
"Hey, I just took some pictures of that gorgeous view. Take a look."
"Yes, let me see."
... (a few minutes later)
"Where are they?"
"Wait a minute. There are too many pictures."
Have you experienced this type of frustration before? Finding one specific image in a gallery packed with thousands of images can be a daunting task. Wouldn't it be nice if there was a way to search for images by category, rather than having to browse through your entire album to find what you want?
Our thoughts exactly! That's why we created HUAWEI ML Kit's scene detection service, which empowers your app to build a smart album, bolstered by intelligent image classification, the result of detecting and labeling elements within images. With this service, you'll be able to locate any image in little time, and with zero hassle.
Features
ML Kit's scene detection service is able to classify and annotate images with food, flowers, plants, cats, dogs, kitchens, mountains, and washers, among a multitude of other items, as well as provide for an enhanced user experience based on the detected information.
The service contains the following features:
Multi-scenario detection
Detects 102 scenarios, with more scenarios continually added.
High detection accuracy
Detects a wide range of objects with a high degree of accuracy.
Fast detection response
Responds in milliseconds, and continually optimizes performance.
Simple and efficient integration
Facilitates simple and cost-effective integration, with APIs and SDK packages.
Applicable Scenarios
In addition to creating smart albums, retrieving, and classifying images, the scene detection service can also automatically select corresponding filters and camera parameters to help users take better images, by detecting where the users are located.
Development Practice
1. Preparations
1.1 Configure app information in AppGallery Connect.
Before you start developing an app, configure the app information in AppGallery Connect. For details, please refer to Development Guide.
1.2 Configure the Maven repository address for the HMS Core SDK, and integrate the SDK for the service.
(1) Open the build.gradle file in the root directory of your Android Studio project.
(2) Add the AppGallery Connect plug-in and the Maven repository.
Go to allprojects > repositories and configure the Maven repository address for the HMS Core SDK.
Go to buildscript > repositories and configure the Maven repository address for the HMS Core SDK.
If the agconnect-services.json file has been added to the app, go to buildscript > dependencies and add the AppGallery Connect plug-in configuration.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}
allprojects {
    repositories {
        google()
        jcenter()
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
2. Code Development
Static Image Detection
2.1 Create a scene detection analyzer instance.
Code:
// Method 1: Use default parameter settings.
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
// Method 2: Create a scene detection analyzer instance based on a customized configuration.
MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
        // Set the confidence threshold for scene detection.
        .setConfidence(confidence)
        .create();
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);
2.2 Create an MLFrame object by using the android.graphics.Bitmap. JPG, JPEG, PNG, and BMP images are all supported.
Code:
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
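If the image to be classified is stored on the device, you can decode it into a Bitmap first. A minimal sketch, where imagePath is a hypothetical path to a local JPG/PNG/BMP file:
Code:
// Decode a local image file into a Bitmap and wrap it in an MLFrame.
Bitmap bitmap = BitmapFactory.decodeFile(imagePath); // imagePath is a hypothetical local file path.
if (bitmap != null) {
    MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
    // Pass the frame to analyseFrame() or asyncAnalyseFrame() as shown below.
}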
2.3 Implement scene detection.
Code:
// Method 1: Detect in synchronous mode.
SparseArray<MLSceneDetection> results = analyzer.analyseFrame(frame);

// Method 2: Detect in asynchronous mode.
Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
    public void onSuccess(List<MLSceneDetection> result) {
        // Processing logic for scene detection success.
    }
}).addOnFailureListener(new OnFailureListener() {
    public void onFailure(Exception e) {
        // Processing logic for scene detection failure.
        if (e instanceof MLException) {
            MLException mlException = (MLException) e;
            // Obtain the error code. You can process the error code and customize the messages displayed to users.
            int errorCode = mlException.getErrCode();
            // Obtain the error information. You can quickly locate the fault based on the error code.
            String errorMessage = mlException.getMessage();
        } else {
            // Other exceptions.
        }
    }
});
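To turn the detection results into album labels, you can read the scene name and confidence from each MLSceneDetection object. A minimal sketch, assuming the getResult() and getConfidence() accessors described in the scene detection documentation:
Code:
// Inside onSuccess(List<MLSceneDetection> result): read the scene labels.
for (MLSceneDetection scene : result) {
    String sceneName = scene.getResult();     // For example, "Mountain" or "Food".
    float confidence = scene.getConfidence(); // Confidence of the detected scene.
    Log.d("SceneDetection", "Scene: " + sceneName + ", confidence: " + confidence);
}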
2.4 Stop the analyzer and release the detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
Camera Stream Detection
You can process camera streams, convert them into an MLFrame object, and detect scenarios using the static image detection method.
If the synchronous detection API is called, you can also use the LensEngine class built into the SDK to detect scenarios in camera streams. The following is the sample code:
3.1 Create a scene detection analyzer, which can only be created on the device.
Code:
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
3.2 Create the SceneDetectionAnalyzerTransactor class for processing detection results. This class implements the MLAnalyzer.MLTransactor<T> API and uses the transactResult method in this API to obtain the detection results and implement specific services.
Code:
public class SceneDetectionAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLSceneDetection> {
    @Override
    public void transactResult(MLAnalyzer.Result<MLSceneDetection> results) {
        SparseArray<MLSceneDetection> items = results.getAnalyseList();
        // Determine detection result processing as required. Note that only the detection results are processed.
        // Other detection-related APIs provided by ML Kit cannot be called.
    }

    @Override
    public void destroy() {
        // Callback method used to release resources when the detection ends.
    }
}
3.3 Set the detection result processor to bind the analyzer to the result processor.
Code:
analyzer.setTransactor(new SceneDetectionAnalyzerTransactor());
// Create an instance of the LensEngine class provided by the HMS Core ML SDK to capture dynamic camera streams and pass the streams to the analyzer.
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context, this.analyzer)
.setLensType(LensEngine.BACK_LENS)
.applyDisplayDimension(1440, 1080)
.applyFps(30.0f)
.enableAutomaticFocus(true)
.create();
3.4 Call the run method to start the camera and read camera streams for detection.
Code:
// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
// Exception handling logic.
}
3.5 Stop the analyzer and release the detection resources when the detection ends.
Code:
if (analyzer != null) {
analyzer.stop();
}
if (lensEngine != null) {
lensEngine.release();
}
You'll find that the sky, plants, and mountains in all of your images will be identified in an instant. Pretty exciting stuff, wouldn't you say? Feel free to try it out yourself!
GitHub Source Code
Reference
Official website of Huawei Developers
Development Guide
HMS Core official community on Reddit
Demo and sample code
Discussions on Stack Overflow

Build an Emoji Making App Effortlessly

Emojis are a must-have tool in today's online communications as they help add color to text-based chatting and allow users to better express the emotions behind their words. Since the number of preset emojis is always limited, many apps now allow users to create their own custom emojis to keep things fresh and exciting.
For example, in a social media app, users who do not want to show their faces when making video calls can use an animated character to protect their privacy, with their facial expressions applied to the character; in a live streaming or e-commerce app, virtual streamers with realistic facial expressions are much more likely to attract watchers; in a video or photo shooting app, users can control the facial expressions of an animated character when taking a selfie, and then share the selfie via social media; and in an educational app for kids, a cute animated character with detailed facial expressions will make online classes much more fun and engaging for students.
I myself am developing such a messaging app. When chatting with friends and wanting to express themselves in ways other than words, users of my app can take a photo to create an emoji of themselves, or of an animated character they have selected. The app will then identify users' facial expressions, and apply their facial expressions to the emoji. In this way, users are able to create an endless amount of unique emojis. During the development of my app, I used the capabilities provided by HMS Core AR Engine to track users' facial expressions and convert the facial expressions into parameters, which greatly reduced the development workload. Now I will show you how I managed to do this.
Implementation
AR Engine provides apps with the ability to track and recognize facial expressions in real time, which can then be converted into facial expression parameters and used to accurately control the facial expressions of virtual characters.
Currently, AR Engine provides 64 facial expressions, including eyelid, eyebrow, eyeball, mouth, and tongue movements. It supports 21 eye-related movements, including eyeball movement and opening and closing the eyes; 28 mouth movements, including opening the mouth, puckering, pulling, or licking the lips, and moving the tongue; as well as 5 eyebrow movements, including raising or lowering the eyebrows.
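To give a rough idea of how these parameters are consumed, here is a hedged sketch of reading expression parameters from a tracked face. The getAllTrackables(), getFaceBlendShapes(), and getBlendShapeData() calls are assumptions based on AR Engine demo code, so verify the exact names in the ARFace API reference.
Code:
// A hedged sketch: read expression (blend shape) parameters from tracked faces.
Collection<ARFace> faces = mArSession.getAllTrackables(ARFace.class);
for (ARFace face : faces) {
    if (face.getTrackingState() == ARTrackable.TrackingState.TRACKING) {
        ARFaceBlendShapes blendShapes = face.getFaceBlendShapes();
        // Each entry maps an expression (e.g. eye blink, mouth open) to a value in [0, 1],
        // which you can apply to the matching blend shape of your virtual character.
        float[] expressionValues = blendShapes.getBlendShapeData();
        Log.d("FaceExpression", "Number of expression parameters: " + expressionValues.length);
    }
}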
Demo
Facial expression based emoji
Development Procedure
Requirements on the Development Environment
JDK: 1.8.211 or later
Android Studio: 3.0 or later
minSdkVersion: 26 or later
targetSdkVersion: 29 (recommended)
compileSdkVersion: 29 (recommended)
Gradle version: 6.1.1 or later (recommended)
Make sure that you have downloaded the AR Engine APK from AppGallery and installed it on the device.
Test device: see Software and Hardware Requirements of AR Engine Features
If you need to use multiple HMS Core kits, use the latest versions required for these kits.
Preparations
1. Before getting started, you will need to register as a Huawei developer and complete identity verification on HUAWEI Developers. You can click here to find out the detailed registration and identity verification procedure.
2. Before development, integrate the AR Engine SDK via the Maven repository into your development environment.
3. The procedure for configuring the Maven repository address in Android Studio varies for Gradle plugin earlier than 7.0, Gradle plugin 7.0, and Gradle plugin 7.1 or later. You need to configure it according to the specific Gradle plugin version.
4. Take Gradle plugin 7.0 as an example:
Open the project-level build.gradle file in your Android Studio project and configure the Maven repository address.
Go to buildscript > repositories and configure the Maven repository address for the SDK.
Code:
buildscript {
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}
Open the project-level settings.gradle file and configure the Maven repository address for the HMS Core SDK.
Code:
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        jcenter()
        maven { url "https://developer.huawei.com/repo/" }
    }
}
5. Add the following build dependency in the dependencies block.
Code:
dependencies {
    implementation 'com.huawei.hms:arenginesdk:{version}'
}
App Development
1. Check whether AR Engine has been installed on the current device. If yes, your app can run properly. If not, you need to prompt the user to install it, for example, by redirecting the user to AppGallery. The sample code is as follows:
Code:
boolean isInstallArEngineApk = AREnginesApk.isAREngineApkReady(this);
if (!isInstallArEngineApk) {
    // ConnectAppMarketActivity.class is the activity for redirecting users to AppGallery.
    startActivity(new Intent(this, com.huawei.arengine.demos.common.ConnectAppMarketActivity.class));
    isRemindInstall = true;
}
2. Create an AR scene. AR Engine supports five scenes, including motion tracking (ARWorldTrackingConfig), face tracking (ARFaceTrackingConfig), hand recognition (ARHandTrackingConfig), human body tracking (ARBodyTrackingConfig), and image recognition (ARImageTrackingConfig).
The following takes creating a face tracking scene by calling ARFaceTrackingConfig as an example.
Code:
// Create an ARSession object.
mArSession = new ARSession(this);
// Select a specific Config to initialize the ARSession object based on the application scenario.
ARFaceTrackingConfig config = new ARFaceTrackingConfig(mArSession);
Set scene parameters using the config.setXXX method.
Code:
// Set the camera opening mode, which can be external or internal. The external mode can only be used in ARFace. Therefore, you are advised to use the internal mode.
mArConfig.setImageInputMode(ARConfigBase.ImageInputMode.EXTERNAL_INPUT_ALL);
3. Set the AR scene parameters for face tracking and start face tracking.
Code:
mArSession.configure(mArConfig);
mArSession.resume();
4. Initialize the FaceGeometryDisplay class to obtain the facial geometric data and render the data on the screen.
Code:
public class FaceGeometryDisplay {
// Initialize the OpenGL ES rendering related to face geometry, including creating the shader program.
void init(Context context) {...
}
}
5. Initialize the onDrawFrame method in the FaceGeometryDisplay class, and call face.getFaceGeometry() to obtain the face mesh.
Code:
public void onDrawFrame(ARCamera camera, ARFace face) {
    ARFaceGeometry faceGeometry = face.getFaceGeometry();
    updateFaceGeometryData(faceGeometry);
    updateModelViewProjectionData(camera, face);
    drawFaceGeometry();
    faceGeometry.release();
}
6. Initialize updateFaceGeometryData() in the FaceGeometryDisplay class.
Pass the face mesh data for configuration and set the facial expression parameters using OpenGL ES.
Code:
private void updateFaceGeometryData(ARFaceGeometry faceGeometry) {
    FloatBuffer faceVertices = faceGeometry.getVertices();
    // Obtain an array consisting of face mesh texture coordinates, which is used together with the vertex data returned by getVertices() during rendering.
    FloatBuffer textureCoordinates = faceGeometry.getTextureCoordinates();
}
7. Initialize the FaceRenderManager class to manage facial data rendering.
Code:
public class FaceRenderManager implements GLSurfaceView.Renderer {
    public FaceRenderManager(Context context, Activity activity) {
        mContext = context;
        mActivity = activity;
    }

    // Set ARSession to obtain the latest data.
    public void setArSession(ARSession arSession) {
        if (arSession == null) {
            LogUtil.error(TAG, "Set session error, arSession is null!");
            return;
        }
        mArSession = arSession;
    }

    // Set ARConfigBase to obtain the configuration mode.
    public void setArConfigBase(ARConfigBase arConfig) {
        if (arConfig == null) {
            LogUtil.error(TAG, "setArFaceTrackingConfig error, arConfig is null.");
            return;
        }
        mArConfigBase = arConfig;
    }

    // Set the camera opening mode.
    public void setOpenCameraOutsideFlag(boolean isOpenCameraOutsideFlag) {
        isOpenCameraOutside = isOpenCameraOutsideFlag;
    }

    ...

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        mFaceGeometryDisplay.init(mContext);
    }
}
8. Implement the face tracking effect by calling methods like setArSession and setArConfigBase of FaceRenderManager in FaceActivity.
Code:
public class FaceActivity extends BaseActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mFaceRenderManager = new FaceRenderManager(this, this);
        mFaceRenderManager.setDisplayRotationManage(mDisplayRotationManager);
        mFaceRenderManager.setTextView(mTextView);
        glSurfaceView.setRenderer(mFaceRenderManager);
        glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
    }
}
Conclusion
Emojis allow users to express their moods and excitement in a way words can't. Instead of providing users with a selection of the same old boring preset emojis that have been used a million times, you can now make your app more fun by allowing users to create emojis themselves! Users can easily create an emoji with their own smiles, simply by facing the camera, selecting an animated character they love, and smiling. With such an ability to customize emojis, users will be able to express their feelings in a more personalized and interesting manner. If you have any interest in developing such an app, AR Engine is a great choice for you. With accurate facial tracking capabilities, it is able to identify users' facial expressions in real time, convert the facial expressions into parameters, and then apply them to virtual characters. Integrating the capability can help you considerably streamline your app development process, leaving you with more time to focus on how to provide more interesting features to users and improve your app's user experience.
Reference
AR Engine Sample Code
Face Tracking Capability
